Invented by Bibhrajit HALDER, SafeAI Inc

The market for techniques to consider uncertainty when using artificial intelligence models is growing rapidly as businesses and organizations recognize the importance of understanding and managing the risks associated with AI. Artificial intelligence has become an integral part of many industries, from healthcare and finance to manufacturing and transportation. However, as AI models become more complex and powerful, there is a growing need to address the issue of uncertainty.

Uncertainty refers to the lack of complete knowledge or predictability in AI models, which can lead to incorrect or unreliable results. One of the main challenges in AI is that models are often trained on historical data, which may not accurately represent future scenarios. This can result in models making incorrect predictions or decisions when faced with new or unfamiliar situations. Uncertainty also arises from the inherent limitations of AI algorithms, such as their inability to fully understand context or interpret ambiguous information.

To address these challenges, researchers and developers are working on various techniques to consider uncertainty in AI models. One such technique is probabilistic modeling, which assigns probabilities to different outcomes or predictions. By incorporating uncertainty into the model, probabilistic modeling allows for a more nuanced understanding of the reliability of AI predictions. Another technique gaining traction is ensemble modeling, in which multiple AI models are combined to make predictions. By aggregating the outputs of different models, ensemble modeling can provide a more robust and reliable prediction while also capturing the uncertainty inherent in each individual model (see the brief sketch at the end of this overview). Additionally, researchers are exploring techniques such as Bayesian inference, which uses prior knowledge and data to update and refine predictions as new information becomes available. This iterative approach allows AI models to continuously learn and adapt, reducing uncertainty over time.

The market for techniques to consider uncertainty in AI models is driven by the increasing demand for reliable and trustworthy AI solutions. Businesses and organizations are realizing that the potential risks associated with AI, such as biased or incorrect predictions, can have significant consequences. They are therefore actively seeking methods to quantify and manage uncertainty to ensure the responsible and ethical use of AI. Startups and established companies alike are capitalizing on this market opportunity by developing innovative solutions to address uncertainty in AI models, ranging from software platforms that integrate uncertainty techniques into existing AI frameworks to consulting services that help organizations implement and interpret uncertainty-aware AI models.

Furthermore, regulatory bodies and industry standards are also recognizing the importance of uncertainty in AI. For instance, the European Union's General Data Protection Regulation (GDPR) emphasizes the need for transparency and accountability in AI systems, including the ability to explain the logic behind AI decisions and the level of certainty associated with them.

In conclusion, the market for techniques to consider uncertainty when using artificial intelligence models is expanding rapidly. As AI becomes more prevalent in various industries, understanding and managing uncertainty is crucial for ensuring reliable and trustworthy AI solutions.
The development of techniques such as probabilistic modeling, ensemble modeling, and Bayesian inference is driving innovation in this space. With increasing demand from businesses and organizations, as well as regulatory requirements, the market for uncertainty-aware AI solutions is poised for significant growth in the coming years.
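
To make the ensemble idea mentioned above concrete, here is a minimal Python sketch. The ScaledModel class is a hypothetical stand-in for real trained models; the point is only that averaging member predictions gives a point estimate while their spread gives a crude uncertainty measure.

```python
import numpy as np

# Hypothetical ensemble members; any objects exposing a predict() method work.
class ScaledModel:
    def __init__(self, slope):
        self.slope = slope

    def predict(self, x):
        return self.slope * x

def ensemble_predict(models, x):
    """Aggregate member predictions; the spread across members is a
    simple proxy for the ensemble's predictive uncertainty."""
    preds = np.array([m.predict(x) for m in models])
    return preds.mean(axis=0), preds.std(axis=0)

members = [ScaledModel(s) for s in (0.9, 1.0, 1.1)]
mean, spread = ensemble_predict(members, np.array([2.0]))
print(mean, spread)  # point prediction plus a crude uncertainty estimate
```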

The SafeAI Inc invention works as follows

An infrastructure for improving the safety of autonomous systems is provided. The autonomous vehicle management system (AVMS) controls autonomous functions and operations carried out by a machine or vehicle so that they are performed safely. The AVMS utilizes various artificial intelligence-based techniques (e.g., neural networks, reinforcement learning (RL) techniques) and uses AI models as part of its processing. The AVMS compares the statistical similarity (or dissimilarity) of an inferring data point to the distribution of the training dataset and generates a score (a confidence score) that indicates how similar or different the inferring data point is from the training dataset. This confidence score is used by the AVMS to determine how to use the AI model's prediction.

Background for Techniques to consider uncertainty when using artificial intelligence models

Recent years have seen a significant rise in the use and adoption of autonomous driving technology (e.g., autonomous vehicles). The large-scale adoption and application of artificial intelligence (AI) based technologies in the autonomous driving domain has played a part in this rise. AI-based applications for autonomous driving perform tasks such as identifying objects in an autonomous vehicle's environment, making automatic decisions that affect the vehicle's motion, and so on. Current autonomous driving solutions that use AI systems, however, lack the tools needed to ensure functional safety, and this is a major obstacle to the adoption and use of these technologies.

The present disclosure relates to autonomous vehicles and, more specifically, to artificial intelligence-based and machine learning techniques used by an autonomous vehicle management system to control operations of an autonomous vehicle in a safe way. Described herein are various inventive embodiments, including methods, systems, and non-transitory computer-readable storage media storing programs, code, or instructions executable by one or more processors.

An infrastructure that increases the safety of autonomous systems such as autonomous vehicles and autonomous machines is provided. The autonomous vehicle management system, also referred to as a controller system, is configured to automatically perform one or more autonomous functions of a vehicle or machine in a manner that ensures the autonomous operations are carried out safely. Examples of autonomous operations include, without limitation: autonomous driving along a route, scooping or dumping operations, moving objects or materials (e.g., moving dirt from one place to another), lifting material, driving, rolling or spreading dirt, and excavating or transporting objects or materials from one point to another.

In certain embodiments, an autonomous vehicle management system receives sensor data from one or more sensors associated with the vehicle. Based on this sensor data, the autonomous vehicle management system generates and keeps updated an internal map containing information that represents the state of the autonomous vehicle's environment (e.g., objects detected). The internal map, together with other inputs such as the goal (e.g., change lanes, turn right/left, perform a special operation like digging or scooping) and safety considerations, is used to generate a plan of action for the autonomous vehicle. The plan of action may include a sequence of planned actions that the autonomous vehicle is to perform in order to reach the goal in a safe manner. The autonomous vehicle management system can then control one or several vehicle systems to execute the actions specified in the plan.
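
As a rough illustration of the state involved, the following Python sketch shows hypothetical data structures for an internal map and a plan of action. The names and fields are invented for illustration and are not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DetectedObject:
    label: str                     # e.g., "pedestrian", "haul truck"
    position: Tuple[float, float]  # (x, y) in the vehicle frame, metres

@dataclass
class InternalMap:
    """State of the vehicle's environment; refreshed as sensor data arrives."""
    objects: List[DetectedObject] = field(default_factory=list)

@dataclass
class PlanOfAction:
    goal: str                                         # e.g., "change lanes", "scoop"
    actions: List[str] = field(default_factory=list)  # ordered planned actions
```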

The autonomous vehicle management system can use a variety of artificial intelligence-based techniques, such as neural networks, reinforcement learning (RL) techniques, and others, and as part of its processing it uses models. For example, the autonomous vehicle management system may use a Convolutional Neural Network (CNN) to identify objects within the autonomous vehicle's environment using sensor data captured by the vehicle (e.g., images taken by vehicle-mounted cameras). The autonomous vehicle management system can also use RL techniques to determine the actions to include in a plan of action that will allow the autonomous vehicle to reach a goal in a safe manner.

The autonomous vehicle management system uses different techniques to enhance the safety of autonomous operations. For example, it may use techniques that improve overall safety when AI models are used in its decision-making process. Building and using an AI-based model (such as one based on supervised learning) involves two phases: a training phase, in which the AI model is built and trained using a training dataset, and an inference phase, in which the trained AI model is used to make predictions or inferences based on real-time data. AI models can sometimes make unpredictable errors during the inference phase because the data point on which the model makes a prediction at inference time differs from the dataset on which the model was trained. The autonomous vehicle management system accounts for this problem by performing additional processing: it compares the statistical similarity (or dissimilarity) of an inferring data point to the distribution of the training dataset and generates a score, also known as a model confidence score, that indicates how similar or different the inferring data point is from the training dataset. A high-similarity score can be generated when the inferring data point is similar to the training dataset; alternatively, a low-similarity score can be generated when the inferring data point is dissimilar to it. The confidence score is a sanity check that measures how much to trust the AI model's prediction for an inferring data point, and the autonomous vehicle management system uses it to decide how to use that prediction. In cases where the AI model's prediction is based on an inferring data point with a low score, indicating a large measure of dissimilarity between the inferring data point and the training data, the prediction may be overridden or ignored by the autonomous vehicle management system. This increases the safety of autonomous operations performed by autonomous vehicles and is not done by conventional AI systems.

In certain embodiments, an autonomous vehicle management system (or controller system) receives an inferring data point from a sensor associated with a vehicle. The controller system uses an AI model, which has been trained with training data, to make a prediction based on the inferring data point. The controller system also calculates a score for the inferring data point that indicates the degree of similarity between the inferring data point and the training data. Based on the score, the controller system determines whether the prediction will be used to control an autonomous operation of the vehicle, for example by deciding whether the prediction is used to identify the action that the vehicle is to perform. The AI model may be a model built and trained using supervised learning techniques, such as a neural network.
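
A minimal sketch of this gating logic follows, assuming a hypothetical model object and a score_fn scoring function in which a higher score means the inferring point looks more like the training data:

```python
def gated_prediction(model, score_fn, inferring_point, threshold=0.5):
    """Return the model's prediction only when the inferring data point
    scores as sufficiently similar to the training distribution.

    score_fn maps a data point to a confidence score; a score below
    `threshold` means the point is too dissimilar to trust, so the
    prediction is withheld and the caller falls back to safe behavior.
    """
    prediction = model.predict(inferring_point)
    confidence = score_fn(inferring_point)
    if confidence < threshold:
        return None, confidence  # prediction ignored / overridden
    return prediction, confidence
```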

The controller system can use different techniques to calculate the score. In some embodiments, a distance measurement technique is used: the score is computed by measuring the distance between the inferring data point and the distribution of the training data. Distance measurement techniques include, for example, the Mahalanobis distance, the generalized Mahalanobis distance, and cosine similarity. In some embodiments, the distance is calculated by plotting a number of data points from the training data in a vector space, plotting a point corresponding to the inferring data point in the same space, and then measuring the distance between that point and the distribution of training data points.
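
As one concrete possibility, a Mahalanobis-distance-based score could be computed as in the Python sketch below. Squashing the distance into a (0, 1] similarity score via 1/(1+d) is an arbitrary illustrative choice, not something specified by the patent.

```python
import numpy as np

def mahalanobis_similarity(x, train):
    """Mahalanobis distance of inferring point `x` from the training
    distribution, squashed into a similarity-style score in (0, 1]
    (higher = more similar)."""
    mu = train.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(train, rowvar=False))
    diff = x - mu
    distance = float(np.sqrt(diff @ cov_inv @ diff))
    return 1.0 / (1.0 + distance)

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 3))           # stand-in training features
print(mahalanobis_similarity(np.zeros(3), train))      # near the mean: high score
print(mahalanobis_similarity(np.full(3, 6.0), train))  # far away: low score
```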

The inferring data point received by the controller system may be sensor input from a sensor associated with the vehicle (e.g., an on-board sensor or a remote sensor). Examples of such sensors include, without limitation, a radar sensor, a LIDAR sensor, a camera, a Global Positioning System (GPS) sensor, an Inertial Measurement Unit (IMU) sensor, a Vehicle-to-everything (V2X) sensor, an audio sensor, a proximity sensor, and a SONAR sensor.

The controller system decides, based on the score, whether a prediction made using the AI model will be used to control the autonomous operation. In some embodiments, the controller system uses the score to determine whether the degree of similarity between the inferring data point and the training data meets a threshold level of similarity. If the degree of similarity between the inferring data point and the training data is below the threshold, the controller system can decide not to use the prediction for making any decisions regarding the autonomous operation of the vehicle.

In situations where the inferring data point and the training data are dissimilar (e.g., the score represents a degree of similarity below the threshold), the inferring data point may be added to the training data to create updated training data, and the AI model can then be retrained using the updated training data.
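
A sketch of that feedback loop appears below. It assumes labels for the dissimilar points eventually become available (e.g., via human review); the function names are invented for illustration.

```python
import numpy as np

def fold_in_if_dissimilar(x, label, train_X, train_y, score, threshold=0.5):
    """If the inferring point scored below the similarity threshold,
    append it (with its eventual label) to the training data so the
    model can later be retrained on the updated dataset."""
    if score < threshold:
        train_X = np.vstack([train_X, x])
        train_y = np.append(train_y, label)
    return train_X, train_y
```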

These and other features and embodiments will be better understood with reference to the following specification, claims, and accompanying drawings.

In the following description, specific details are set forth for purposes of explanation in order to provide a thorough understanding of certain inventive embodiments. It will be apparent, however, that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs.

Reference throughout this specification to "one embodiment," "an embodiment," or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The phrases "in one embodiment," "in an embodiment," and similar language throughout this specification do not necessarily all refer to the same embodiment.

The present disclosure relates to autonomous vehicles and, more specifically, to artificial intelligence-based and machine learning techniques used by an autonomous vehicle management system to control operations of an autonomous vehicle in a safe way.

The autonomous vehicle management system described in this disclosure uses different techniques to increase the safety of autonomous operations. For example, it can dynamically control the behavior of the sensors associated with the vehicle that provide the sensor data the system processes. For a given sensor, the autonomous vehicle management system can dynamically change and control what sensor data is captured by the sensor and/or communicated from the sensor to the autonomous vehicle management system (e.g., granularity/resolution of the data, field of view of the data, partial/detailed data, how much data is communicated, zoom associated with the data, and the like), when the data is captured by the sensor and/or communicated to the autonomous vehicle management system (e.g., on demand, according to a schedule), and how the data is captured and/or communicated (e.g., communication format, communication protocol, rate of data communication to the autonomous vehicle management system). Since the autonomous vehicle management system builds its internal map based on sensor data received from the sensors, being able to dynamically control the behavior of those sensors means the information used to build and maintain the internal map can also be dynamically controlled.
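
To illustrate the what/when/how dimensions of this control, here is a hypothetical per-sensor directive structure; the field names and defaults are invented for illustration, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class SensorDirective:
    """Hypothetical per-sensor instruction covering what data is captured,
    when it is captured/communicated, and how it is communicated."""
    resolution: str = "full"          # what: granularity of captured data
    field_of_view_deg: float = 120.0  # what: requested angular coverage
    schedule: str = "continuous"      # when: "continuous" or "on_demand"
    rate_hz: float = 10.0             # how: data rate back to the system
    protocol: str = "ethernet"        # how: communication channel/format
```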

As another example, the autonomous vehicle management system can simulate and evaluate various what-if scenarios as part of its decision-making process. These what-if scenarios project different behavioral predictions onto the internal map of the autonomous vehicle and can be used to determine the safest sequence of actions the autonomous vehicle should take to achieve a specific goal. For instance, the autonomous vehicle management system can run different what-if scenarios to determine the best way to perform a turn, with each what-if simulation modeling a different behavior pattern (e.g., varying speeds, paths, pedestrians, etc.). The autonomous vehicle management system then determines the safest course of action for the autonomous vehicle based on these simulations.
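
A compact sketch of the scenario-selection step follows, with hypothetical simulate and risk callables standing in for the system's simulation and risk-evaluation machinery:

```python
def safest_plan(candidate_plans, simulate, risk):
    """Project each candidate plan through a what-if simulation and keep
    the plan whose simulated outcome carries the lowest risk.

    simulate(plan) -> projected outcome on the internal map (assumed)
    risk(outcome)  -> numeric danger score, lower is safer (assumed)
    """
    return min(candidate_plans, key=lambda plan: risk(simulate(plan)))
```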

As yet another example of a safety improvement, the autonomous vehicle management system can use techniques that improve overall safety when AI models are used in its decision-making process. As described above, building and using an AI-based model (such as one based on supervised learning) involves a training phase, in which the model is built and trained using a training dataset, and an inference phase, in which the trained model makes predictions or inferences based on real-time data; the model can make unpredictable errors at inference time when the data point it is asked to predict on differs from the data on which it was trained. The autonomous vehicle management system accounts for this by comparing the statistical similarity (or dissimilarity) of an inferring data point to the distribution of the training dataset and generating a score, also referred to as a model confidence score, indicating how similar or different the inferring data point is from the training dataset: a high score when the point is similar to the training data, a low score when it is dissimilar. This confidence score acts as a sanity check on how much to trust the AI model's prediction for the inferring data point, and the autonomous vehicle management system uses it to decide how to use that prediction. Where a prediction is based on an inferring data point with a low score, indicating a large measure of dissimilarity from the training data, the prediction may be overridden or ignored. This increases the safety of autonomous operations performed by autonomous vehicles and is not done by conventional AI systems.
