Invented by Matthew E. Phillips, Dylan T. Bergstedt, Nandan Thor, Jaehoon Choe, Michael J. Daily, GM Global Technology Operations LLC

The development of autonomous driving technology has been one of the most significant advancements in the automotive industry in recent years. With the increasing demand for autonomous vehicles, there has been a growing need for methods and apparatus for scenario creation and parametric sweeps in the development and evaluation of autonomous driving. Scenario creation is the process of creating a virtual environment that simulates real-world driving conditions, which allows developers to test their autonomous driving technology in a safe and controlled environment before deploying it on the road. Parametric sweeps, on the other hand, involve testing the technology under different conditions to identify its strengths and weaknesses.

The market for methods and apparatus for scenario creation and parametric sweeps in the development and evaluation of autonomous driving is expected to grow significantly in the coming years. According to a report by MarketsandMarkets, the market for simulation software for autonomous vehicles is expected to reach $1.5 billion by 2025, growing at a CAGR of 15.9% from 2020 to 2025. The increasing demand for autonomous vehicles is one of the major factors driving this growth. With the rise in traffic congestion and accidents caused by human error, autonomous vehicles are seen as a solution to these problems, which has led to increased investment in autonomous driving technology by automotive manufacturers and technology companies.

The development of autonomous driving technology requires a significant amount of testing and validation, and this is where scenario creation and parametric sweeps come in. By creating virtual environments that simulate real-world driving conditions, developers can test their technology in a safe and controlled environment, identify any issues, and make improvements before deploying the technology on the road.

Several companies offer methods and apparatus for scenario creation and parametric sweeps in the development and evaluation of autonomous driving. Key players in this market include ANSYS, Siemens, MathWorks, and dSPACE. These companies offer simulation software and hardware that allow developers to create virtual environments and test their technology under different conditions.

In conclusion, the market for methods and apparatus for scenario creation and parametric sweeps in the development and evaluation of autonomous driving is expected to grow significantly in the coming years. With the increasing demand for autonomous vehicles, there is a growing need for testing and validation of autonomous driving technology. Scenario creation and parametric sweeps allow developers to test their technology in a safe and controlled environment, which is essential for the development of reliable and safe autonomous driving technology.

The GM Global Technology Operations LLC invention works as follows

The present application relates in general to methods and devices for evaluating driving performance under various driving scenarios and conditions. More specifically, the application teaches a test method and apparatus that allow a user to repeatedly test a driving scenario while changing a parameter, such as the fog level, in order to evaluate the driving system's performance under changing conditions.

Background for "Methods and apparatus for scenario creation and parametric sweeps in the development and evaluation of autonomous driving systems"

The present application relates generally to vehicle controls and autonomous vehicles. More specifically, the application teaches a method for evaluating, quantifying, and measuring the complexity of situations, events, and scenarios created within simulations as a way to assess and train a cognitive driving model.

In general, an autonomous vehicle is one that can monitor external information using vehicle sensors, recognize road conditions in response to that information, and manipulate the vehicle accordingly. The software for autonomous vehicles is evaluated, tested, and improved by putting it through various scenarios, which allows developers to evaluate the software's performance and determine its frequency of failure and success. In order to train and analyze new systems, it is important to expose vehicle software to various challenging situations. It is also important to verify and validate the system in a simulation environment to make sure that the system, when deployed, will not fail in complex situations. The system can also be used to test other autonomous control systems, such as automated braking systems, provided that all the relevant parameters are correctly simulated within the vehicle. It is important to simulate as many driving conditions and scenarios as possible to identify any system weaknesses and improve the system.

The above information is disclosed only to enhance understanding of the background of the invention. It may therefore contain information that does not constitute prior art already known in this country to a person of ordinary skill in the art.

Embodiments in accordance with the present disclosure offer a number of advantages. For example, embodiments of the present disclosure can be used to test autonomous vehicle software and subsystems. The system can be used to test any control system software and is not restricted to autonomous vehicles.

The present invention provides an apparatus that includes a sensor interface for generating sensor data coupled to a vehicle control system; an interface for receiving control data from the control system; a memory for storing a scenario, where the scenario is associated with two parametric variations; and a simulation tool for simulating a first driving scenario in response to the scenario, the first parametric variation, and the control data, and for assigning a first performance metric based on the simulation.
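
As an illustration only, a minimal Python sketch of how a stored scenario and its parametric variations might be represented; the class and field names here are hypothetical and are not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ParametricVariation:
    """One setting of an environmental parameter, e.g. fog density or traffic count."""
    name: str
    value: float

@dataclass
class Scenario:
    """A stored driving scenario together with the parametric variations to sweep over."""
    description: str
    variations: List[ParametricVariation] = field(default_factory=list)

# Example: a left-turn scenario stored with two fog-level variations.
left_turn = Scenario(
    description="unprotected left turn at a T-junction",
    variations=[ParametricVariation("fog_density", 0.1),
                ParametricVariation("fog_density", 0.8)],
)
```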

According to another aspect of the invention, a method comprises receiving a driving scenario, a first parameter variation, and a second parameter variation; simulating the driving scenario using the first parameter variation; evaluating driving performance in response to the driving scenario, the first parameter variation, and first control data; and assigning a first performance metric in response to that driving performance. The driving scenario is then simulated using the second parameter variation, driving performance is evaluated in response to the driving scenario, the second parameter variation, and second control data, and a second performance metric is assigned.
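
A hedged sketch of that sweep in Python, assuming stand-in simulate() and evaluate() helpers in place of the simulation tool and the performance evaluation (these names and the dictionary format are illustrative assumptions, not the patented implementation):

```python
from typing import Callable, Dict, List

def parametric_sweep(scenario: Dict,
                     variations: List[Dict],
                     simulate: Callable[[Dict, Dict], Dict],
                     evaluate: Callable[[Dict], float]) -> List[float]:
    """Simulate the scenario once per parameter variation and assign a
    performance metric to each run."""
    metrics = []
    for variation in variations:
        control_data = simulate(scenario, variation)  # run the driving simulation
        metrics.append(evaluate(control_data))        # grade the resulting behavior
    return metrics

# Stand-in simulate/evaluate functions, just to show the flow of data.
scores = parametric_sweep(
    scenario={"name": "left_turn"},
    variations=[{"fog_density": 0.1}, {"fog_density": 0.8}],
    simulate=lambda s, v: {"lane_offset": 0.2 * v["fog_density"]},
    evaluate=lambda data: 1.0 - data["lane_offset"],
)
print(scores)  # e.g. [0.98, 0.84]
```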

The above advantages, as well as other advantages and features, will become apparent from the following detailed description of preferred embodiments when taken in conjunction with the accompanying drawings.

The following detailed description is merely exemplary in nature and is not intended to limit the disclosure or its application or uses. There is no intention to be bound by the preceding background or the following detailed description. The algorithms, software, and systems of the invention are particularly useful for vehicles, although, as those skilled in the art will appreciate, the invention can have other uses.

Driving complexity is usually measured relative to certain scenarios, rules of the road, and driving goals. In a left-turn scenario, for example, timing the decision to make the turn at the T-junction can be a crucial moment, because many factors contribute to complexity and ultimately influence behaviors and outcomes. In another example, a pedestrian or a driver in or near a stopped vehicle could wave traffic past if the vehicle is blocking part of the road. In order to successfully navigate around the stopped vehicle and the oncoming traffic, the driver must observe, assess, and judge the potential dangers and complex factors based on the other drivers' behavior while negotiating this complicated situation.

In determining and quantifying driving complexity, it is important to focus on the development of AI-based or deep-learning-based autonomous driving systems, on parametric variations, and on quantification in the scenarios with the worst performance. Theoretically, complexity can be measured in different ways. Researchers can quantify driving performance using certain behavioral measures, such as the distance between a vehicle and the center of a lane or the distance from other traffic. A human's driving performance can also be measured using behavioral and neurophysiological metrics that capture engagement, performance, and factors that contribute to poor performance. Behavioral measures include decision-making time, behaviors, perceptual discrimination, and time spent on task.

Human error and procedural errors are the main causes of driving failures. Neural measures using electroencephalograms and other non-invasive brain imaging techniques include cognitive workload, engagement, and the cognitive state of the operator (such as fatigue or spatial attention) during the subtask processes. Driving control inputs are also used to evaluate performance against a target condition or an ideal decision or path (e.g., motor tracking errors). In addition to prior experience, the driver's access to knowledge and experience is also a critical factor in decision-making performance. In designing scenarios, the decision processes of drivers in highly complicated situations will be emphasized. This includes the complexity of the scenarios as well as the grading metrics for both the human-in-the-loop (HIL) and autonomous driving examples. The initial goal is to train the cognitive models using the "best" examples.

In research on HIL driver behavior, complexity measures are often used to assess performance and to quantify traffic situations. These measures depend on environmental factors and metrics such as traffic density and driver-agent behavior, as well as on occupancy and mean-speed ground truths generated by traffic control cameras and on the overall configuration of roads and traffic. To train the cognitive system, these variables need to be automated in order to create scenarios that can generate accurate semantic data and novel behaviors. The current system uses a sweep of parameters to create scenarios of varying difficulty and to provide rapid iterations on scenarios that would be impossible to replicate in a real-world driving context.

A "scenario" contains an "episode," and an "episode" is a discrete group of "events." The episode is the highest level at which the cognitive system is organized and is the basic memory unit within the cognitive system. It defines sequences of continuous phenomena that describe a particular vehicular scenario, for example a left turn at an intersection with heavy traffic. Each episode is made up of a series of events. These are stereotyped, specific occurrences that may be repeated across several episodes. In the example above, some events comprising the episode can include items such as the velocity of cross traffic, the available gaps within the traffic, the trajectories of other vehicles and pedestrians, or the current lane position of the self-vehicle. These discretized phenomena may not be unique to the circumstances surrounding the left turn, but could also be present in another episode; for example, pedestrians might be present in a scenario describing parking.

Percepts are observations made from the environment using data collected by sensors both internal and external to the vehicle. These percepts are collected as "tokens." The data is streamed in real time from the sensor systems and processed to create events. Percepts contain critical information about the environment, including lane positions, turning lanes, and environmental conditions that are integral to driving. Other properties are also integral, including object definitions, allocentric speed and heading, and aspects of the self-vehicle such as egocentric heading. Tokens are streamed percepts that are assembled into discrete scenario states called events. A series of observed events is then learned through real-world driving or simulation to create end-to-end scenarios called episodes, which are stored as the major units of memory.
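
As a rough illustration of the token-to-event-to-episode hierarchy described above, here is a minimal Python sketch; the class names and the naive time-window grouping rule are assumptions made for the example (the text refers instead to grammar algorithms and hierarchical grouping):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Token:
    """One streamed percept sample, e.g. an agent's position/velocity at one time step."""
    time: float
    kind: str      # e.g. "agent_state", "lane_marker", "weather"
    payload: dict

@dataclass
class Event:
    """A stereotyped occurrence assembled from tokens, e.g. a gap opening in cross traffic."""
    label: str
    tokens: List[Token] = field(default_factory=list)

@dataclass
class Episode:
    """A discrete group of events describing one end-to-end scenario, e.g. a left turn."""
    name: str
    events: List[Event] = field(default_factory=list)

def group_tokens(tokens: List[Token], window: float = 1.0) -> List[Event]:
    """Naive temporal grouping: tokens within the same time window form one event."""
    events: List[Event] = []
    bucket: List[Token] = []
    start = None
    for tok in sorted(tokens, key=lambda t: t.time):
        if start is None or tok.time - start <= window:
            bucket.append(tok)
            start = bucket[0].time
        else:
            events.append(Event(label=f"event@{start:.1f}s", tokens=bucket))
            bucket, start = [tok], tok.time
    if bucket:
        events.append(Event(label=f"event@{start:.1f}s", tokens=bucket))
    return events

# Example: three samples become two events grouped into a left-turn episode.
samples = [Token(0.0, "agent_state", {"id": 1}), Token(0.4, "agent_state", {"id": 2}),
           Token(3.0, "lane_marker", {"lane": 2})]
episode = Episode(name="left_turn", events=group_tokens(samples))
print(len(episode.events))  # 2
```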

It is important to collect as much data as possible, which can be done more easily through the passive collection of real-world driving scenarios, because the cognitive processor requires relevant and practical experience to better learn how to drive. Cognitive processors, unlike deterministic rule-based systems, must be exposed to as much real-world data as possible, including subtle variations of a scenario they may need to evaluate in a deployed setting. In-situ exposure, however, is time-consuming, resource-intensive, and inconsistent: desired scenarios must either be encountered by chance over an unknown number of iterations of human-assisted driving or painstakingly reproduced on a closed circuit. Realistic simulations make it possible to store episodes faster than real time and allow rapid iteration over various world states, including road and lane configurations, traffic scenarios, and self-vehicle behaviors. This creates a richer episodic database and a larger library of scenarios that the cognitive system can use to learn and generate semantic relations, which is critical for non-rule-based, generative agency.

During simulation, tokens can be generated in real time by streaming percepts over an interface. Data on the vehicle and environment are collected at each simulation step and streamed to an output socket, where they are collected by the cognitive model. The cognitive model then packages the collected token data into events. Percepts can be collected from the vehicle's "self port," with data tokenized per vehicle and used to create an allocentric array of position and velocity for each agent in the scenario. The cognitive model reconstructs environmental states, such as road configurations, using data from lane-marker sensors. These sensors define the edges of valid lanes and provide information about intersections and thoroughfares that is learned by the cognitive system as a component of episodes. Devices that do not provide ground truth can be tokenized per device; this "sensor-world" data will replace the self-port data and will be subjected to realistic perturbations in the perceptual stream, such as sensor obstruction, malfunction, or signal attenuation caused by environmental factors like rain or fog. Simulated scenarios will first facilitate the delineation of specific events and episodes, followed by the rapid collection of events and their automation through parameter sweeps of simulation variables. The cognitive system will define the temporal edges of events by using grammar algorithms and hierarchical grouping techniques.
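
For illustration, a minimal sketch of streaming one simulation step's percepts to an output socket; the newline-delimited JSON format, the localhost:9000 address, and the field names are assumptions, since the text only says percepts are streamed over an interface to a socket:

```python
import json
import socket

def stream_step(sock: socket.socket, step: int, agents: list) -> None:
    """Send one simulation step's percepts (per-agent position and velocity) as a
    newline-delimited JSON token over the output socket."""
    token = {"step": step,
             "agents": [{"id": a["id"], "pos": a["pos"], "vel": a["vel"]} for a in agents]}
    sock.sendall((json.dumps(token) + "\n").encode("utf-8"))

# Illustrative usage, assuming the cognitive model listens on localhost:9000:
# with socket.create_connection(("localhost", 9000)) as sock:
#     for step in range(100):
#         stream_step(sock, step,
#                     agents=[{"id": 1, "pos": [0.0, step * 0.5], "vel": [0.0, 0.5]}])
```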

Turning to FIG. 1, a left-turn example is illustrated. The scenario is broken down into smaller tasks in order to calculate a meaningful and accurate complexity metric. On the road surface 110 of the T-junction, a vehicle is seen making a left turn while approaching from lower road S1. A complexity metric is introduced to allow a value of complexity to be calculated at the subtask level during an autonomous driving task. By extracting the complexity score from data generated during simulation, the autonomous vehicle can compare scenarios and take action based on similar situations. The complexity can be used to determine the attentional demand on the autonomous system. The cognitive model uses the complexity score, along with grading scores, to guide decisions. This exemplary embodiment evaluates a scenario involving a left-hand turn. The scenario can be iterated based on factors such as traffic density, the number of pedestrians, and weather conditions. To measure complexity, the task of turning left is divided into subtasks; in this example, four subtasks: S1, S2, S3, and S4. To allow scalability to other scenarios, the features of the subtasks were used to create a set of guidelines for breaking the data down into subtasks. The endpoint of subtask S1 is found by determining the point in time when the car's speed drops below an acceptable stopping speed. The endpoint of subtask S2 is found when the vehicle exceeds a specified acceleration velocity. The endpoint of subtask S3 is found by identifying the point at which the x coordinate of the car ceases to change. The endpoint of each subtask is assumed to be the start of the next. The features that define the ends of the subtasks are scalable to most left-turn scenarios, but will depend on how aggressive the driver is, their familiarity with the roads, and the simplicity of the task.
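
A sketch of those subtask-endpoint rules in Python, with placeholder thresholds, since the text does not give specific speed, acceleration, or position values:

```python
from typing import Dict, List

def segment_left_turn(trajectory: List[Dict[str, float]],
                      stop_speed: float = 0.5,
                      accel_speed: float = 2.0,
                      x_tol: float = 0.05) -> Dict[str, int]:
    """Return the sample index at which subtasks S1-S3 end (S4 runs to the end).
    Each sample is assumed to contain 'speed' and 'x'; the thresholds are placeholders.
    S1 ends when speed first drops below the stopping speed, S2 ends when speed
    next exceeds the acceleration speed, S3 ends when the x coordinate stops changing."""
    ends: Dict[str, int] = {}
    for i in range(1, len(trajectory)):
        sample = trajectory[i]
        if "S1" not in ends and sample["speed"] < stop_speed:
            ends["S1"] = i
        elif "S1" in ends and "S2" not in ends and sample["speed"] > accel_speed:
            ends["S2"] = i
        elif "S2" in ends and "S3" not in ends and \
                abs(sample["x"] - trajectory[i - 1]["x"]) < x_tol:
            ends["S3"] = i
    return ends

# Example: slow to a stop, accelerate through the turn, then hold a constant x position.
path = [{"speed": 5.0, "x": 0.0}, {"speed": 0.2, "x": 1.0},
        {"speed": 3.0, "x": 2.0}, {"speed": 3.0, "x": 2.0}]
print(segment_left_turn(path))  # {'S1': 1, 'S2': 2, 'S3': 3}
```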

The reason for breaking the left turn down into subtasks is that the complexity of the task changes depending on where it is measured. Splitting the task into smaller tasks makes it possible to observe how the complexity changes as the task progresses. The task can be divided into subtasks and the complexity of each subtask calculated; the complexity of subtasks 1 and 2, for example, is different. Within a subtask, features are usually the same, so measuring differences between subtasks gives a significant temporal difference. Subtasks are generally delimited by differences in features, which allows a comparison of temporal complexity between subtasks. There may be certain features that differ within a given subtask, but the fundamental way subtasks are broken down is to minimize these feature changes. It may be important to determine how the complexity of a task changes over time, or only within a subtask. Subtasks are continuous, and the endpoint of one subtask is the start of the next; therefore, both large-scale temporal complexity (throughout the task) and small-scale temporal complexity can be calculated. We postulate that future efforts will extrapolate this complexity to a discrete or continuous event-related time domain.

Specific features are calculated from simulation data and used to map the complexity parameters. The weather, for example, can have a significant impact on perceptual complexity and, to a lesser extent, on the speed of the decision-making process. The number of lanes per subtask is a direct way to estimate complexity and to measure the number of alternatives. In this left-turn scenario, counting the lanes can be a good way to measure complexity, as it shows that there are many alternatives at an intersection. In other situations, the number of lanes is not as indicative of complexity; for example, a highway with four lanes in each direction may be less complex than a two-way intersection. In some situations, the number of lanes may be more important than the speed, which means that in the future the interaction between the speed feature and the number-of-lanes feature may be needed. In this scenario, the use of the number of lanes is based on two fundamental ideas: (1) it indicates an intersection, and (2) it indicates which lanes the vehicle can merge into. Counting the lanes is thus one way to calculate complexity. The number of alternatives can also be measured in the temporal dimension as opposed to the spatial dimension; the number of gaps between vehicles can be used in this regard to determine alternative options. In order to determine how many left turns were possible, it is important to measure the number of gaps in the traffic for each subtask. This information can be combined with the number-of-lanes data to provide a complete picture of alternative routes. In scenarios where it is not necessary to drive across traffic, the number and size of the alternatives may not scale as well with the traffic gaps, and another feature of temporal alternatives must then be implemented. It is crucial to determine the number of spatial and temporal alternatives regardless of the driving situation.
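
A hedged sketch of counting spatial and temporal alternatives per subtask, combining the lane count and traffic-gap features described above; the 4-second minimum gap and the simple additive combination are assumptions for illustration, not values from the patent:

```python
from typing import List

def count_alternatives(lane_count: int,
                       gap_durations_s: List[float],
                       min_gap_s: float = 4.0) -> int:
    """Combine spatial alternatives (lanes available to merge into) with temporal
    alternatives (traffic gaps long enough to turn through) for one subtask."""
    usable_gaps = sum(1 for g in gap_durations_s if g >= min_gap_s)
    return lane_count + usable_gaps

# Example: two target lanes and three observed gaps, only one of them long enough.
print(count_alternatives(lane_count=2, gap_durations_s=[2.1, 5.3, 3.0]))  # -> 3
```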

Criticality is fundamentally a subjective measure and can be thought of as the expected value of the risk associated with not making a choice. If the risk in one task is higher than in another, for example when driving through an intersection, then its criticality is high. In the case of a left turn, criticality is high when approaching the stop sign as well as when making the turn. In this scenario, criticality increases when the velocity is low, for example when the vehicle stops at a traffic light and then begins a left turn. Criticality is therefore measured in this scenario as the inverse of velocity. Criticality is the scenario-specific measure of complexity; when driving on a highway, for example, it can increase with speed or when slowing down to leave the highway. Criticality is different for each task and situation.
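
A minimal sketch of the inverse-velocity criticality used in this left-turn scenario; the small epsilon guard against division by zero is an added assumption:

```python
def left_turn_criticality(speed_mps: float, eps: float = 0.1) -> float:
    """Scenario-specific criticality for the left turn: the inverse of velocity,
    so it peaks near the stop before the turn begins."""
    return 1.0 / max(speed_mps, eps)

# Example: criticality while stopped at the junction vs. while moving through the turn.
print(left_turn_criticality(0.0))   # high: waiting to turn
print(left_turn_criticality(8.0))   # low: moving through at speed
```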

Weather is a direct indicator of perceptual complexity, which increases dramatically in conditions such as heavy snowfall or fog. The number of objects picked up by the sensors is another way to measure perceptual complexity: the number of objects the car sees during a subtask can be used to measure the complexity of the perception. This feature is scalable, but its weight can change depending on the situation. For example, the number and size of the objects are more important than the weather conditions when crossing an intersection on a sunny day, but this is not the case when driving in snowy conditions on a winding road with few objects.
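
A sketch of a weighted perceptual-complexity score; the 0-to-1 weather scale and the particular weights are assumptions meant only to show how the weighting could shift between situations, as the text notes:

```python
def perceptual_complexity(weather_severity: float,
                          object_count: int,
                          w_weather: float = 0.7,
                          w_objects: float = 0.3) -> float:
    """Blend weather severity (0 = clear, 1 = heavy fog/snow) with the number of
    objects the sensors report in the current subtask; weights are illustrative
    and would change with the driving situation."""
    return w_weather * weather_severity + w_objects * object_count

# Busy sunny intersection (objects dominate) vs. snowy, nearly empty road (weather dominates).
print(perceptual_complexity(weather_severity=0.0, object_count=12, w_weather=0.2, w_objects=0.8))
print(perceptual_complexity(weather_severity=0.9, object_count=2))
```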
