Autonomous Vehicles – Eric Lloyd Wilkinson, David McAllister Bradley, UATC LLC

Abstract for “Systems and Methods for Speed Limit Context Awareness”

“Systems, methods, and devices are provided for speed limit context awareness during autonomous vehicle operation. One example computer-implemented method for applying speed limit context awareness to autonomous vehicle operation includes obtaining, by a computing system that includes one or more computing devices, a plurality of features that describe a context and a state of a vehicle. The computing system determines a context response for an autonomous vehicle based at least in part on a machine-learned model and the plurality of features, where the context response includes a derived velocity constraint for the vehicle. The computing system provides the context response to a motion planning application of the autonomous vehicle for use in determining a motion plan for the vehicle.”

Background for “Systems and Methods for Speed Limit Context Awareness”

An autonomous vehicle is a vehicle that can sense its environment and navigate with little to no human input. In particular, an autonomous vehicle can observe its surrounding environment using a variety of sensors and attempt to comprehend the environment by performing various processing techniques on the data collected by the sensors. Given such knowledge, the autonomous vehicle can determine an appropriate motion path through the environment.

“Aspects and advantages of embodiments of the present disclosure will be set forth in part in the following description.

“One example aspect of the present disclosure is directed to a computer-implemented method for applying speed limit context awareness to autonomous vehicle operation. The method includes obtaining, by a computing system that includes one or more computing devices, a plurality of features that describe a context and a state of a vehicle. The method further includes determining, by the computing system, a context response for an autonomous vehicle based at least in part on a machine-learned model and the plurality of features, wherein the context response includes a derived velocity constraint for the vehicle. The method further includes providing, by the computing system, the context response to a motion planning application of the autonomous vehicle for use in determining a motion plan for the vehicle.

“Another example aspect of the present disclosure is directed to an autonomous vehicle. The autonomous vehicle includes a machine-learned model that has been trained to determine a context response based at least in part on features associated with a context and a vehicle state. The autonomous vehicle also includes a vehicle computing system comprising one or more processors and one or more memories including instructions that, when executed by the one or more processors, cause the one or more processors to perform operations. The operations include obtaining a plurality of features that describe a context and a current state of the autonomous vehicle. The operations further include generating a feature vector based at least in part on the plurality of features. The operations further include providing the feature vector as input to the machine-learned model. The operations further include obtaining a context response as an output of the machine-learned model, the context response including a derived velocity constraint for the autonomous vehicle. The operations further include providing the context response to a motion planning application of the autonomous vehicle for use in determining a motion plan for the vehicle.

“Another example aspect of the present disclosure is directed to a computing system. The computing system comprises one or more processors and one or more memories including instructions that, when executed by the one or more processors, cause the one or more processors to perform operations. The operations include obtaining a plurality of features that describe a context and a state of a vehicle. The operations further include determining a context response for an autonomous vehicle based at least in part on a machine-learned model and the plurality of features, the context response including a derived velocity constraint. The operations further include providing the context response to a motion planning application of the autonomous vehicle for use in determining a motion plan.

“Other aspects of the present disclosure are directed to various systems, apparatuses, non-transitory computer-readable media, user interfaces, and electronic devices.”

These and other features, aspects, and advantages of various embodiments of the present disclosure will become better understood with reference to the following description. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate example embodiments of the present disclosure and serve to explain the related principles.

“Reference now will be made in detail to embodiments, one or more examples of which are illustrated in the drawings. Each example is provided by way of explanation of the embodiments, not limitation of the present disclosure. It will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments without departing from the scope of the present disclosure. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Such modifications and variations are intended to be covered by the present disclosure.

“Generally, the present disclosure relates to systems and methods that use and/or leverage machine-learned models to provide speed limit context awareness for determining and controlling autonomous vehicle travel speeds. In particular, the systems and methods described herein can predict a maximum speed limit for an autonomous vehicle along a segment of its nominal path based at least in part on the context of the environment surrounding the vehicle. For example, a vehicle computing system of an autonomous vehicle can obtain information regarding the environment around the vehicle and determine a plurality of features associated with that context. These features can include aggregated information about objects within a region around the nominal path (e.g., pedestrians, vehicles, path boundaries, and/or the like) and/or features relative to the current vehicle position (e.g., posted speed limit, distances to traffic control devices, distances to queued objects, and/or the like). The vehicle computing system can include a machine-learned model trained to predict appropriate driving speeds for specific regions along the vehicle's nominal path based at least in part on the features. The features can be provided as input to the machine-learned model (e.g., as a feature vector), which can analyze them to determine a maximum speed limit to be applied by the autonomous vehicle at some point in the future. As another example, the machine-learned model can determine a speed limit for each segment of the route ahead of the autonomous vehicle based on the features. Additionally or alternatively, the machine-learned model can provide a target offset from the nominal path based on the obtained features, which can be used to optimize the positioning of the autonomous vehicle on a roadway based on its surroundings. The autonomous vehicle can then use this context awareness to limit its travel speed and/or bias its lane position, thereby achieving safer driving behavior.
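To make the data flow described above concrete, the following is a minimal, hypothetical Python sketch of the pipeline. All names (`ContextResponse`, `SpeedContextModel`, `plan_with_context`) and the placeholder scoring rule are illustrative assumptions, not the patent's implementation or any real UATC code.

```python
# Hypothetical sketch of the speed limit context awareness flow: features in,
# derived velocity constraint (and optional path offset) out.
from dataclasses import dataclass
from typing import List


@dataclass
class ContextResponse:
    max_speed_mps: float  # derived velocity constraint for the vehicle
    path_offset_m: float  # optional target offset from the nominal path


class SpeedContextModel:
    """Stand-in for the machine-learned model; returns a conservative
    fraction of the posted limit as the feature magnitudes grow."""

    def predict(self, features: List[float], posted_limit_mps: float) -> ContextResponse:
        congestion = min(sum(features) / (10.0 * len(features)), 1.0)
        return ContextResponse(
            max_speed_mps=posted_limit_mps * (1.0 - 0.5 * congestion),
            path_offset_m=0.0,
        )


def plan_with_context(features: List[float], posted_limit_mps: float) -> ContextResponse:
    # The motion planner would treat the returned max_speed_mps as a constraint.
    return SpeedContextModel().predict(features, posted_limit_mps)


print(plan_with_context([2, 4, 1, 0], posted_limit_mps=11.2))
```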

In some situations, an autonomous vehicle traveling along a nominal path may encounter conditions along segments of the path under which it is not possible or desirable to operate at the posted speed limit, even though a human driver with full awareness of the environment might drive at that limit. For example, an autonomous vehicle may have to travel through a narrow region of the nominal path because of objects within the region (e.g., pedestrians, vehicles, etc.) and/or properties of the path (e.g., road boundaries, etc.), and it may be desirable for the vehicle to travel through the narrow region at a speed lower than the posted speed limit. As another example, some regions of a nominal path may include one or more obstructions, such as large vehicles, buildings, signs, and parked vehicles. These obstructions may limit visibility in the region, and it may be desirable for the autonomous vehicle to travel through it at a reduced speed. As a further example, an autonomous vehicle may have to travel along a nominal path through a complex environment, such as a busy street with narrow travel lanes, many parked vehicles, pedestrians, and bicyclists.

According to some embodiments, the vehicle computing system of an autonomous vehicle can determine a maximum speed limit and/or an offset from the nominal path for a segment of the path based at least in part on the environment around the vehicle. For example, the vehicle computing system can include one or more machine-learned models that receive a plurality of features regarding the vehicle's context and use those features to predict a maximum speed limit and/or a nominal path offset for the segment, to be applied by the autonomous vehicle at a specific point, such as one second in the future. The features may include information about pedestrians on a path segment, parked and/or moving vehicles in the segment, the shape of the path (e.g., road boundaries), the distance to the next traffic control device, crosswalks, the speed of preceding vehicles, and/or the like. The machine-learned model can analyze the plurality of features and predict a maximum speed limit for the autonomous vehicle to apply at some point in the future (e.g., a speed limit to be applied one second ahead, speed limits to be applied for one or more segments of the path, etc.). Additionally or alternatively, some embodiments can output a target offset from the nominal path to be applied at some future point (e.g., one second into the future, at a determined distance ahead, and/or the like) based on the features.

An autonomous vehicle can be a ground-based vehicle or an air-based vehicle and can include a number of systems that control its operation. For example, the autonomous vehicle can include one or more data acquisition systems (e.g., sensors, image capture devices), one or more vehicle computing systems, one or more vehicle control systems (e.g., for controlling acceleration, steering, braking, etc.), and/or the like. The data acquisition systems can acquire sensor data (e.g., lidar data, radar data, image data, etc.) associated with one or more objects (e.g., pedestrians, vehicles, etc.) that are proximate to the autonomous vehicle and/or sensor data associated with the vehicle's path (e.g., path shape, boundaries, markings, etc.). The sensor data can include information that describes the location (e.g., in three-dimensional space relative to the autonomous vehicle) of points that correspond to objects within the surrounding environment of the autonomous vehicle (e.g., at one or more times). The data acquisition systems can provide such sensor data to the vehicle computing system.

“In addition to the sensor data, the vehicle computing system can obtain map data that provides other detailed information about the environment surrounding the autonomous vehicle. The map data can include information regarding: the identity and location of various roadways, road segments, buildings, and other items; the location and direction of traffic lanes (e.g., the boundaries, location, direction, etc. of a travel lane, parking lane, turning lane, or other lanes within a particular travelway); traffic control data (e.g., the location and instructions of signage, traffic signals, and/or other traffic control devices); and/or any other data that assists the autonomous vehicle in comprehending and perceiving its surrounding environment and its relationship thereto.

The vehicle computing system can include one or more computing devices and various subsystems that can cooperate to perceive the surrounding environment of the autonomous vehicle and determine a motion plan for controlling the motion of the vehicle. For example, the vehicle computing system can include a perception system, a prediction system, and a motion planning system. The vehicle computing system can receive and process the sensor data and generate an appropriate motion plan through the vehicle's surrounding environment.

Based at least in part on the sensor data, the perception system can identify one or more objects that are proximate to the autonomous vehicle. In some implementations, the perception system can determine, for each object, state data that describes the object's current state. For example, the state data can describe the object's current location (also referred to as position), speed/velocity, acceleration, heading, size/footprint, class (e.g., vehicle class versus pedestrian class versus bicycle class), and/or other state information. In some implementations, the perception system can determine state data for each object over a number of iterations, updating the state data for each object at every iteration. Thus, the perception system can detect and track objects (e.g., vehicles, pedestrians, bicycles, and the like) that are proximate to the autonomous vehicle over time, providing a representation of the world around the vehicle along with its current state.
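A minimal sketch of the kind of per-object state record such a perception system might maintain is shown below; the field names and the tracking dictionary are assumptions for illustration, not the patent's data structures.

```python
# Illustrative container for per-object state data updated each perception cycle.
from dataclasses import dataclass
from typing import Dict, Tuple


@dataclass
class ObjectState:
    position: Tuple[float, float, float]  # location relative to the vehicle (m)
    velocity: Tuple[float, float]         # speed/velocity components (m/s)
    acceleration: Tuple[float, float]     # (m/s^2)
    heading: float                        # radians
    footprint: Tuple[float, float]        # bounding-box length/width (m)
    object_class: str                     # "vehicle", "pedestrian", "bicycle", ...


# Each perception iteration would update a dictionary of tracked objects:
tracked: Dict[str, ObjectState] = {
    "obj_17": ObjectState((12.0, -1.5, 0.0), (3.2, 0.0), (0.0, 0.0), 0.0, (0.8, 0.6), "pedestrian"),
}
```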

“The prediction system can receive the state data from the perception system and, based on it, predict one or more future locations for each object. For example, the prediction system can predict where each object will be located within the next 5 seconds, 10 seconds, 20 seconds, etc. As one example, an object can be predicted to adhere to its current trajectory according to its current speed. As another example, more sophisticated prediction techniques or modeling can be used.

“The motion planning system can determine a motion plan for the autonomous vehicle based at least in part on the predicted future locations of the objects and/or the state data provided by the perception system. That is, given information about the current and/or predicted future locations of objects, the motion planning system can determine a motion plan that best navigates the autonomous vehicle relative to those objects.

“As one example, in some implementations, the motion planning system can evaluate a cost function for each of one or more candidate motion plans for the autonomous vehicle based at least in part on the current locations and/or predicted future locations of the objects. For example, the cost function can describe a cost (e.g., over time) of adhering to a particular candidate motion plan. The cost described by the cost function can increase when the autonomous vehicle approaches a possible impact with another object and/or deviates from a preferred pathway (e.g., a predetermined travel route).

“Thus, given information about the current and/or predicted future locations of objects, the motion planning system can determine the cost of adhering to each candidate motion plan. The motion planning system can select or determine a motion plan for the autonomous vehicle based at least in part on the cost function(s); for example, the motion plan that minimizes the cost function can be selected or otherwise determined. The motion planning system can then provide the selected motion plan to a vehicle controller (e.g., actuators or other devices that control acceleration, steering, braking, etc.) to execute the selected motion plan.”
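The following is a hedged sketch of cost-based candidate plan selection as described above. The cost terms (inverse squared distance to objects, squared deviation from a preferred path) and their weights are illustrative assumptions, not the patent's actual cost function.

```python
# Minimal sketch: score candidate plans with a cost function and pick the cheapest.
from typing import Callable, List, Sequence, Tuple

Point = Tuple[float, float]


def plan_cost(plan: Sequence[Point],
              obstacles: List[Point],
              preferred_path: Callable[[int], Point]) -> float:
    """Cost of adhering to one candidate plan: grows near objects and when
    the plan deviates from the preferred pathway."""
    cost = 0.0
    for t, (x, y) in enumerate(plan):
        for ox, oy in obstacles:
            cost += 1.0 / max((x - ox) ** 2 + (y - oy) ** 2, 1e-3)
        px, py = preferred_path(t)
        cost += 0.1 * ((x - px) ** 2 + (y - py) ** 2)
    return cost


def select_plan(candidates: List[Sequence[Point]],
                obstacles: List[Point],
                preferred_path: Callable[[int], Point]) -> Sequence[Point]:
    # Choose the candidate motion plan that minimizes the cost function.
    return min(candidates, key=lambda p: plan_cost(p, obstacles, preferred_path))


# Two toy candidates over three timesteps; the second swerves around an obstacle.
straight = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
swerve = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.0)]
print(select_plan([straight, swerve], obstacles=[(1.0, 0.0)],
                  preferred_path=lambda t: (float(t), 0.0)))
```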

“More particularly, in some implementations, the perception system, the prediction system, and/or the motion planning system can determine one or more features associated with objects and/or the roadway within the surrounding environment of the autonomous vehicle, based at least in part on the state data. These features can be indicative of the environment around the autonomous vehicle along its nominal path and/or the vehicle's current state.

“For example, in some implementations, the features can be determined from aggregate data regarding the position of the vehicle relative to the environment and the relationship of objects in the environment to the nominal path. In some implementations, two types of features can be used: autonomous vehicle features and context features. Features that appear only a single time within a scene and are determined relative to the current position/state of the autonomous vehicle can be considered autonomous vehicle features. Context features can include information about other objects within the scene, determined for a context region along or around the nominal path. As one example, multiple tiled context regions can be defined along the nominal path ahead of the autonomous vehicle, each region having a defined length and radius, and context features can be determined for each region.

“In some implementations, the environment surrounding the autonomous vehicle can be divided into segments or bins. For example, the nominal path of the autonomous vehicle can be divided into multiple segments of a defined length (e.g., 10 m, 15 m, etc.), with each segment defining a context region for speed limit context awareness. Information about features can be determined for each context region or segment, such as objects (e.g., pedestrians, vehicles, and/or the like), path properties (e.g., nominal path geometric properties), road boundaries (e.g., distances to road/lane boundaries, etc.), and/or the like. A context region can be defined by start and end points along the nominal path and by a radius around the nominal path that determines which objects are included in the context region.

“In some implementations, the autonomous vehicle features (e.g., features that are determined with respect to the current autonomous vehicle state) can include one or more of: a posted speed limit; distance to a traffic control device (e.g., stop sign, yellow light, red light, etc.); distance from the nose of the autonomous vehicle to the nearest queue object (e.g., another vehicle, etc.) along the nominal path; speed of the nearest queue object (e.g., speed toward the path, speed along the path, etc.); acceleration of the nearest queue object (e.g., acceleration along the path, acceleration toward the path, etc.); and/or the like.

“Context features, which are features determined relative to a particular region along the nominal path, can include one or more of: average distance to pedestrians on the left/right; speed of the closest pedestrian on the left/right; distance to the nominal path of the closest pedestrian on the left/right; distribution of pedestrians along the nominal path; maximum curvature of the nominal path in the context region; closest and average distance between the road boundary and the autonomous vehicle in the context region on the left; closest and average distance between the road boundary and the autonomous vehicle in the context region on the right; an actual camera image of the future path; and/or the like.

“In particular, in some implementations, the vehicle computing system can divide the nominal path ahead into a plurality of regions (e.g., n bins of x length) and compute statistics (e.g., associated with pedestrians, vehicles, etc.) for each region, both to the left and to the right of the autonomous vehicle. For example, the vehicle computing system can generate a number of bins along the nominal path and assign objects (e.g., pedestrians, vehicles, etc.) to the bins in order to compute the statistics and features, such as the closest distance between a pedestrian and the autonomous vehicle within a given region (e.g., within a bin). Some features, such as the autonomous vehicle features, can be determined without binning because they appear only a single time within a scene. In certain implementations, a convolutional neural network feature extractor can also be used to obtain features.
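A minimal sketch of the binning step might look like the following, assuming each detected object carries a distance along the nominal path (`s`) and a lateral offset from it (`d`); these field names and the bin parameters are illustrative, not taken from the disclosure.

```python
# Hedged sketch: assign objects to n bins of fixed length along the nominal
# path, then compute simple per-bin pedestrian statistics.
from typing import Dict, List


def bin_objects(objects: List[dict], bin_length_m: float = 10.0,
                num_bins: int = 4, radius_m: float = 15.0) -> Dict[int, List[dict]]:
    bins: Dict[int, List[dict]] = {i: [] for i in range(num_bins)}
    for obj in objects:
        if abs(obj["d"]) > radius_m:
            continue  # outside the context radius around the nominal path
        idx = int(obj["s"] // bin_length_m)
        if 0 <= idx < num_bins:
            bins[idx].append(obj)
    return bins


def pedestrian_stats(bin_objs: List[dict]) -> dict:
    peds = [o for o in bin_objs if o["class"] == "pedestrian"]
    left = [o for o in peds if o["d"] > 0]
    right = [o for o in peds if o["d"] <= 0]
    return {
        "count_left": len(left),
        "count_right": len(right),
        "closest_d": min((abs(o["d"]) for o in peds), default=float("inf")),
    }


objects = [
    {"s": 4.0, "d": 2.0, "class": "pedestrian"},
    {"s": 12.0, "d": -1.0, "class": "pedestrian"},
    {"s": 14.0, "d": 30.0, "class": "vehicle"},  # outside the context radius
]
print({i: pedestrian_stats(objs) for i, objs in bin_objects(objects).items()})
```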

“The vehicle computing system can concatenate the plurality of features into a single feature vector, which can be provided as input to a machine-learned model that determines a maximum speed limit prediction and/or a minimum offset from the nominal path prediction for the autonomous vehicle. For example, the vehicle computing system can generate a feature vector of cat(autonomous_vehicle_features, context_features_region_1, context_features_region_2 . . . context_features_region_n) and input this feature vector into a machine-learned model to generate a speed limit prediction. As an example, the feature vector can comprise a concatenation of one or more autonomous_vehicle_features, such as posted speed limit, distance to stop sign, distance to yellow signal, distance to red signal, distance from front of vehicle to closest queue object in scene, speed of closest queue object toward nominal path, speed of closest queue object along nominal path, acceleration of closest queue object toward nominal path, acceleration of closest queue object along nominal path, and/or the like, as well as one or more pedestrian context features for the plurality of regions/bins, such as average distance to pedestrians on left, average distance to pedestrians on right, speed of closest pedestrian on left, speed of closest pedestrian on right, distance to nominal path of closest pedestrian on left, distance to nominal path of closest pedestrian on right, count of pedestrians to left in bin 1, count of pedestrians to left in bin 2, count of pedestrians to left in bin 3, count of pedestrians to left in bin 4, count of pedestrians to right in bin 1, count of pedestrians to right in bin 2, count of pedestrians to right in bin 3, count of pedestrians to right in bin 4, and/or the like, as well as one or more vehicle and roadway context features for the plurality of regions/bins, such as average distance to other vehicles on left, average distance to other vehicles on right, speed of closest other vehicle on left, speed of closest other vehicle on right, distance to nominal path of closest other vehicle on left, distance to nominal path of closest other vehicle on right, count of other vehicles to left in bin 1, count of other vehicles to left in bin 2, count of other vehicles to left in bin 3, count of other vehicles to left in bin 4, count of other vehicles to right in bin 1, count of other vehicles to right in bin 2, count of other vehicles to right in bin 3, count of other vehicles to right in bin 4, minimum gap for objects in context region, maximum curvature along nominal path in context region, closest distance between road boundary and vehicle in context region on left, average distance between road boundary and vehicle in context region on left, closest distance between road boundary and vehicle in context region on right, average distance between road boundary and vehicle in context region on right, and/or the like. The feature vector comprising the concatenation of the plurality of autonomous_vehicle_features and context features can then be provided for use in determining a maximum speed limit prediction and/or a minimum offset from nominal path prediction for the autonomous vehicle.”
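As an illustration of the concatenation step, the following is a small Python sketch mirroring cat(autonomous_vehicle_features, context_features_region_1, ..., context_features_region_n); the function name and sample values are assumptions for the example.

```python
# Illustrative concatenation of autonomous-vehicle features with per-region
# context features into a single feature vector for the model.
from typing import List


def build_feature_vector(av_features: List[float],
                         region_features: List[List[float]]) -> List[float]:
    vector = list(av_features)
    for region in region_features:
        vector.extend(region)  # context_features_region_i, in path order
    return vector


av = [11.2, 40.0, 9999.0]             # e.g., posted limit, distance to stop sign, ...
regions = [[3.1, 2, 4], [5.0, 0, 1]]  # e.g., avg pedestrian distance, counts, ...
print(build_feature_vector(av, regions))
```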

“In particular, the vehicle computing system can determine a maximum speed limit and/or a nominal path offset for the autonomous vehicle based at least in part on the features. In some implementations, the vehicle computing system can include, employ, and/or otherwise leverage a model, such as a machine-learned model. For example, the machine-learned model can be or can otherwise include one or more various models such as neural networks (e.g., deep neural networks) or other multi-layer non-linear models. Example neural networks include feed-forward neural networks, convolutional neural networks, and/or recurrent neural networks.

“For example, the machine-learned model can be trained using labeled driving log data to determine a maximum speed limit prediction based at least in part on features associated with the context region(s) and the current position of the autonomous vehicle. The vehicle computing system can input data indicative of the features (e.g., a feature vector) into the machine-learned model and receive, as an output, data indicative of a maximum speed limit. Additionally or alternatively, the vehicle computing system can input data indicative of the features (e.g., a feature vector) into the machine-learned model and receive, as an output, data indicative of a nominal path offset.”

“In some implementations, the machine-learned model can be structured as a regression problem, in which the output of the machine-learned model is an exact speed limit. Alternatively, the machine-learned model can be structured as a classification problem over a speed range (e.g., 0-25 mph), plus a token for responses that fall outside the range. For example, the speed limit range can be divided into bins of 2.5 mph in width, and speed labels can be assigned to the bins. A special label can be used when the labeler indicates that caution is not necessary (e.g., the context does not warrant reducing speed). If the probability of “no caution” is greater than a threshold (e.g., indicating that there is no reason to slow down based on the scene), then the posted speed limit can be used. Alternatively, if the probability of “no caution” is below the threshold, then a probability distribution over the speed limit bins (e.g., the 2.5 mph increments) can be used to determine a maximum speed limit. In some implementations, the maximum speed limit can be determined using the median of the probability distribution across the bins.
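The classification-style decoding described above might be sketched as follows, assuming 2.5 mph bins over a 0-25 mph range and a tunable "no caution" threshold; the specific values are assumptions for illustration.

```python
# Sketch: decode a maximum speed from per-bin probabilities plus a
# "no caution" token, using the median of the distribution over bins.
from typing import List

BIN_WIDTH = 2.5
BIN_CENTERS = [BIN_WIDTH * i + BIN_WIDTH / 2 for i in range(10)]  # 0-25 mph


def decode_speed(probs: List[float], p_no_caution: float,
                 posted_limit_mph: float, threshold: float = 0.5) -> float:
    if p_no_caution > threshold:
        return posted_limit_mph  # scene gives no reason to slow down
    # Otherwise take the median of the probability distribution over bins.
    total, cumulative = sum(probs), 0.0
    for center, p in zip(BIN_CENTERS, probs):
        cumulative += p
        if cumulative >= total / 2:
            return center
    return BIN_CENTERS[-1]


probs = [0.05, 0.05, 0.05, 0.05, 0.2, 0.3, 0.2, 0.05, 0.0, 0.0]
print(decode_speed(probs, p_no_caution=0.1, posted_limit_mph=25.0))  # -> 13.75
```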

“More particularly, the machine-learned model can be trained using ground truth labels (e.g., speed labels based on particular contexts/situations) from one or more sources so that the machine-learned model can “learn” an appropriate speed at which the autonomous vehicle can be driven given the situation. In some implementations, continuous labels, rather than discrete labels, can be included in the training data for the machine-learned model to learn from.

In certain implementations, model training data can be generated from driving logs. For example, data can be captured when a vehicle changes its speed, and this data can be used to create driving sequences for labeling and training a machine-learned model. As another example, event data and manually driven vehicle data can be captured, such as when the vehicle is driving below a posted speed limit with or without other vehicles in front, and used to create driving sequences for training a machine-learned model. As another example, driving data such as GPS and image data can be obtained from vehicles within a service fleet (e.g., of a rideshare service), and such data can be analyzed and labeled to create training data sets. In some cases, simulated driving data, such as from driving on a test track or simulating real-world scenarios that are more difficult for an autonomous vehicle to drive through, can also be used to generate training data.

“Additionally, in some implementations, ground truth label training data can be provided by a labeling team (e.g., human labelers) who review various events/situations from driving logs and provide indications of a maximum speed and/or a target offset (from a nominal path) for each increment of time. For example, labelers can be provided with short clips of driving activity (e.g., video snippets and driving data) to review, and can provide indications of the speed the situation allows the vehicle to travel. In some cases, the human labelers may ignore or filter out certain information from the snippet in making their determinations, such as the speed of a slower vehicle in front, traffic lights, and the like. The human labeler can assign a speed label to each time step in the snippet. In some cases, the human labeler can indicate that caution is not necessary for a given time step (e.g., the autonomous vehicle can drive at the posted speed limit). For example, in reviewing a snippet, the human labeler can indicate the speed at which a passenger would be comfortable having the autonomous vehicle drive in the given scenario. In some implementations, multiple labelers can review each sequence snippet and determine appropriate speeds, allowing the different determinations to be averaged when creating the ground truth label data.

“In some implementations, labeled training data can be generated using absolute value label extraction. For example, a labeler can review a brief log snippet (e.g., a one-minute snippet) that contains a speed context awareness scenario/situation. The log snippet can be divided into smaller time increments (e.g., time steps), and the labeler can provide speed labels for each time increment. As reference points, the labeler can be provided with the autonomous vehicle's actual driving speed and any offset from the nominal path in each region.

“In some implementations, labeled training data can be generated using a feedback mechanism such as smart playback (e.g., video playback speed modification). For example, a labeler can review a log snippet that includes a speed context awareness scenario/situation and provide speed labels for time increments within the snippet. As feedback, the labeler can be provided with playback of the log snippet at altered playback speeds based on the speed labels, allowing the labeler to simulate driving at the labeled speed. The playback speed can be adjusted such that the driving events appear to proceed at the labeled speed, which can be especially helpful in more complicated situations. For instance, if a car in front is traveling at 10 mph and the labeler believes 20 mph would be possible, the playback rate can be doubled to allow the labeler to judge whether the labeled speed is appropriate.
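The playback-rate feedback reduces to a simple ratio, as in the following sketch; the function name and guard are illustrative assumptions.

```python
# Sketch of "smart playback": replay a log snippet at a rate that simulates
# the labeled speed relative to the speed actually driven in the log.
def playback_rate(labeled_speed_mph: float, actual_speed_mph: float) -> float:
    """If the front car drove 10 mph but the labeler proposes 20 mph,
    play the clip back at 2x so the labeler can judge the proposed speed."""
    if actual_speed_mph <= 0:
        return 1.0  # avoid division by zero for stopped segments
    return labeled_speed_mph / actual_speed_mph


print(playback_rate(20.0, 10.0))  # -> 2.0
```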

In some implementations, the generation of labeled training data can include referencing. For example, a labeler can review a brief log snippet that includes a speed context awareness scenario. The log snippet can be divided into smaller time increments, and the labeler can provide a distinct label for each time step. A set of categories can be defined, such as extreme caution, moderate caution, light caution, or no caution. For each category, the labeler can be provided with reference videos showing situations that a human driver drove through, highlighting the type of situation and the appropriate speed associated with that category. A similar scheme of reference categories can also be used in determining nominal lane offset labels.

“As another example, in some implementations, ground truth label training data can be obtained based on vehicle operator override data. For example, an operator may modify the speed and/or in-lane offset settings of an autonomous vehicle while it is operating, setting targets (e.g., a speed target, an offset target) and clearing them when they are no longer needed. These override events can be identified within driving data logs, for example, by tracking override events and determining the start and end times of each override in the logs. The override situations can then be labeled and included in the training data sets to aid in training the machine-learned model. In some cases, the override situations can be filtered to remove inappropriate/unnecessary uses of overrides (e.g., an operator applying modifications for reasons other than speed limit context awareness).

“As another example, in certain implementations, ground truth label training data can be obtained based on the labeling of human driving data logs. In some cases, driving data logs can be obtained from human drivers who have been trained to drive in a desired fashion (e.g., as if a rider were in the vehicle). The data logs from the human drivers can be checked for validity (e.g., that the driver operated the vehicle as expected) and then labeled to identify situations where the human driver restricted the vehicle speed based on speed limit context awareness scenarios. The human driving data logs can be filtered, for example, to remove situations where driving speeds were reduced due to other context awareness scenarios. The labeled scenarios can then be included in the training data sets for the machine-learned model. In some implementations, the labeling can include human driving logs generated using simulated driving scenarios, such as using a test track or real street environments.

“As another example, ground truth label training data could be generated from analysis of the behavior of other drivers in the environment around the autonomous vehicle. For instance, autonomous vehicle sensors such as radar, lidar, and image capture devices can capture data regarding how other vehicles behave in specific situations, and this data can be analyzed to identify context awareness scenarios that can be labeled and added to the training data.

“In some implementations, the vehicle computing system can determine a speed limit and/or an offset from the nominal path for the vehicle. In some cases, the machine-learned model can determine the speed limit based at least in part on the features (e.g., the autonomous vehicle features and/or the context features). Additionally or alternatively, the machine-learned model can determine a target offset from the nominal path of the autonomous vehicle based at least in part on the features (e.g., the autonomous vehicle features and/or the context features). For example, the model output can indicate that the autonomous vehicle should travel at a speed below the posted speed limit based on the context, such as slowing down (e.g., without being directed by an operator) in situations where there are many pedestrians or parked vehicles. Additionally or alternatively, the model output can indicate that the autonomous vehicle should move over a certain amount within the current travel lane or change travel lanes based on the context.

“In particular, the machine-learned model can output speed limit determinations in a variety of scenarios to improve the safe driving behavior of the autonomous vehicle. For example, the vehicle computing system can determine whether a squeeze maneuver, an occlusion interaction, and/or a context interaction may require the autonomous vehicle to reduce its speed and/or implement an offset from the nominal path.

“For example, a squeeze maneuver may require the autonomous vehicle to travel within a narrow region of free space created in the scene by other objects (e.g., pedestrians, other vehicles, etc.) and/or properties of the roadway (e.g., road boundaries, etc.). The vehicle computing system can determine that the autonomous vehicle speed should be restricted as a function of the size of the region (e.g., the squeeze gap size). The vehicle speed may additionally be restricted based on the types of objects (e.g., pedestrians versus vehicles) forming the boundary of the narrow region and their expected movements.
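As a hedged illustration, a speed restriction that is a function of gap size and of the bounding object types might look like the following; the scaling constants are assumptions, not values from the disclosure.

```python
# Sketch: restrict speed by squeeze-gap width, tightening further when
# pedestrians (less predictable than vehicles or curbs) bound the gap.
from typing import Tuple


def squeeze_speed_limit(gap_width_m: float, posted_limit_mps: float,
                        boundary_types: Tuple[str, ...]) -> float:
    base = posted_limit_mps * min(gap_width_m / 3.5, 1.0)  # narrower gap, lower limit
    if "pedestrian" in boundary_types:
        base *= 0.6  # extra caution when pedestrians form the gap boundary
    return base


print(squeeze_speed_limit(2.8, 11.2, ("vehicle", "pedestrian")))
```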

“As another example, an occlusion interaction may require the autonomous vehicle to reduce speed and/or move over due to being near a space that is not visible to the vehicle, for example, where a bus is stopped near a crosswalk. As the autonomous vehicle approaches the occluded region, such as the crosswalk, it may be desirable for the vehicle to reduce its speed and/or move over to account for unseen objects, such as pedestrians crossing the intersection. Other occlusion interactions can occur, for example, when the autonomous vehicle makes a right turn where the travel lane is occluded, or when turning left at a stop sign where the vehicle needs to travel through an area occluded by traffic.

“As a further example, a context interaction may require the autonomous vehicle to travel through a complex region, such as an area where many pedestrians are traveling close to parked vehicles for a significant distance, and the like, where an appropriate vehicle response cannot be determined based on any individual actor, gap, or occlusion in the region.

“More particularly, in certain embodiments, the model can output a maximum speed limit value that can then be used in motion planning by other components of the vehicle computing system. For example, the maximum speed limit value can be used in a cost function of the motion planning system to provide a modified speed limit that should be applied at a point in the future (e.g., one second in the future). The operation of the autonomous vehicle can then be controlled (e.g., using one or more vehicle controls) such that the vehicle is at or below the modified speed limit at the future point (e.g., in one second). Additionally or alternatively, the vehicle computing system can predict the speed limit for one or more segments of the route ahead based at least in part on the model output. In some cases, one or more parameters can be used to constrain how the autonomous vehicle implements speed limit changes, such as rules limiting lateral jerk, deceleration, and/or the like.
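One way such a rate-limiting parameter might interact with the model output is sketched below; the deceleration bound and horizon are illustrative assumptions rather than values from the disclosure.

```python
# Sketch: apply the model's maximum speed limit one second ahead while
# bounding how quickly the commanded limit may drop, for smooth behavior.
def apply_speed_constraint(current_limit_mps: float, model_limit_mps: float,
                           max_decel_mps2: float = 2.0, horizon_s: float = 1.0) -> float:
    # Limit the change so the vehicle proactively, smoothly approaches the
    # predicted constraint rather than braking abruptly.
    max_drop = max_decel_mps2 * horizon_s
    return max(model_limit_mps, current_limit_mps - max_drop)


print(apply_speed_constraint(current_limit_mps=11.2, model_limit_mps=6.0))  # -> 9.2
```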

“In some implementations, vehicle-to-vehicle communication can be used to augment the speed limit determination, such as by providing previews of upcoming route segments. For example, a first vehicle can provide information about the route segment it is currently traveling to a routing system, and the routing system can then provide information about that route segment to other vehicles approaching the segment, for use in determining maximum speed limit values and/or nominal path offset values. In some cases, an autonomous vehicle receiving such information can use it to determine appropriate speed limits for the segment and/or to determine whether a lane change would be appropriate for the segment.
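A hypothetical message schema for such a route segment preview is sketched below; the field names are assumptions for illustration, not a protocol defined in the disclosure.

```python
# Illustrative message a vehicle might publish about a segment it traversed,
# so a routing system can preview it to following vehicles.
from dataclasses import dataclass


@dataclass
class SegmentPreview:
    segment_id: str
    observed_max_speed_mps: float  # speed the segment actually supported
    suggested_offset_m: float      # nominal-path offset that worked well
    lane_change_advised: bool


preview = SegmentPreview("seg-123", 6.5, 0.4, lane_change_advised=False)
# A following vehicle could fold `preview` into its own constraint determination.
print(preview)
```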

The systems and methods described herein can provide a number of technical benefits and effects. For example, the vehicle computing system can detect and evaluate the environment around the autonomous vehicle and adjust the vehicle's travel speed and lane position according to the context. The vehicle computing system can perform such operations onboard the vehicle, avoiding the latency issues of communicating with remote computing systems. As the vehicle travels along the nominal path, the vehicle computing system can perform iterative speed optimization, and it can be configured to proactively control the vehicle's speed so as to minimize sudden changes and improve driving safety.

“The systems and methods described herein can also result in improvements to vehicle computing technology tasked with autonomous vehicle operation. For example, aspects of the present disclosure can allow a vehicle computing system to more efficiently and accurately control an autonomous vehicle's motion by providing smoother adjustment of travel speeds based on the analysis of context features along the nominal path. Additionally, the systems and methods described herein can be simpler and less computationally expensive than other possible solutions; for example, it is not necessary to generate predictions for every object in a scene when interactions with those objects have a low probability of occurring.

“With reference now to the figures, example embodiments of the present disclosure will be discussed in further detail. FIG. 1 depicts a block diagram of an example system 100 for controlling the navigation of a vehicle 102 according to example embodiments of the present disclosure. The autonomous vehicle 102 is capable of sensing its environment and navigating with little to no human input. The autonomous vehicle 102 can be a ground-based autonomous vehicle (e.g., car, truck, bus, etc.), an air-based autonomous vehicle (e.g., drone, helicopter, or other aircraft), or another type of vehicle (e.g., watercraft). The autonomous vehicle 102 can be configured to operate in a fully autonomous operational mode and/or a semi-autonomous operational mode. A fully autonomous (e.g., self-driving) operational mode can be one in which the autonomous vehicle can provide driving and navigational operation with minimal and/or no interaction from a human driver. A semi-autonomous (e.g., driver-assisted) operational mode can be one in which the autonomous vehicle operates with some interaction from a human driver.

“The autonomous vehicle 102 can include one or more sensors 104, a vehicle computing system 106, and one or more vehicle controls 108. The vehicle computing system 106 can assist in controlling the autonomous vehicle 102. In particular, the vehicle computing system 106 can receive sensor data from the one or more sensors 104, attempt to comprehend the surrounding environment by performing various processing techniques on the sensor data, and generate an appropriate motion path through the environment. The vehicle computing system 106 can control the one or more vehicle controls 108 to operate the autonomous vehicle 102 according to the motion path.

“The vehicle computing system 106 can include one or more processors 130 and at least one memory 132. The one or more processors 130 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, a FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 132 can include one or more non-transitory computer-readable storage media, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 132 can store data 134 and instructions 136 which are executed by the processor 130 to cause the vehicle computing system 106 to perform operations. In some implementations, the one or more processors 130 and the at least one memory 132 can be included in one or more computing devices, such as computing device(s) 129, within the vehicle computing system 106.

“In some implementations, the vehicle computing system 106 can further be connected to, or include, a positioning system 120. The positioning system 120 can determine the current geographic location of the autonomous vehicle 102 and can be any device or circuitry for analyzing the position of the autonomous vehicle 102. For example, the positioning system 120 can determine position by using one or more of: a satellite navigation system (e.g., a GPS system, a Galileo positioning system, the GLObal Navigation Satellite System (GLONASS), the BeiDou Satellite Navigation and Positioning system); an inertial navigation system; a dead reckoning system; IP address; triangulation and/or proximity to cellular towers or WiFi hotspots; and/or other suitable techniques for determining position. The position of the autonomous vehicle 102 can be used by the vehicle computing system 106.

“As illustrated in FIG. 1, in some implementations, the vehicle computing system 106 can also include a feature extractor/concatenator 122 and a speed limit context awareness machine-learned model 124 that can provide data to assist in determining the motion plan for controlling the motion of the autonomous vehicle 102.”

“In some implementations, the perception system 110 can receive sensor data from the one or more sensors 104 that are coupled to or otherwise included within the autonomous vehicle 102. As examples, the one or more sensors 104 can include a Light Detection and Ranging (LIDAR) system, a Radio Detection and Ranging (RADAR) system, one or more cameras (e.g., visible spectrum cameras, infrared cameras, etc.), and/or other sensors. The sensor data can include information that describes the location of objects within the surrounding environment of the autonomous vehicle 102.

For example, for a LIDAR system, the sensor data can include the location (e.g., in three-dimensional space relative to the LIDAR system) of a number of points that correspond to objects that have reflected a ranging laser. A LIDAR system can measure distances by measuring the Time of Flight (TOF) that it takes a short laser pulse to travel from the sensor to an object and back, calculating the distance from the known speed of light.

“As another example, for a RADAR system, the sensor data can include the location (e.g., in three-dimensional space relative to the RADAR system) of a number of points that correspond to objects that have reflected a ranging radio wave. Radio waves, whether pulsed or continuous, transmitted by the RADAR system can reflect off an object and return to the system, giving information about the object's location and speed. Thus, a RADAR system can provide useful information about the current speed of an object.

“As yet another example, for one or more cameras, various processing techniques (e.g., range imaging techniques such as, for example, structure from motion, structured light, stereo triangulation, and/or other techniques) can be performed to identify the location (e.g., in three-dimensional space relative to the one or more cameras) of a number of points that correspond to objects depicted in imagery captured by the one or more cameras. Other sensor systems can identify the location of points that correspond to objects as well.

“Thus, the one or more sensors 104 can be used to collect sensor data that includes information describing the location (e.g., in three-dimensional space relative to the autonomous vehicle 102) of points that correspond to objects within the surrounding environment of the autonomous vehicle 102.

“In addition to the sensor data, the perception system 110 can retrieve or otherwise obtain map data 118 that provides detailed information about the surrounding environment of the autonomous vehicle 102. The map data 118 can provide information regarding: the identity and location of different travelways (e.g., roadways), road segments, buildings, or other items or objects (e.g., lampposts, crosswalks, etc.); the location and directions of traffic lanes (e.g., the location and direction of a parking lane, a turning lane, a bicycle lane, or other lanes within a particular roadway or other travelway); traffic control data (e.g., the location and instructions of signage, traffic lights, or other traffic control devices); and/or any other map data that assists the vehicle computing system 106 in comprehending and perceiving its surrounding environment and its relationship thereto.

“The perception system 110 can identify one or more objects that are proximate to the autonomous vehicle 102 based on sensor data received from the one or more sensors 104 and/or the map data 118. In particular, in some implementations, the perception system 110 can determine, for each object, state data that describes the object's current state. As examples, the state data for each object can describe an estimate of the object's: current location (also referred to as position); current speed (also referred to as velocity); current acceleration; current heading; current orientation; size/footprint (e.g., as represented by a bounding shape such as a bounding polygon or polyhedron); class (e.g., vehicle versus pedestrian versus bicycle versus other); yaw rate; and/or other state information.

“In some implementations, the perception system 110 can determine state data for each object over a number of iterations. In particular, the perception system 110 can update the state data for each object at each iteration. Thus, the perception system 110 can detect and track objects (e.g., vehicles, bicycles, pedestrians, etc.) that are proximate to the autonomous vehicle 102 over time.

“The prediction system 112 can receive the state data from the perception system 110 and predict one or more future locations for each object based on such state data. For example, the prediction system 112 can predict where each object will be located within the next 5 seconds, 10 seconds, 20 seconds, etc. As one example, an object can be predicted to adhere to its current trajectory according to its current speed. As another example, more sophisticated prediction techniques or modeling can be used.

“The motion planning system 114 can determine a motion plan for the autonomous vehicle 102 based at least in part on the predicted one or more future locations for the objects provided by the prediction system 112 and/or the state data provided by the perception system 110. Stated differently, given information about the current and/or predicted future locations of objects, the motion planning system 114 can determine a motion plan for the autonomous vehicle 102 that best navigates the vehicle relative to the objects at such locations.

“As one example, in some implementations, the motion planning system 114 can determine a cost function for each of one or more candidate motion plans for the autonomous vehicle 102 based at least in part on the current locations and/or predicted future locations of the objects. For example, the cost function can describe a cost (e.g., over time) of adhering to a particular candidate motion plan. For example, the cost described by a cost function can increase when the autonomous vehicle 102 approaches a possible impact with another object and/or deviates from a preferred pathway (e.g., a pre-approved pathway).

“Thus, given information about the current and/or predicted future locations of objects, the motion planning system 114 can determine a cost of adhering to a particular candidate pathway. The motion planning system 114 can select or determine a motion plan for the autonomous vehicle 102 based at least in part on the cost function(s). For example, the candidate motion plan that minimizes the cost function can be selected or otherwise determined. The motion planning system 114 can then provide the selected motion plan to a vehicle controller 116 that controls one or more vehicle controls 108 (e.g., actuators or other devices that control gas flow, acceleration, steering, braking, etc.) to execute the selected motion plan.

“In some implementations, the vehicle computing system 106 can include a feature extractor/concatenator 122. The feature extractor/concatenator 122 can extract features regarding the autonomous vehicle state and the surrounding environment of the autonomous vehicle for use in enabling speed limit context awareness in the motion planning. The feature extractor/concatenator 122 can receive feature data (e.g., features relative to objects in a context region around the nominal path and/or features that are relative to the vehicle's current position), for example, from the perception system 110, the prediction system 112, and/or the motion planning system 114, based at least in part on the object state data, map data, and/or the like. The feature extractor/concatenator 122 can divide a portion of the nominal path of an autonomous vehicle into a plurality of regions (e.g., n bins of x length) and compute statistics and features (e.g., associated with pedestrians, vehicles, road boundaries, etc.) for each region/bin. Additionally, the feature extractor/concatenator 122 can determine features associated with the autonomous vehicle position/state, which may appear a single time within a current scene and not be divided among the bins. The feature extractor/concatenator 122 can concatenate the plurality of feature data into a feature vector for use as input to a machine-learned model.”

“In certain implementations, the vehicle computing system 106 can include a speed limit context awareness machine-learned model 124. The context awareness machine-learned model 124 can provide speed limit context awareness predictions, based on features regarding the autonomous vehicle state and the surrounding environment of the autonomous vehicle, that can be provided to the motion planning system 114 for use in determining/adjusting a motion plan for the autonomous vehicle 102. For example, the context awareness machine-learned model 124 can receive a feature vector as input, for example, from the feature extractor/concatenator 122. The context awareness machine-learned model 124 can determine a maximum speed limit to be applied by the autonomous vehicle 102 at some point in the future while traveling along the nominal path. Additionally or alternatively, the context awareness machine-learned model 124 can predict a speed limit to be applied for each of one or more segments of the route ahead of the autonomous vehicle 102 and/or a target offset from the nominal path.

“In some implementations, the feature extractor/concatenator 122 and/or the context awareness machine-learned model 124 may be included as part of the motion planning system 114 or another system within the vehicle computing system 106.”

“Each of the perception system 110, the prediction system 112, the motion planning system 114, the vehicle controller 116, the feature extractor/concatenator 122, and the speed limit context awareness machine-learned model 124 can include computer logic utilized to provide desired functionality. In some implementations, each of the perception system 110, the prediction system 112, the motion planning system 114, the vehicle controller 116, the feature extractor/concatenator 122, and the speed limit context awareness machine-learned model 124 can be implemented in hardware, firmware, and/or software controlling a general purpose processor. For example, in some implementations, each of the perception system 110, the prediction system 112, the motion planning system 114, the vehicle controller 116, the feature extractor/concatenator 122, and the speed limit context awareness machine-learned model 124 includes program files stored on a storage device, loaded into a memory, and executed by one or more processors. In other implementations, each of the perception system 110, the prediction system 112, the motion planning system 114, the vehicle controller 116, the feature extractor/concatenator 122, and the speed limit context awareness machine-learned model 124 includes one or more sets of computer-executable instructions that are stored in a tangible computer-readable storage medium such as RAM, a hard disk, or optical or magnetic media.”

“FIG. 2 depicts an example autonomous vehicle environment 200 according to example embodiments of the present disclosure.

“An autonomous vehicle can include one or more computing devices and various subsystems that can cooperate to perceive the surrounding environment and determine a motion plan for controlling the motion of the vehicle. As illustrated in FIG. 2, an autonomous vehicle environment 200 can include a number of objects and/or roadway elements along a nominal path 204 of an autonomous vehicle 202. For example, the autonomous vehicle 202 may detect and/or classify a plurality of objects along and/or around the nominal path 204, such as queue objects (e.g., a queued vehicle 206 in front of the autonomous vehicle 202), other vehicles (e.g., stationary vehicles 208), and/or traffic control devices (e.g., a stop sign 210). As discussed herein, the systems and methods of the present disclosure can obtain information regarding the context surrounding the autonomous vehicle, including a plurality of features associated with the autonomous vehicle context. These features can include aggregate information about objects within a region around the nominal path (e.g., pedestrians, vehicles, path boundaries, and/or the like) and/or features relative to the vehicle's current position (e.g., posted speed limit, distances to traffic control devices, distances to queued objects, and/or the like).

“In some implementations, the autonomous vehicle surrounding environment 200 (e.g., the nominal path of the autonomous vehicle and a radius around the vehicle and the path) can be divided into segments or bins (e.g., n bins of x length). As illustrated in FIG. 2, the environment 200 can be divided into context regions or bins of a defined length, for example, context region 221, context region 222, and context region 223. Information about features can be determined for each context region, such as objects (e.g., pedestrians, vehicles, and/or the like), path properties (e.g., nominal path geometric properties), road boundaries (e.g., distances to road/lane boundaries, etc.), and/or the like.”

“In particular, the autonomous vehicle 202 (e.g., the vehicle computing system) can identify objects within each bin and compute statistics and features for each bin. As illustrated in FIG. 2, the autonomous vehicle 202 can divide the nominal path 204 ahead into multiple bins and can identify that context region 221 includes two stationary vehicles 208, and that context region 222 includes a queued vehicle 206, two stationary vehicles 208, and four pedestrians 212. The autonomous vehicle 202 can also determine that certain objects, such as pedestrians 214 and two of the stationary vehicles 208, fall outside the radius around the vehicle 202 and the path 204 and are therefore not included in the context regions.

“The autonomous vehicle 202 can compute statistics and features for these objects and roadway properties, such as: average distance to pedestrians on the left/right; speed of the closest pedestrian on the left/right; distance to the nominal path of the closest pedestrian on the left/right; average distance to other vehicles on the left/right; speed of the closest other vehicle on the left/right; distance to the nominal path of the closest other vehicle on the left/right; minimum gap for objects in the context region; maximum curvature of the nominal path in the context region; and the closest and average distances between the road boundary and the autonomous vehicle in the context region on the left and on the right.

“Additionally, the autonomous vehicle 202 may determine autonomous vehicle features that occur once within the scene, such as a posted speed limit, the distance to a traffic control device, the distance from the nose of the vehicle to the nearest queue object along the nominal route, the speed of the closest queue object, the acceleration of the closest queue object, and/or other similar information. For example, the autonomous vehicle 202 can determine autonomous vehicle features relative to the queued vehicle 206 and/or the stop sign 210 within the autonomous vehicle environment 200.

“FIG. 3A through FIG. 3C, FIG. 4A and FIG. 4B, and FIG. 5 show some examples of situations where the autonomous vehicle may not be able to operate at the posted speed limit on a nominal route. The context around the autonomous vehicle may make it difficult or unsafe for the vehicle to operate at that speed limit. Accordingly, the systems and methods described in the present disclosure can determine a speed limit below the posted speed limit.

“An autonomous vehicle might need to travel through a narrow area of the nominal path (e.g., perform a squeeze maneuver) because of objects or properties in the nominal path. It may therefore be desirable for the vehicle to travel through this narrow area at a slower speed. FIG. 3A through FIG. 3C illustrate example context scenarios involving such squeeze maneuvers.

As illustrated in the context scenario 300A of FIG. 3A, the nominal path of an autonomous vehicle 302 may require it to travel through a narrow area of free space, for example, through gap region 312, which can be created by a moving object 304 (e.g., a moving vehicle) and one or more stationary vehicles 306. The autonomous vehicle 302 can, e.g., via a vehicle computing system, determine that the autonomous vehicle’s speed should be restricted as a function of the size of the gap region 312. The autonomous vehicle’s speed may also be restricted based on the types of objects (e.g., moving vehicle 304, parked vehicles 306) that form the boundary of the gap region 312 and their expected movements.

“As illustrated in the context scenario 300B of FIG. 3B, the nominal path of the autonomous vehicle 302 might require it to travel through a narrow area of free space, for example, through gap region 314, which is created by a moving vehicle 304 and a road boundary 308 (e.g., a roadway curb, etc.). The autonomous vehicle 302 can, e.g., via a vehicle computing system, determine that the autonomous vehicle’s speed should be restricted as a function of both the size of the gap region 314 and the types of objects (e.g., vehicle 304 and road boundary 308) that form it.

“As illustrated in the context scenario 300C of FIG. 3C, the nominal path of the autonomous vehicle 302 may require it to travel through a narrow area of free space, in this case through gap region 316, which is created by a moving vehicle 304 and pedestrians 310 near the road boundary 308. The autonomous vehicle 302 can, e.g., via a vehicle computing system, determine that the autonomous vehicle’s speed should be restricted based on the dimensions of the gap region 316 as well as the types of objects (e.g., vehicle 304 and pedestrians 310) forming the gap region 316. For example, since the gap region 316 is bounded by a plurality of pedestrians 310, the autonomous vehicle 302 may determine that the vehicle’s speed should be decreased more in scenario 300C than in scenarios 300A or 300B, where the gap region boundaries are other vehicles and/or roadway boundaries.

“As another example, some areas of a nominal route may include one or more obstructions (e.g., large vehicles, buildings, signs, parked vehicles, etc.). These obstructions can limit visibility in the area, and it might be advantageous for an autonomous vehicle to travel at a slower speed through those areas. FIG. 4A and FIG. 4B illustrate example context scenarios involving such occlusions.

“As illustrated in the context scenario of FIG. 4A, an autonomous vehicle 402 may approach a traffic control device, such as stop sign 406, where the nominal path 404 requires a left turn and an occluded area 412 may occur due to other vehicles. As shown in FIG. 4A, the occluded area 412 lies between vehicle 410 and stopped vehicle 408 along the nominal path 404. Because vehicle 408 has stopped at stop sign 406, visibility is reduced for the autonomous vehicle 402 following the left turn, which may lead to the autonomous vehicle 402 traveling at a slower speed.

“Additionally, as illustrated in FIG. 4B, the autonomous vehicle 402 might be near an intersection, crosswalk, or other similar location on a nominal path 414 and may have to pass a large vehicle, such as bus 416. The autonomous vehicle could have difficulty seeing potential objects, such as pedestrians, because the bus is located at a bus stop where passengers are unloading. One or more pedestrians could be entering the crosswalk from the occluded area 418. This may indicate that the autonomous vehicle’s travel speed should be reduced to avoid an interaction with an unseen object (e.g., a pedestrian).

“FIG. 5 illustrates a context scenario 500 in which the nominal path of an autonomous vehicle 502 might require it to travel through a complex area, such as a busy street with narrow travel lanes, pedestrians 508, other moving vehicles 504, and the like. As shown in FIG. 5, the autonomous vehicle 502 could travel a nominal path 503 between another vehicle 504 and a plurality of parked vehicles 506. The parked vehicles 506 may cause occluded areas, such as occluded areas 510, 512, and 514, to occur along the nominal path 503. Due to the occluded areas, pedestrians 508 may be travelling near or on the nominal path 503 but may not be visible to the autonomous vehicle 502. For example, a pedestrian 508 might be moving between parked vehicles 506 in the occluded area 512 to reach the nominal path 503, for instance to return to their vehicle, but may not be seen due to the obstruction.

“FIG. 6 shows a flowchart diagram of example operations 600 that provide speed limit context awareness during autonomous vehicle operation according to example embodiments of the present disclosure. One or more portions of the operations 600 can be executed by one or more computing devices, such as the vehicle computing system 106 of FIG. 1, the computing system 1102 of FIG. 11, the computing system 1130 of FIG. 11, or the like. Furthermore, one or more portions of the operations 600 can be implemented as an algorithm on the hardware components of the devices described herein (e.g., as in FIGS. 1 and 11) to, for example, provide speed limit context awareness during autonomous vehicle operation.

“At 602, one or more computing devices within a computing system can obtain a plurality of features for a scene along a nominal path of an autonomous vehicle. A computing system, such as an autonomous vehicle computing system, can gather information (e.g., sensor data and/or map data) about the environment around the autonomous vehicle and determine a plurality of features associated with that context. The computing system can, for example, obtain aggregate information about objects within a region around the nominal vehicle path (e.g., pedestrians, vehicles, and path boundaries) and/or features relative to the vehicle’s current position (e.g., posted speed limit, distances to traffic control devices, distances to queued objects and/or other objects, and/or the like).

“At 604, the computing system can determine a context response for the autonomous vehicle based on the plurality of features (e.g., the context features and the autonomous vehicle features in the scene). In some cases, the context response includes at least a derived speed limit for the autonomous vehicle. For example, the context response can include a predicted maximum speed limit for the autonomous vehicle, determined based at least in part on a feature vector that combines the context features and the autonomous vehicle features. Alternatively or additionally, a machine-learned model can predict a target offset from the autonomous vehicle’s nominal path, based at least in part on the feature vector, and this information can be included in the context response. The context response could indicate, for instance, that the autonomous vehicle should travel at a speed lower than the posted speed limit based on its context, or that the autonomous vehicle should adjust its lane position by providing a target offset from the nominal path to be applied in the future.

“At 606, the computing system can provide the context response, which could include, for instance, a derived speed limit constraint (e.g., a maximum speed constraint for the autonomous vehicle), for use in determining a motion plan for the vehicle, for example by the motion planning system 114. Based on the context response data, the motion plan can, for example, slow the vehicle down or adjust the autonomous vehicle’s lane position in certain situations, such as when there are many pedestrians and/or cars parked along the street.

“FIG. 7 shows a flowchart diagram of example operations 700 that provide speed limit context awareness during the operation of an autonomous vehicle according to example embodiments of the present disclosure. One or more portions of the operations 700 may be executed by one or more computing devices, such as the vehicle computing system 106 of FIG. 1, the computing system 1102 of FIG. 11, the computing system 1130 of FIG. 11, or the like. Furthermore, one or more portions of the operations 700 may be implemented as an algorithm on the hardware components of the devices described herein (e.g., as in FIGS. 1 and 11) to, for example, provide speed limit context awareness during autonomous vehicle operation.

“At 702, one or more computing devices within a computing system can obtain a portion of a nominal path of an autonomous vehicle, for example, the nominal path within the current scene of the autonomous vehicle, such as the nominal path 204 illustrated in FIG. 2.”

“At 704, the computing system can divide the nominal path into multiple bins or segments. The surrounding environment can thereby be broken down into segments or bins: for example, the nominal path of the autonomous vehicle can be broken down into multiple segments of a defined length, such as 10 m, 15 m, etc. Each segment then acts as a context region for speed limit context awareness.

“At 706, context features can be computed for each bin/segment. Each segment or bin can be used to gather information about objects (e.g., pedestrians, vehicles, etc.), path properties (e.g., nominal path geometry), road boundaries (e.g., distances to road/lane borders), and/or the like. The computing system can calculate aggregate statistics and features for each bin; for example, it can determine the pedestrian and vehicle closest to the autonomous vehicle within a given region (e.g., inside a bin). The computing system can also determine autonomous vehicle features that are associated with the current position of the autonomous vehicle; these features are not limited to a single bin.

“At 708, the computing system can combine the plurality of features (e.g., context features and autonomous vehicle features) into a feature vector that can be used as input to a machine-learned model, thereby providing autonomous vehicle speed limit context awareness. For example, the computing system can generate a feature vector of cat(autonomous_vehicle_features, context_features_region_1, context_features_region_2 . . . context_features_region_n). This feature vector can then be provided as an input to the machine-learned model.
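
As a rough illustration of the cat(...) concatenation described at 708, the following sketch builds a single flat feature vector from hypothetical autonomous-vehicle features and per-region context features; the feature names, values, and dimensions are illustrative assumptions.

```python
import numpy as np

# Assumed example values: posted limit (m/s), distance to stop sign (m),
# closest queue-object speed (m/s).
autonomous_vehicle_features = np.array([11.18, 42.0, 3.2])

# Assumed per-region context features: average pedestrian distance (m),
# closest pedestrian speed (m/s), pedestrian count.
context_features_by_region = [
    np.array([4.0, 1.1, 2.0]),   # region 1
    np.array([6.5, 0.0, 0.0]),   # region 2
    np.array([9.9, 0.4, 1.0]),   # region n
]

# cat(autonomous_vehicle_features, context_features_region_1, ..., region_n)
feature_vector = np.concatenate([autonomous_vehicle_features,
                                 *context_features_by_region])
print(feature_vector.shape)  # one flat vector fed to the machine-learned model
```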

“FIG. 8A shows a flowchart diagram of example operations 800A that provide speed limit context awareness during autonomous vehicle operation according to example embodiments of the present disclosure. One or more portions of the operations 800A may be executed by one or more computing devices, such as the vehicle computing system 106 of FIG. 1, the computing system 1102 of FIG. 11, the computing system 1130 of FIG. 11, or the like. Furthermore, one or more portions of the operations 800A may be implemented as an algorithm on the hardware components of the devices described herein (e.g., as in FIGS. 1 and 11) to, for example, provide speed limit context awareness during autonomous vehicle operation.

“At 802, one or more computing devices within a computing system can obtain a plurality of features for a scene along a nominal path of an autonomous vehicle. A computing system, such as an autonomous vehicle computing system, can gather information (e.g., sensor data and/or map data) about the environment around the autonomous vehicle and determine a plurality of features associated with the autonomous vehicle’s context. The computing system can, for example, obtain aggregate information about objects within a region around the vehicle’s nominal path (e.g., pedestrians, vehicles, and path boundaries) and/or features relative to the vehicle’s current position (e.g., posted speed limit, distances to traffic control devices, distances to queued objects and/or other objects).

“At 804, the computing system can generate a feature vector based on the plurality of features. The computing system can, for example, combine the plurality of features (e.g., context features and autonomous vehicle features) into one feature vector that is input to a machine-learned model, thereby providing context awareness for the vehicle’s speed limit. For example, the computing system can generate a feature vector of cat(autonomous_vehicle_features, context_features_region_1, context_features_region_2 . . . context_features_region_n).”

“At 806, the computing system can provide the feature vector as input to a trained machine-learned model (e.g., one that has been trained to predict driving speeds for regions of the vehicle’s nominal path based at least partially on the obtained features) to generate model output data that provides speed limit context awareness. The machine-learned model to which the feature vector is input at 806 could correspond, for instance, to the machine-learned model 124 of FIG. 1, the machine-learned model 1110 of FIG. 11, and/or the machine-learned model 1140 of FIG. 11.”

“At 808, the computing system can receive maximum speed limit data (e.g., a prediction of a maximum speed limit for the autonomous vehicle) in the output of the machine-learned model. In some cases, the machine-learned model can determine a speed limit for the autonomous vehicle based at least in part on the feature vector (e.g., the autonomous vehicle features and the context features). The model output could indicate, for instance, that the autonomous vehicle should travel at a speed lower than the posted speed limit based on the context, and can provide a maximum speed limit (e.g., a driving speed constraint) to be applied in the future.
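
One hedged reading of how the received maximum speed limit data might be turned into the derived driving speed constraint is sketched below; the clamp against the posted limit is an assumption, consistent with the model only tightening, never raising, the posted limit.

```python
def derived_speed_constraint(model_max_speed_mps: float,
                             posted_limit_mps: float) -> float:
    """Assumed rule: the derived constraint never exceeds the posted limit;
    the model output can only tighten it based on context."""
    return min(model_max_speed_mps, posted_limit_mps)

print(derived_speed_constraint(model_max_speed_mps=6.7,
                               posted_limit_mps=11.2))  # -> 6.7
```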

“At 810, the computing system can provide the maximum speed limit data for use in determining the autonomous vehicle’s motion plan, for example by the motion planning system 114. A motion plan based on the model output could slow the vehicle in certain situations, such as when there are many pedestrians or parked cars.

“FIG. 8B shows a flowchart diagram of example operations 800B that provide speed limit context awareness during the operation of an autonomous vehicle according to example embodiments of the present disclosure. One or more portions of the operations 800B may be executed by one or more computing devices, such as the vehicle computing system 106 of FIG. 1, the computing system 1102 of FIG. 11, the computing system 1130 of FIG. 11, or the like. Furthermore, one or more portions of the operations 800B may be implemented as an algorithm on the hardware components of the devices described herein (e.g., as in FIGS. 1 and 11) to, for example, provide speed limit context awareness during autonomous vehicle operation.

“At 822, one or more computing devices can obtain a plurality of features for a scene along a nominal path of an autonomous vehicle. A computing system, such as an autonomous vehicle computing system, can gather information (such as sensor data and/or map data) about the environment around the autonomous vehicle and determine a plurality of features associated with that context. The computing system can, for example, obtain aggregate information about objects within a region around the vehicle’s nominal path (e.g., pedestrians, vehicles, and path boundaries) and/or features relative to the vehicle’s current position (e.g., posted speed limit, distances to traffic control devices, distances to queued objects and/or other objects).

Summary for “Systems and Methods for Speed Limit Context Awareness”

An autonomous vehicle is a vehicle that can sense its surroundings and navigate without human input. An autonomous vehicle can observe its environment with a variety of sensors and can attempt to understand the environment by applying different processing techniques to the collected data. Based on this understanding, the autonomous vehicle can determine an appropriate path through the environment.

“Aspects and benefits of embodiments of this disclosure will be described in part in the description that follows.

“One aspect of the present disclosure is directed to a computer-implemented method for applying speed limit context awareness to autonomous vehicle operation. A computing system that includes one or more computing devices obtains a plurality of features that describe a context and a vehicle’s state. The computing system determines a context response for an autonomous vehicle based at least in part on a machine-learned model and the plurality of features, wherein the context response includes a derived speed constraint. The computing system provides the context response to the autonomous vehicle’s motion planning application to help determine the vehicle’s motion plan.

“Another example aspect is directed to an autonomous vehicle. The autonomous vehicle is equipped with a machine-learned model that has been trained to determine a context response using features associated with a context and a vehicle state. The autonomous vehicle also includes a vehicle computing system, which comprises one or more processors and one or more memories storing instructions that cause the one or more processors to perform operations. These operations include obtaining a plurality of features that describe the context and the current state of the autonomous vehicle. Further, the operations include creating a feature vector based at least in part on the plurality of features. Further, the operations include providing the feature vector to the machine-learned model and receiving a context response from the machine-learned model, where the context response includes a derived velocity constraint for the autonomous vehicle. Further, the operations include providing the context response to a motion planning application of the autonomous vehicle to determine a motion plan for the vehicle.

“Another example aspect is directed to a computing system. The computing system comprises one or more processors and one or more memories storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations. These operations include obtaining a plurality of features that describe a context and a vehicle’s state. Further, the operations include determining a context response for an autonomous vehicle based at least in part on a machine-learned model and the plurality of features, where the context response includes a derived speed constraint. Further, the operations include providing the context response to a motion planning application of the autonomous vehicle to determine a motion plan.

“Other aspects of the present disclosure are directed to various systems, apparatuses, non-transitory computer-readable media, user interfaces, and electronic devices.”

These and other features, aspects, and benefits of various embodiments of this disclosure will be better understood with reference to the following description. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate example embodiments of the present disclosure and serve to explain the related principles.

“Reference will now be made in detail to embodiments, one or more examples of which are illustrated in the drawings. Each example is provided to explain the embodiments, not to limit the scope of the present disclosure. It will be apparent to those skilled in the art that the embodiments can be modified and adapted without departing from the scope of the disclosure. For instance, features described or illustrated in one embodiment may be combined with another embodiment to yield a further embodiment. Such modifications and variations are covered by the present disclosure.

“Generally, the present disclosure relates to systems and methods that use and/or leverage machine-learned models to provide speed limit context awareness for determining and controlling autonomous vehicle travel speeds. The systems and methods described herein can predict the maximum speed limit for an autonomous vehicle on a segment of its nominal path based on the environment and context surrounding it. An autonomous vehicle computing system, for example, can gather information about the environment around the vehicle and identify a plurality of features associated with the context around the vehicle. These features could include aggregated information about objects within a region around the nominal route (e.g., pedestrians, vehicles, and path boundaries) and/or features relative to the current vehicle position (e.g., posted speed limit, distances to traffic control devices, distances to queued objects and/or other objects, and/or the like). The autonomous vehicle computing system may include a machine-learned model that can be trained to predict driving speeds for specific regions along the vehicle’s nominal route based at least partially on the features. The features can be input to the machine-learned model (e.g., as a feature vector), and the model output can be analyzed to determine a maximum speed limit to be applied by the autonomous vehicle in the future. As another example, the machine-learned model could determine a speed limit for each segment of the route ahead of the autonomous vehicle based on the features. Alternatively, or in addition, the machine-learned model could provide a target offset from the nominal path based upon the obtained features, which could be used to optimize the positioning of the autonomous vehicle on the roadway based on its surroundings. The autonomous vehicle can then use this context awareness to limit its travel speed and/or bias its lane position, thereby achieving safer driving behavior.

An autonomous vehicle may be required to travel along a nominal path on which it may not be possible or desirable for the vehicle to operate at the posted speed limit due to various conditions along the path segments; a human driver aware of the context of the environment would typically drive below the posted speed limit in such conditions. For example, an autonomous vehicle might have to travel through narrow areas of the nominal path because of objects (e.g., pedestrians, vehicles) within that area and/or properties of the path (e.g., road boundaries, etc.), and it may be desirable for the vehicle to travel through the narrow area at a speed lower than the posted speed limit. As another example, some areas of a nominal route may include one or more obstructions, such as large vehicles, buildings, signs, and parked vehicles. These obstructions may limit visibility in the area, making it desirable for an autonomous vehicle to travel at a slower speed. As a further example, an autonomous vehicle may need to travel along a nominal path through a complex environment, such as a busy street with narrow travel lanes, many parked vehicles, pedestrians, and bicyclists.

According to some embodiments, the vehicle computing system of an autonomous vehicle can determine a maximum speed limit and/or an offset from the nominal path for a path segment based at least partially on the environment around it. For example, the vehicle computing system may include one or more machine-learned models that receive a variety of features about the vehicle and its context and use them to predict a maximum speed limit and/or a nominal path offset for the segment, to be applied by the autonomous vehicle at a specific future moment, such as one second in the future. The features may include information about pedestrians on a path segment and parked and/or moving vehicles in that segment, and can also describe the shape of the path (e.g., road boundaries), the distance to the next traffic control device or crosswalk, the speed of preceding vehicles, and/or other such details. The machine-learned model can analyze the features and predict a maximum speed limit for the autonomous vehicle to apply at some point in the future (e.g., a speed limit to be applied one second ahead, speed limits to be applied for one or more segments of the path, etc.). Alternatively, some embodiments can output a target offset from the nominal path to be applied at some future point (e.g., one second into the future, at a determined distance ahead, and/or the like) depending on the features.

An autonomous vehicle can be either a ground-based or an air-based vehicle and can include a number of systems that control its operation. The autonomous vehicle may include one or more data acquisition systems (e.g., sensors, image capture devices), one or more vehicle computing systems, one or more vehicle control systems (e.g., for controlling acceleration, steering, and braking), and/or the like. The data acquisition systems can capture sensor data (e.g., lidar data, radar data, image data, etc.) associated with one or more objects (e.g., pedestrians, vehicles, etc.) that are located near the autonomous vehicle, and/or sensor data associated with the vehicle’s path (e.g., path shape, boundaries, or markings). The sensor data may include information about the location (e.g., in three-dimensional space relative to the autonomous vehicle) of points that correspond to objects within the autonomous vehicle’s surrounding environment (e.g., at one or more times). The data acquisition systems can provide this sensor data to the vehicle computing system.

“The vehicle computing system can also obtain map data that provides additional information about the environment around the autonomous vehicle. The map data may include information such as the identity and location of roads, road segments, buildings, and other items; the location and direction of traffic lanes (e.g., the boundaries, location, and direction of a travel lane, parking lane, turning lane, and/or any other lanes within a travel way); traffic control data (e.g., the location of and instructions associated with signage, traffic signals, and/or other traffic control devices); and/or any other data that may help the autonomous vehicle comprehend and perceive its surroundings and its relationship thereto.

The vehicle computing system may include one or more computing devices as well as various subsystems that can work together to sense the environment and create a motion plan to control the vehicle’s motion. For example, the vehicle computing system can include a perception system, a prediction system, and a motion planning system, and can process the sensor data to generate a motion plan through the vehicle’s environment.

Based on the sensor data, the perception system can identify one or more objects that are close to the autonomous vehicle. In some implementations, the perception system can determine state data for each object that describes its current state. The state data can describe the object’s current location (also referred to as position); speed/velocity; acceleration; heading; size/footprint; class (e.g., vehicle class versus pedestrian class versus bicycle class); and/or other state information. In some implementations, the perception system can determine the state of each object over several iterations, updating each object’s state data at every iteration. The perception system can thus detect and track objects (e.g., vehicles, pedestrians, bicycles, and the like) located near the autonomous vehicle over time, presenting the world around the vehicle along with its current state.

“The prediction system can receive the state data from the perception system and, based on this data, predict one or more future locations of each object. The prediction system can, for example, predict where an object will be within the next 5 seconds, 10 seconds, 20 seconds, and so on. An object can be predicted to follow its current trajectory according to its current speed, or other, more sophisticated prediction techniques or modeling can be used.
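
As a minimal sketch of the simple prediction strategy mentioned above (an object continuing along its current trajectory at its current speed), the following constant-velocity extrapolation is purely illustrative; the field names are assumptions.

```python
def predict_future_positions(x, y, vx, vy, horizons=(5.0, 10.0, 20.0)):
    """Extrapolate an object's (x, y) position at several future horizons
    (in seconds), assuming constant velocity (vx, vy) in m/s."""
    return [(x + vx * t, y + vy * t) for t in horizons]

print(predict_future_positions(x=0.0, y=0.0, vx=2.0, vy=0.5))
```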

“The motion planning system can create a motion plan for the autonomous vehicle based at least in part on the predicted future locations from the prediction system and/or the state data provided by the perception system. The motion planning system can thereby determine a motion plan for the autonomous vehicle using information about the current and/or predicted future locations of objects relative to the vehicle.

“As one example, in some implementations the motion planning system can evaluate a cost function for each candidate motion plan for the autonomous vehicle based at least in part on the current locations and/or predicted future locations of the objects. The cost function can, for example, describe the cost of following a candidate motion plan over time, such as the degree to which the autonomous vehicle would impact other objects and/or depart from a preferred pathway (e.g., a predetermined travel route).

“The motion planning system can calculate the cost of following a particular candidate pathway based on information such as the current locations and/or predicted future locations of objects. The motion planning system can then select or determine a motion plan for the autonomous vehicle based at least in part on the cost function(s); for example, the motion plan that minimizes the cost function can be selected. The motion planning system can then provide the selected motion plan to a vehicle controller (e.g., actuators or other devices that control acceleration, steering, and braking) to execute the selected motion plan.”
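
A hedged sketch of cost-based plan selection follows; the particular cost terms (proximity to objects, deviation from the preferred route) and weights are illustrative assumptions, not the disclosure's actual cost model.

```python
def plan_cost(obstacle_distances_m, route_deviation_m):
    # Higher cost the closer the plan passes to objects, plus an assumed
    # penalty for departing from the preferred route.
    proximity_cost = sum(1.0 / max(d, 0.1) for d in obstacle_distances_m)
    return proximity_cost + 0.5 * route_deviation_m

# Hypothetical candidate plans and their costs.
candidates = {
    "keep_lane":  plan_cost([4.0, 9.0], route_deviation_m=0.0),
    "nudge_left": plan_cost([6.5, 9.5], route_deviation_m=0.4),
}
best = min(candidates, key=candidates.get)  # select the minimum-cost plan
print(best, candidates[best])
```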

“More specifically, in some implementations the perception system, prediction system, and/or motion planning system can determine one or more features associated with objects and/or the roadway in the environment surrounding the autonomous vehicle, based at least in part on the state data. These features may be indicative of the environment around the autonomous vehicle along its nominal path and/or of the vehicle’s current state.

“For example, some implementations can determine the features from aggregate data about the vehicle’s position in relation to the environment and the relationship between objects in the environment and the nominal path. In some implementations, two types of features can be used: context features and autonomous vehicle features. Autonomous vehicle features are features that occur only once in a scene and are determined with respect to the current position/state of the autonomous vehicle. Context features include information about other objects within the scene, in a context region along or around the nominal path. For example, there may be multiple tiled context regions along the nominal path in front of the autonomous vehicle, each with a defined radius and length, and context features can be determined for each one.

“In some cases, the environment surrounding the autonomous vehicle can be broken down into segments or bins. For example, the nominal path of the autonomous vehicle can be broken down into multiple segments of a defined length, such as 10 m, 15 m, etc., with each segment acting as a context region for speed limit context awareness. Each context region or segment can be used to gather information about objects (e.g., pedestrians, vehicles, etc.), path properties (e.g., nominal path geometry), road boundaries (e.g., distances to road/lane borders), and/or the like. Context regions can be defined by start/end points on the nominal path and a radius around the path that defines which objects are included in the context region.

“In some implementations, autonomous vehicle features (e.g., features that are determined with respect to the current autonomous vehicle state) may include one or more of: a posted speed limit; distance to a traffic control device (e.g., stop sign, yellow light, red light, etc.); distance from the nose of the autonomous vehicle to the nearest queue object (e.g., another vehicle, etc.) along the nominal path; speed of the closest queue object (e.g., speed toward the path, speed along the path, etc.); acceleration of the nearest queue object (e.g., acceleration along the path, acceleration toward the path, etc.); and/or the like.

“Context features, which are features determined with respect to a specific region of the nominal path, can include one or more of: average distance to pedestrians on the left/right; speed of the closest pedestrian on the left/right; distance to the nominal path of the closest pedestrian on the left/right; distribution of pedestrians along the nominal path; corresponding features for other vehicles on the left/right; the maximum curvature of the nominal path in the context region; the closest distance between the road boundary and the autonomous vehicle in the context region on the left/right; the average distance between the road boundary and the autonomous vehicle in the context region on the left/right; and/or an actual camera image of the future path.

“In particular, some implementations allow the vehicle computing system to divide the nominal path into multiple regions (e.g., n bins of length x) and compute statistics (e.g., associated with pedestrians, vehicles, etc.) for each region to the left and to the right of the autonomous vehicle. The vehicle computing system can, for example, create a number of bins along the nominal path and assign objects (e.g., pedestrians, vehicles, etc.) to the bins in order to calculate statistics and features. For instance, the vehicle computing system can determine the pedestrian and vehicle closest to the autonomous vehicle within a given region (e.g., within a bin). Some features, such as the autonomous vehicle features, appear only once in a scene and can be determined without binning. In certain implementations, a convolutional neural network feature extractor can also be used to obtain features.

“The vehicle computing system is capable of concatenating a plurality of features into one feature vector, which can be used as input to a machine-learned model that determines a maximum speed limit prediction and/or a minimum offset from nominal path prediction for the autonomous vehicle. For example, the vehicle computing system can generate a feature vector of cat(autonomous_vehicle_features, context_features_region_1, context_features_region_2 . . . context_features_region_n) and input this feature vector into a machine-learned model to generate a speed limit prediction. As an example, the feature vector can comprise a concatenation of one or more autonomous_vehicle_features, such as posted speed limit, distance to stop sign, distance to yellow signal, distance to red signal, distance from front of vehicle to closest queue object in scene, speed of closest queue object toward nominal path, speed of closest queue object along nominal path, acceleration of closest queue object toward nominal path, acceleration of closest queue object along nominal path, and/or the like, as well as one or more context features for the plurality of regions/bins, such as average distance to pedestrians on left, average distance to pedestrians on right, speed of closest pedestrian on left, speed of closest pedestrian on right, distance to nominal path of closest pedestrian on left, distance to nominal path of closest pedestrian on right, count of pedestrians to left in bin 1, count of pedestrians to left in bin 2, count of pedestrians to left in bin 3, count of pedestrians to left in bin 4, count of pedestrians to right in bin 1, count of pedestrians to right in bin 2, count of pedestrians to right in bin 3, count of pedestrians to right in bin 4, and/or the like, as well as one or more context features for the plurality of regions/bins, such as average distance to other vehicles on left, average distance to other vehicles on right, speed of closest other vehicle on left, speed of closest other vehicle on right, distance to nominal path of closest other vehicle on left, distance to nominal path of closest other vehicle on right, count of other vehicles to left in bin 1, count of other vehicles to left in bin 2, count of other vehicles to left in bin 3, count of other vehicles to left in bin 4, count of other vehicles to right in bin 1, count of other vehicles to right in bin 2, count of other vehicles to right in bin 3, count of other vehicles to right in bin 4, minimum gap for objects in context region, maximum curvature along nominal path in context region, closest distance between road boundary and vehicle in context region on left, average distance between road boundary and vehicle in context region on left, closest distance between road boundary and vehicle in context region on right, average distance between road boundary and vehicle in context region on right, and/or the like. The feature vector comprising a concatenation of the plurality of autonomous_vehicle_features and context features can then be provided for use in determining a maximum speed limit prediction and/or a minimum offset from nominal path prediction for the autonomous vehicle.”

“In particular, the vehicle computing system can calculate a maximum speed limit for the autonomous vehicle and/or a nominal path offset for it based at least in part on the features. To do so, the vehicle computing system may include, use, or otherwise leverage a model, such as a machine-learned model. The machine-learned model can, for instance, include one or more neural networks (e.g., deep neural networks) or other multi-layer non-linear models, such as feed-forward neural networks, convolutional neural networks, and recurrent neural networks.

“For example, labeled driving log data can be used to train the model to determine the maximum speed limit prediction based at least partially on features associated with the context regions and the current position of the autonomous vehicle. The vehicle computing system can input data indicative of at least the features (e.g., a feature vector) into the machine-learned model and receive, as an output, data indicative of a maximum speed limit. Alternatively, the vehicle computing system can input data indicative of at least the features (e.g., a feature vector) into the machine-learned model and receive, as an output, data indicative of a nominal path offset.”

“In some cases, the machine-learned model can be structured as a regression problem, in which the output of the machine-learned model is an exact speed limit value. Alternatively, the machine-learned model can be structured as a classification problem over a range (e.g., 0-25 mph) plus a token for responses that fall outside the range. For example, the speed limit range can be broken down into bins 2.5 mph in width, with speed labels assigned to the bins, and a special label used when the labeler specifies that caution is not necessary (e.g., the context does not warrant reducing speed). If the probability of “no caution” is greater than a threshold (e.g., indicating that there is no reason to slow down based on the scene), then the posted speed limit can be used. Alternatively, if the probability of “no caution” is below the threshold, then the probability distribution over the speed limit bins (e.g., the 2.5 mph increments) can be used to determine a maximum speed limit. Some implementations determine the maximum speed limit using the median of the probability distribution across the bins.
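
The classification-style decoding described above might look roughly like the following sketch, where a “no caution” probability is checked against a threshold and, failing that, the median of the distribution over 2.5 mph bins is taken as the maximum speed limit; the threshold value and bin layout are assumptions for illustration.

```python
NO_CAUTION_THRESHOLD = 0.5  # assumed threshold
BIN_WIDTH_MPH = 2.5         # bins covering the 0-25 mph range

def decode_speed_limit(no_caution_prob, bin_probs, posted_limit_mph):
    if no_caution_prob > NO_CAUTION_THRESHOLD:
        return posted_limit_mph  # nothing in the scene warrants slowing down
    # Otherwise take the median of the probability distribution over bins.
    cumulative = 0.0
    for i, p in enumerate(bin_probs):
        cumulative += p
        if cumulative >= 0.5:
            return (i + 0.5) * BIN_WIDTH_MPH  # bin center as the speed value
    return posted_limit_mph

probs = [0.0, 0.05, 0.1, 0.25, 0.3, 0.2, 0.1, 0.0, 0.0, 0.0]
print(decode_speed_limit(no_caution_prob=0.2, bin_probs=probs,
                         posted_limit_mph=25.0))  # -> 11.25 mph
```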

“More specifically, the machine-learned model can be trained with ground truth labels (e.g., speed labels based upon particular contexts/situations) from one or more sources, so that the machine-learned model can “learn” an appropriate speed at which the autonomous vehicle should be driven given the situation. Some implementations use continuous labels in the training data, rather than discrete ones, to improve what the machine-learned model can learn.

Driving logs can be used to generate model training data in certain implementations. For example, data can be captured when a vehicle changes its speed and used to create driving sequences for labeling and training a machine-learned model. As another example, event data can be used to create driving sequences for training a machine-learned model. As a further example, manually driven vehicle data can capture vehicle speeds, such as when the vehicle is driven below a posted speed limit with or without other vehicles in front, and can then be used to create driving sequences for training. Additionally, driving data such as GPS and image data can be obtained from vehicles within a service fleet (e.g., a rideshare fleet) and analyzed and labeled to create training data sets. In some cases, training data can also be generated from driving simulations, such as using a test track or simulating real-world scenarios that are more difficult to drive with an autonomous vehicle.

“Additionally, some implementations can obtain ground truth label training data from a labeling team (e.g., human labelers) who review various events/situations from driving logs and provide indications of a maximum speed and/or a target offset (from a nominal path) for each increment of time. Labelers may be given short clips of driving activity (e.g., video snippets and driving data) to review, and can then indicate the speed at which the vehicle could travel given the situation. In some cases, human labelers may need to ignore or filter out certain information from the snippet in order to make these determinations, such as the speed of a slower vehicle in front, traffic lights, etc. The human labeler can assign a speed label to each time step in the snippet, and in some cases may indicate that caution is not necessary (e.g., the autonomous vehicle can drive at the posted speed limit for a given time step). For example, when reviewing a snippet, the human labeler can indicate the speed at which a passenger would be comfortable having the autonomous vehicle driven in the given scenario. Multiple labelers may review each sequence clip and determine appropriate speeds, allowing the different determinations to be averaged when creating the ground truth label data.

“In some cases, absolute value label extraction can be used to generate labeled training data. A labeler might review a brief log snippet (e.g., a one-minute snippet) that contains a speed context awareness scenario/situation. The log can be divided into smaller time intervals (e.g., time steps), and the labeler can provide speed labels for each time increment. As reference points, the labeler can be provided with the autonomous vehicle’s actual driving speed and any offset from the nominal path in each region.

“In some cases, labeled training data can be generated with a feedback mechanism such as smart playback (e.g., video playback speed modification). A labeler might review a log snippet that includes a speed context awareness scenario and provide speed labels for increments in the snippet. As feedback, the labeler can then be given playback of the log snippet at different playback speeds based on the speed labels, simulating what driving at the labeled speed would look like. The labeler can adjust the playback speed until the driving events appear to proceed at an appropriate speed; changing the playback speed gives the appearance that the vehicle is moving at an altered speed. For instance, if a vehicle in front is traveling at 10 mph and the labeler believes that 20 mph is possible, the playback rate could be increased to allow the labeler to judge whether the labeled speed is correct.
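
The smart-playback feedback described above reduces to a simple ratio: playing the snippet back at (labeled speed)/(actual speed) makes the scene appear to unfold at the labeled speed. A minimal sketch, with hypothetical function and parameter names:

```python
def playback_rate(labeled_speed_mph: float, actual_speed_mph: float) -> float:
    """E.g., a lead vehicle at 10 mph labeled as 20 mph plays at 2x speed."""
    if actual_speed_mph <= 0:
        raise ValueError("actual speed must be positive")
    return labeled_speed_mph / actual_speed_mph

print(playback_rate(labeled_speed_mph=20.0, actual_speed_mph=10.0))  # -> 2.0
```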

In some instances, referencing can be included when generating labeled training data. A labeler might review a brief log snippet that includes a speed context awareness scenario, with the log snippet broken down into smaller segments and the labeler providing a distinct label for each step. A set of label categories can be created, such as extreme caution, moderate caution, light caution, or no caution. For each category, the labeler can be provided with reference videos showing situations that a human driver drove through, highlighting the type of situation and the appropriate speed associated with that category. A similar scheme of reference categories could also be used to determine nominal lane offset labels.

“As another example, in some implementations ground truth label training data can be obtained based on operator data. An operator may modify the speed and/or in-lane offset settings of an autonomous vehicle while it is operating; for example, operators can set targets (e.g., a speed target, an offset target) and then clear them when they are no longer needed. These overrides can be identified from driving data logs, for example by tracking override events and determining the start and end times of each override. The override situations can then be labeled and included in the training data sets to aid in training the machine-learned model. In some cases, the override situations can be filtered to remove inappropriate/unnecessary uses of overrides (e.g., an operator using modifications for reasons other than speed context awareness).

“As another example, in certain implementations ground truth label training data can be obtained by labeling human driving logs. In some cases, driving data logs can be obtained from human drivers who have been trained to drive well (e.g., as if a rider were in the vehicle). The data logs can be checked for validity (e.g., the driver operating the vehicle as expected) and then labeled to identify areas where the human driver may have been restricting vehicle speed based on speed limit context awareness scenarios. The human driving data logs can be filtered, for example, to remove driving speeds that were reduced due to other, unrelated context awareness scenarios. These labeled scenarios can then be added to the training data sets for the machine-learned model. Some implementations also label human driving logs generated using simulated driving scenarios, such as on a test track or in real street environments.

“As another example, ground truth label training data could be generated by analyzing driver behavior in the environment around the autonomous vehicle. Sensors such as radar, lidar, and image capture devices can capture data about how other vehicles behave in specific situations, and this data can be analyzed to find context awareness scenarios that can be added to the training data.

“In some cases, the vehicle computing system can set a speed limit and/or an offset relative to the nominal vehicle path. For example, a machine-learned model can determine the speed limit based at least partially on the features (e.g., the autonomous vehicle features and the context features). Alternatively or additionally, the machine-learned model can determine a target offset from the autonomous vehicle’s nominal path based at least partially on the features. The model output could indicate, for example, that the autonomous vehicle should travel at a speed below the posted speed limit based on its context; for instance, the model output can indicate that the autonomous vehicle should slow down (e.g., without being controlled by an operator) in certain situations, such as when there are many pedestrians or parked cars. Alternatively, the model output may indicate that the autonomous vehicle should shift over a certain amount within the current travel lane or change travel lanes based on its context.

“In particular, the machine-learned model can output speed limit determinations in different scenarios and thereby improve the safe driving behavior of the autonomous vehicle. The vehicle computing system can, for example, determine whether a squeeze maneuver, an occlusion interaction, and/or a context interaction might require the autonomous vehicle to reduce its speed and/or implement an offset from the nominal vehicle path.

“A squeeze maneuver, for example, may require the autonomous vehicle to travel within a small area of free space created in the scene by other objects (e.g., pedestrians or other vehicles) and/or properties of the roadway. The vehicle computing system can determine that the autonomous vehicle’s speed should be restricted as a function of the region’s size (e.g., the squeeze gap size). The vehicle’s speed may also be restricted based on the types of objects (e.g., pedestrians versus vehicles) that form the border of the narrow area and their expected movements.

“Another example is an occlusion interaction, in which the autonomous vehicle must reduce speed or move over because it is near a space that is not visible to the vehicle, for example, when a bus stops at a crosswalk. As the autonomous vehicle approaches such an occluded area, it may be necessary for the vehicle to reduce its speed and/or move over in order to avoid unseen objects such as pedestrians crossing the intersection. Occlusion interactions can also occur, for example, when the autonomous vehicle makes a right turn where the travel lane is occluded, or when turning left at a stop sign where the vehicle must travel through an area that is blocked from view by traffic.

“Another example is a context interaction, which requires the autonomous vehicle to travel through a complex area, such as an area where many pedestrians travel close to parked cars for a significant distance, and where an appropriate vehicle response cannot be determined from any individual actor, gap, or occlusion in the region.

“More specifically, in certain embodiments the model may output a maximum speed limit value, which can then be used in motion planning by other components of the vehicle computing system. The maximum speed limit value can be used, for example, in a cost function in the motion planning system to provide a modified speed limit to be applied in the future (e.g., one second in the future). The autonomous vehicle’s operation can then be controlled (e.g., using one or more vehicle controls) so that the vehicle is at or below the speed limit at the future moment (e.g., in one second). Alternatively, the vehicle computing system can predict the speed limit for one or several segments of the route based at least in part on the model output. In some cases, one or more parameters can be used to limit the speed limit changes applied by the autonomous vehicle, such as rules limiting lateral jerk, lateral acceleration, and/or the like.
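
One plausible way to fold the model's maximum speed limit into a motion planning cost function is a soft penalty on candidate trajectory speeds that exceed the derived limit, as sketched below; the quadratic form and weight are illustrative assumptions, not the disclosure's actual cost model.

```python
PENALTY_WEIGHT = 10.0  # assumed weight for the speed-limit cost term

def speed_limit_cost(trajectory_speeds_mps, derived_limit_mps):
    """Penalize any trajectory point that exceeds the derived speed limit."""
    return PENALTY_WEIGHT * sum(
        max(0.0, v - derived_limit_mps) ** 2 for v in trajectory_speeds_mps)

# Only the 7.5 m/s point exceeds the derived 6.7 m/s limit and is penalized.
print(speed_limit_cost([5.0, 6.0, 7.5], derived_limit_mps=6.7))
```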

“In some cases, vehicle-to-vehicle communication may be used to improve the speed limit determination, for example by providing previews of upcoming route segments. A first vehicle may provide information about the route segment it is currently traveling to a routing system, and the routing system can then provide information about that route segment to other vehicles approaching the segment. This information can be used in determining maximum speed limit values and/or nominal path offset values; for instance, an autonomous vehicle receiving the information may use it to determine appropriate speed limits for the segment or to determine whether a lane change would be appropriate for the segment.

The systems and methods described in this document provide a variety of technical benefits and effects. The vehicle computing system can detect and evaluate the environment around the autonomous vehicle and adjust the speed and lane position of the autonomous vehicle according to the context. The vehicle computing system can perform such operations onboard the vehicle, avoiding latency issues that arise when communicating with remote computing systems, and can perform iterative speed optimization as the vehicle moves along the nominal route. The vehicle computing system can thus proactively control the vehicle’s speed to minimize sudden changes and improve driving safety.

“The systems and methods described herein may also improve the vehicle computing technology responsible for autonomous vehicle operation. For example, aspects of the present disclosure allow a vehicle computing system to control an autonomous vehicle’s movement more efficiently and precisely by enabling smoother adjustment of travel speeds based upon analysis of context features along the nominal path. The systems and methods described herein can also be simpler and less expensive than alternative solutions; for example, it is not necessary to generate predictions for interactions with each object in a scene when such interactions have a low probability of occurring.

“With reference now to the figures, example embodiments of this disclosure will be discussed in further detail. FIG. 1 shows a block diagram of an example system 100 for controlling the navigation of a vehicle 102 according to example embodiments of the present disclosure. The autonomous vehicle 102 can sense its surroundings and navigate with minimal to no human input. The autonomous vehicle 102 can be a ground-based autonomous vehicle (e.g., car, truck, bus, etc.), an air-based autonomous vehicle (e.g., drone, helicopter, or other aircraft), or another type of vehicle (e.g., watercraft). The autonomous vehicle 102 may be configured to operate in either a fully autonomous operational mode or a semi-autonomous operational mode. A fully autonomous (e.g., self-driving) operational mode can be one in which the autonomous vehicle is capable of driving and navigating with little or no human interaction. A semi-autonomous (e.g., driver-assisted) operational mode can be one in which the autonomous vehicle operates with some interaction from a human driver.

“The autonomous vehicle 102 can include one or more sensors 104, a vehicle computing system 106, and one or more vehicle controls 108. The vehicle computing system 106 can assist in controlling the autonomous vehicle 102. In particular, the vehicle computing system 106 can receive sensor data from the one or more sensors 104, attempt to understand the surrounding environment by performing various processing techniques on the sensor data, and generate an appropriate motion path through the environment. The vehicle computing system 106 can control the one or more vehicle controls 108 to operate the autonomous vehicle 102 according to the motion path.

“The vehicle computing system 106 can include one or more processors 130 and at least one memory 132. The one or more processors 130 can be any suitable processing device (e.g., processor core, microprocessor, ASIC, FPGA, controller, microcontroller, etc.) and can be one processor or multiple processors that are operatively connected. The memory 132 can include one or more non-transitory computer-readable storage media, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, and combinations thereof. The memory 132 can store data 134 and instructions 136 which are executed by the processor 130 to cause the vehicle computing system 106 to perform operations. In some implementations, the one or more processors 130 and the at least one memory 132 may be included in one or more computing devices, such as computing device(s) 129, within the vehicle computing system 106.

“In some implementations, the vehicle computing system 106 can further be connected to, or include, a positioning system 120. The positioning system 120 can determine the current geographical location of the autonomous vehicle 102 and can be any device or circuitry for analyzing the position of the autonomous vehicle. For example, the positioning system 120 can determine position by using a satellite navigation system (e.g., a GPS system, a Galileo positioning system, the GLObal NAvigation Satellite System (GLONASS), the BeiDou Satellite Navigation and Positioning System), an inertial navigation system, a dead-reckoning system, and/or techniques based on IP address, triangulation, and/or proximity to cellular towers and WiFi hotspots, and/or other suitable techniques for determining position. The position of the autonomous vehicle 102 can then be used by the vehicle computing system 106.

“As illustrated in FIG. 1, the vehicle computing system 106 can include a perception system 110, a prediction system 112, and a motion planning system 114 that cooperate to perceive the surrounding environment of the autonomous vehicle 102 and determine a motion plan for controlling its motion. In some implementations, the vehicle computing system 106 can also include a feature extractor/concatenator 122 and a speed limit context awareness machine-learned model 124 that can provide data to assist in determining the motion plan for controlling the motion of the autonomous vehicle 102.”

“In some implementations, the perception system 110 can receive sensor data from the one or more sensors 104 that are coupled to or otherwise included within the autonomous vehicle 102. As examples, the one or more sensors 104 can include a Light Detection and Ranging (LIDAR) system, a Radio Detection and Ranging (RADAR) system, one or more cameras (e.g., visible spectrum cameras, infrared cameras, etc.), and/or other sensors. The sensor data can include information that describes the location of objects within the surrounding environment of the autonomous vehicle 102.

As one example, for a LIDAR system, the sensor data can include the location (e.g., in three-dimensional space relative to the LIDAR system) of a number of points that correspond to objects that have reflected a ranging laser. A LIDAR system can measure distances by measuring the Time of Flight (TOF) that it takes a laser pulse to travel from the sensor to an object and back, and the distance can then be calculated from the known speed of light.
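To make the round-trip arithmetic concrete, here is a minimal sketch of the TOF range calculation; the constant and function names are illustrative and not taken from the patent.

```python
# Minimal sketch of LIDAR time-of-flight ranging (illustrative names).
SPEED_OF_LIGHT_M_S = 299_792_458.0  # metres per second

def lidar_range_from_tof(tof_seconds: float) -> float:
    """Return the sensor-to-object distance for a round-trip laser pulse.

    The pulse travels to the object and back, so the one-way distance is
    half of the total path length.
    """
    return SPEED_OF_LIGHT_M_S * tof_seconds / 2.0

# Example: a pulse returning after ~200 nanoseconds implies an object ~30 m away.
print(lidar_range_from_tof(200e-9))  # ~29.98 m
```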

“As another example, for a RADAR system, the sensor data can include the location (e.g., in three-dimensional space relative to the RADAR system) of a number of points that correspond to objects that have reflected a ranging radio wave. Radio waves, whether continuous or pulsed, transmitted by the RADAR system can reflect off objects and return information about their location and speed. Thus, a RADAR system can provide useful information about the current speed of an object.

“As yet another example, for one or more cameras, various processing techniques (e.g., range imaging techniques such as, for example, structure from motion, structured light, stereo triangulation, and/or other techniques) can be performed to identify the location (e.g., in three-dimensional space relative to the one or more cameras) of a number of points that correspond to objects depicted in imagery captured by the one or more cameras. Other sensor systems can identify the location of points that correspond to objects as well.

“Thus, the one or more sensors 104 can be used to collect sensor data that includes information that describes the location (e.g., in three-dimensional space relative to the autonomous vehicle 102) of points that correspond to objects within the surrounding environment of the autonomous vehicle 102.”

“In addition to the sensor data, the perception system 110 can retrieve or otherwise obtain map data 118 that provides detailed information about the surrounding environment of the autonomous vehicle 102. The map data 118 can provide information regarding: the identity and location of different travelways (e.g., roadways), road segments, buildings, or other items or objects (e.g., lampposts, crosswalks, etc.); the location and directions of traffic lanes (e.g., the location and direction of a parking lane, a turning lane, a bicycle lane, or other lanes within a particular roadway or other travelway); traffic control data (e.g., the location and instructions of signage, traffic lights, or other traffic control devices); and/or any other map data that assists the vehicle computing system 106 in comprehending and perceiving its surrounding environment and its relationship thereto.

“The perception system 110 can identify one or more objects that are proximate to the autonomous vehicle 102 based on sensor data received from the one or more sensors 104 and/or the map data 118. In particular, in some implementations, the perception system 110 can determine, for each object, state data that describes a current state of the object. As examples, the state data for each object can describe the object's: current location (also referred to as position); current speed (also referred to as velocity); current acceleration; current heading; current orientation; size/footprint (e.g., as represented by a bounding shape such as a bounding polygon or polyhedron); type (e.g., vehicle versus pedestrian versus bicycle versus other); yaw rate; and/or other state information.
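As a rough illustration of how such per-object state data could be carried through a software pipeline, the following container mirrors the fields listed above. The class and field names are assumptions for illustration, not taken from the patent.

```python
# Hypothetical container for per-object state data (illustrative names).
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ObjectState:
    position: Tuple[float, float]      # current location (x, y) in metres
    velocity: Tuple[float, float]      # current speed vector in m/s
    acceleration: Tuple[float, float]  # current acceleration in m/s^2
    heading: float                     # current heading in radians
    footprint: Tuple[float, float]     # bounding-box size (length, width) in metres
    object_type: str                   # e.g. "vehicle", "pedestrian", "bicycle"
    yaw_rate: float                    # radians per second
```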

“In some implementations, the perception system 110 can determine the state data for each object over a number of iterations. In particular, the perception system 110 can update the state data for each object at each iteration. Thus, the perception system 110 can detect and track objects, such as pedestrians, bicycles, and vehicles, that are proximate to the autonomous vehicle 102 over time.

“The prediction system 112 can receive the state data from the perception system 110 and predict one or more future locations for each object based on such state data. For example, the prediction system 112 can predict where each object will be located within the next 5 seconds, 10 seconds, 20 seconds, etc. As one example, an object can be predicted to adhere to its current trajectory according to its current speed. As another example, other, more sophisticated prediction techniques or modeling can be used.
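The simple constant-trajectory case mentioned above can be sketched as a constant-velocity extrapolation. This is only one of the techniques the text allows for, and the function name and interface are assumptions.

```python
# Minimal constant-velocity predictor: the object is assumed to keep its
# current trajectory and speed (a sketch, not the patent's prediction system).
from typing import List, Tuple

def predict_future_positions(
    position: Tuple[float, float],
    velocity: Tuple[float, float],
    horizons_s: List[float],
) -> List[Tuple[float, float]]:
    """Extrapolate an object's position at each future horizon (in seconds)."""
    x, y = position
    vx, vy = velocity
    return [(x + vx * t, y + vy * t) for t in horizons_s]

# Example: an object at (10, 0) moving 2 m/s along x, predicted at 5/10/20 s.
print(predict_future_positions((10.0, 0.0), (2.0, 0.0), [5.0, 10.0, 20.0]))
```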

“The motion planning system 114 can determine a motion plan for the autonomous vehicle 102 based at least in part on the predicted one or more future locations for the objects provided by the prediction system 112 and/or the state data provided by the perception system 110. Stated differently, given information about the current and predicted future locations of objects, the motion planning system 114 can determine a motion plan that best navigates the autonomous vehicle 102 relative to such objects.

“As one example, the motion planning system 114 can determine a cost function for each of one or more candidate motion plans for the autonomous vehicle 102 based at least in part on the current locations and/or predicted future locations of the objects. For example, the cost function can describe a cost (e.g., over time) of adhering to a particular candidate motion plan. The cost described by a cost function can increase, for instance, when the autonomous vehicle 102 approaches another object and/or deviates from a preferred pathway (e.g., a pre-approved pathway).

“Thus, given information about the current and/or predicted future locations of objects, the motion planning system 114 can determine a cost of adhering to a particular candidate pathway. The motion planning system 114 can select or determine a motion plan for the autonomous vehicle 102 based at least in part on the cost function(s). For example, the candidate motion plan that minimizes the cost function can be selected or otherwise determined. The motion planning system 114 can then provide the selected motion plan to a vehicle controller 116 that controls one or more vehicle controls 108 (e.g., actuators or other devices that control gas flow, acceleration, steering, braking, etc.) to execute the selected motion plan.
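The following toy sketch illustrates cost-based plan selection as described above. The specific cost terms (proximity to objects, deviation from a preferred pathway) are stand-ins; the patent does not specify the actual cost function.

```python
# Toy cost-based selection among candidate motion plans (illustrative only).
from typing import List, Sequence, Tuple

Point = Tuple[float, float]

def plan_cost(plan: Sequence[Point], objects: Sequence[Point],
              preferred: Sequence[Point]) -> float:
    cost = 0.0
    for p, ref in zip(plan, preferred):
        # Penalize deviation from the preferred (pre-approved) pathway.
        cost += (p[0] - ref[0]) ** 2 + (p[1] - ref[1]) ** 2
        # Penalize proximity to objects (cost grows as distance shrinks).
        for obj in objects:
            d2 = (p[0] - obj[0]) ** 2 + (p[1] - obj[1]) ** 2
            cost += 1.0 / (d2 + 1e-6)
    return cost

def select_plan(candidates: List[Sequence[Point]], objects: Sequence[Point],
                preferred: Sequence[Point]) -> Sequence[Point]:
    # Choose the candidate motion plan that minimizes the cost function.
    return min(candidates, key=lambda c: plan_cost(c, objects, preferred))
```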

“In some implementations, the vehicle computing system 106 can include a feature extractor/concatenator 122. The feature extractor/concatenator 122 can extract features regarding the autonomous vehicle state and the surrounding environment of the autonomous vehicle for use in enabling speed limit context awareness in the motion planning. The feature extractor/concatenator 122 can receive feature data (e.g., features relative to objects in a context region around the nominal path and/or features relative to the vehicle's current position), for example, from the perception system 110, the prediction system 112, and/or the motion planning system 114, based at least in part on the object state data, map data, and/or the like. The feature extractor/concatenator 122 can divide a portion of the nominal path of an autonomous vehicle into a plurality of regions (e.g., n bins of x length) and compute statistics and features (e.g., associated with pedestrians, vehicles, road boundaries, etc.) for each region/bin. Additionally, the feature extractor/concatenator 122 can determine features associated with the autonomous vehicle position/state, which may appear a single time within a current scene and not be divided among the bins. The feature extractor/concatenator 122 can concatenate the plurality of feature data into a feature vector for use as input to a machine-learned model.”

“In some implementations, the vehicle computing system 106 can include a speed limit context awareness machine-learned model 124. The speed limit context awareness machine-learned model 124 can provide speed limit context awareness predictions, based on features regarding the autonomous vehicle state and the surrounding environment of the autonomous vehicle, that can be provided to the motion planning system 114 for use in determining/adjusting a motion plan for the autonomous vehicle 102. For example, the speed limit context awareness machine-learned model 124 can receive a feature vector as input, for example, from the feature extractor/concatenator 122. The speed limit context awareness machine-learned model 124 can determine a maximum speed limit to be applied to the autonomous vehicle 102 at a future time while it is traveling along the nominal path. The model can also predict the speed limit to be applied to each segment of the route ahead of the autonomous vehicle 102, and/or predict a target offset from the nominal path.
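One way to picture the model's interface is sketched below: a feature vector goes in, and a context response with a velocity constraint, per-segment limits, and an optional path offset comes out. The model type, the dict-shaped raw output, and all field names are assumptions for illustration; the patent does not specify a concrete API.

```python
# Hedged sketch of a context-response interface (illustrative names only).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ContextResponse:
    max_speed_limit_mps: float             # derived velocity constraint
    per_segment_limits_mps: List[float]    # predicted limit per path segment
    target_path_offset_m: Optional[float]  # predicted offset from nominal path

def run_context_model(model, feature_vector) -> ContextResponse:
    """Feed the concatenated feature vector to the machine-learned model and
    package its raw outputs as a context response for motion planning."""
    raw = model(feature_vector)  # hypothetical callable model returning a dict
    return ContextResponse(
        max_speed_limit_mps=raw["max_speed"],
        per_segment_limits_mps=raw["segment_speeds"],
        target_path_offset_m=raw.get("target_offset"),
    )
```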

“In some implementations, the feature extractor/concatenator 122 and/or the context awareness machine-learned model 124 may be included as part of the motion planning system 114 or another system within the vehicle computing system 106.”

“Each of the perception system 110, the prediction system 112, the motion planning system 114, the vehicle controller 116, the feature extractor/concatenator 122, and the speed limit context awareness machine-learned model 124 can include computer logic utilized to provide desired functionality. In some implementations, each of the perception system 110, the prediction system 112, the motion planning system 114, the vehicle controller 116, the feature extractor/concatenator 122, and the speed limit context awareness machine-learned model 124 can be implemented in hardware, firmware, and/or software controlling a general purpose processor. For example, in some implementations, each of the perception system 110, the prediction system 112, the motion planning system 114, the vehicle controller 116, the feature extractor/concatenator 122, and the speed limit context awareness machine-learned model 124 includes program files stored on a storage device, loaded into a memory, and executed by one or more processors. In other implementations, each of the perception system 110, the prediction system 112, the motion planning system 114, the vehicle controller 116, the feature extractor/concatenator 122, and the speed limit context awareness machine-learned model 124 includes one or more sets of computer-executable instructions that are stored in a tangible computer-readable storage medium such as RAM, a hard disk, or optical or magnetic media.”

“FIG. 2 depicts an example autonomous vehicle environment according to example embodiments of the present disclosure.

“An autonomous vehicle can include one or more computing devices and various subsystems that can cooperate to perceive the surrounding environment and determine a motion plan for controlling the motion of the vehicle. As illustrated in FIG. 2, an autonomous vehicle environment 200 can include a number of objects and/or roadway elements along a nominal path 204 of an autonomous vehicle 202. For example, the autonomous vehicle 202 may detect and/or classify a plurality of objects along/around the nominal path 204, including queue objects (e.g., queued vehicle 206 ahead of the autonomous vehicle 202), other vehicles (e.g., stationary vehicles 208), and/or traffic control devices (e.g., stop sign 210). The present disclosure provides systems and methods that can obtain information about the context surrounding the autonomous vehicle, including a plurality of features associated with the autonomous vehicle context. These features can include aggregate information about objects within a region around the nominal path (e.g., pedestrians, vehicles, and/or path boundaries) and/or features relative to the vehicle's current position (e.g., posted speed limit, distances to traffic control devices, distances to queued objects, and/or the like).

“In some implementations, an autonomous vehicle surrounding environment 200 (e.g., the nominal path of the autonomous vehicle and a radius around the vehicle and the path) can be divided into a plurality of segments or bins (e.g., n bins of x length). As illustrated in FIG. 2, the environment 200 can be divided into context bins of a defined length, for example, context bin 221, context bin 222, and context bin 223. Each of context bins 221, 222, and 223 can provide information about features such as objects (e.g., pedestrians, vehicles, and/or the like), path properties (e.g., nominal path geometric properties), road boundaries (e.g., distances to road/lane boundaries), and/or the like.”

“In particular, the autonomous vehicle 202 (e.g., the vehicle computing system) can identify objects within each bin and compute statistics and features for each bin. As illustrated in FIG. 2, the autonomous vehicle 202 can divide the nominal path into a plurality of bins. The autonomous vehicle 202 can identify that context bin 221 includes two stationary vehicles 208. The autonomous vehicle 202 can identify that context bin 222 includes a queued vehicle 206, two stationary vehicles 208, and four pedestrians 212. The autonomous vehicle 202 can also determine that certain objects are not within the defined radius of the vehicle 202 or the path 204, such as pedestrians 214 and two of the stationary vehicles 208.

“The autonomous vehicle 202 can compute statistics and features for these objects as well as roadway properties, such as a distance to the closest pedestrian on each side, a speed of the nearest pedestrian on each side, a distance from the nominal path of the closest vehicle on each side, a speed of the closest vehicle on each side, a maximum curvature of the nominal path in the context region, a closest distance between a road boundary and the autonomous vehicle in the context region, an average distance between the road boundary to the right of the vehicle and the nominal path in the context region, and/or the like.
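A minimal sketch of such per-bin statistics follows, assuming objects are represented as (distance-along-path, lateral-offset, speed, type) tuples. The representation and feature names are assumptions for illustration, not the patent's feature set.

```python
# Sketch of per-bin aggregate feature statistics (illustrative representation).
from typing import Dict, List, Tuple

Obj = Tuple[float, float, float, str]  # (s_along_path, lateral_offset, speed, type)

def bin_features(objects: List[Obj], bin_start: float, bin_end: float) -> Dict[str, float]:
    in_bin = [o for o in objects if bin_start <= o[0] < bin_end]
    peds = [o for o in in_bin if o[3] == "pedestrian"]
    cars = [o for o in in_bin if o[3] == "vehicle"]
    inf = float("inf")
    return {
        # Closest lateral distance to a pedestrian within this bin.
        "min_ped_lateral": min((abs(o[1]) for o in peds), default=inf),
        # Speed of the laterally nearest vehicle in this bin (0 if none).
        "nearest_vehicle_speed": min(cars, key=lambda o: abs(o[1]))[2] if cars else 0.0,
        # Simple counts can serve as aggregate features as well.
        "num_pedestrians": float(len(peds)),
        "num_vehicles": float(len(cars)),
    }
```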

“Additionally, the autonomous vehicle 202 can determine autonomous vehicle features that may occur once within the scene, such as a posted speed limit, a distance to a traffic control device, a distance from the nose of the autonomous vehicle to the nearest queue object along the nominal path, a speed of the closest queue object, an acceleration of the closest queue object, and/or the like. For example, the autonomous vehicle 202 can determine autonomous vehicle features relative to the queued vehicle 206 and/or the stop sign 210 within the autonomous vehicle environment 200.”

“FIGS. 3A-3C, FIGS. 4A-4B, and FIG. 5 illustrate some example situations in which the context around an autonomous vehicle may make it difficult or undesirable for the vehicle to operate at the posted speed limit along a nominal path. Accordingly, the systems and methods described in the present disclosure may determine a derived speed limit below the posted speed limit in such situations.

“In some situations, an autonomous vehicle may need to travel through a narrow region of the nominal path (e.g., perform a squeeze maneuver) because of objects and/or properties in and around the nominal path, and it may therefore be desirable for the vehicle to travel through that narrow region at a slower speed. FIG. 3A through FIG. 3C illustrate example context scenarios involving such squeeze maneuvers.

As illustrated in FIG. 3A, in context scenario 300A, the nominal path of an autonomous vehicle 302 may require the vehicle to travel through a narrow region of free space, for example, gap region 312 created by a moving object 304 (e.g., a moving vehicle) and one or more stationary vehicles 306. The autonomous vehicle 302 can, for example via a vehicle computing system, determine that the autonomous vehicle's speed should be restricted as a function of the size of the gap region 312. The autonomous vehicle's speed may also be restricted based on the types of objects (e.g., moving vehicle 304, parked vehicles 306) that form the boundary of the gap region 312 and their expected movements.
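A toy rule in the spirit of this scenario is sketched below: the narrower the gap and the more vulnerable the bounding objects, the lower the derived speed. The thresholds and scaling factors are invented for illustration; the patent learns this relationship with a machine-learned model rather than a fixed formula.

```python
# Toy gap-based speed restriction (invented thresholds, for illustration only).
def gap_speed_limit(posted_limit_mps: float, gap_width_m: float,
                    boundary_types: list) -> float:
    # Narrower gaps scale the speed down; a 3.5 m gap is treated as full width.
    width_factor = min(gap_width_m / 3.5, 1.0)
    # Pedestrian-bounded gaps demand a larger reduction than vehicle-bounded ones.
    type_factor = 0.5 if "pedestrian" in boundary_types else 0.8
    return posted_limit_mps * width_factor * type_factor

# Example: a 2.5 m gap bounded by a moving vehicle and pedestrians.
print(gap_speed_limit(13.4, 2.5, ["vehicle", "pedestrian"]))  # well below posted
```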

“As illustrated in FIG. 3B, in context scenario 300B, the nominal path of an autonomous vehicle 302 may require the vehicle to travel through a narrow region of free space, for example, gap region 314 created by a moving vehicle 304 and a road boundary 308 (e.g., a roadway curb, etc.). The autonomous vehicle 302 can, e.g., via a vehicle computing system, determine that the autonomous vehicle's speed should be restricted as a function of both the size of the gap region 314 and the types of objects (e.g., vehicle 304 and boundary 308) forming the gap region 314.

“As illustrated in FIG. 3C, in context scenario 300C, the nominal path of an autonomous vehicle 302 may require the vehicle to travel through a narrow region of free space, for example, gap region 316 created by a moving vehicle 304 and pedestrians 310 near the road boundary 308. The autonomous vehicle 302 can, e.g., via a vehicle computing system, determine that the autonomous vehicle's speed should be restricted as a function of the size of the gap region 316 as well as the types of objects (e.g., vehicle 304 and pedestrians 310) forming the gap region 316. For example, since the gap region 316 is bounded by a plurality of pedestrians 310, the autonomous vehicle 302 may determine that the vehicle's speed should be reduced more in scenario 300C than in scenarios 300A and 300B, where the gap region boundaries are other vehicles and/or roadway boundaries.

“As another example, some regions of a nominal path may include one or more obstructions (e.g., buildings, signs, large vehicles, parked vehicles, etc.) that can limit visibility in the region, and it may be advantageous for an autonomous vehicle to travel at a slower speed through those regions. FIG. 4A and FIG. 4B illustrate example context scenarios involving such occlusions.

“As illustrated in FIG. 4A, in context scenario 400A, an autonomous vehicle 402 may approach a traffic control device, such as stop sign 406, where the nominal path 404 requires a left turn and an occluded area 412 may occur due to other vehicles stopped at the stop sign 406. As illustrated in FIG. 4A, the occluded area 412 lies between vehicle 410 and stopped vehicle 408 along the nominal path 404 because vehicle 408 is stopped at the stop sign 406, reducing visibility for the autonomous vehicle 402 through the left turn. This may indicate that the autonomous vehicle 402 should travel at a slower speed.

“Additionally, as illustrated in FIG. 4B, in context scenario 400B, an autonomous vehicle 402 may be near an intersection, crosswalk, or the like along a nominal path 414 and may have to pass a large vehicle, such as bus 416, that is stopped at a bus stop where passengers are loading and unloading. The autonomous vehicle may have difficulty perceiving potential objects, such as pedestrians, because the bus 416 creates an occluded area 418. One or more pedestrians could be entering the crosswalk from the occluded area 418, which may indicate that the autonomous vehicle should reduce its travel speed to avoid an interaction with an object (e.g., a pedestrian) that cannot yet be perceived.

“FIG. 5 illustrates a context scenario 500 in which the nominal path of an autonomous vehicle 502 may require it to travel through a complex area, such as a busy street with narrow travel lanes, pedestrians 508, other moving vehicles 504, and/or the like. As illustrated in FIG. 5, the autonomous vehicle 502 may travel a nominal path 503 between another vehicle 504 and a plurality of parked vehicles 506. The parked vehicles 506 may create occluded areas, such as occluded areas 510, 512, and 514, along the nominal path 503. Due to the occluded areas, pedestrians 508 may be traveling near or toward the nominal path 503 without being visible to the autonomous vehicle 502. For example, a pedestrian 508 might be moving between the parked vehicles 506 in the occluded area 512 toward the nominal path 503, for instance to return to their vehicle, but may not be perceived due to the obstruction.

“FIG. 6 shows a flowchart diagram of example operations 600 for providing speed limit context awareness during autonomous vehicle operation according to example embodiments of the present disclosure. One or more portions of the operations 600 can be executed by one or more computing devices, such as the vehicle computing system 106 of FIG. 1, the computing system 1102 of FIG. 11, the computing system 1130 of FIG. 11, or the like. Moreover, one or more portions of the operations 600 can be implemented as an algorithm on the hardware components of the devices described herein (e.g., as in FIGS. 1 and 11) to, for example, provide speed limit context awareness during autonomous vehicle operation.

“At 602, one or more computing devices within a computing system can obtain a plurality of features for a scene along a nominal path of an autonomous vehicle. For example, a computing system (e.g., an autonomous vehicle computing system) can obtain information (e.g., sensor data, map data, etc.) about the environment around the autonomous vehicle and determine a plurality of features associated with that context. The computing system can, for example, obtain aggregate information about objects within a region around the nominal vehicle path (e.g., pedestrians, vehicles, and path boundaries) and/or features relative to the vehicle's current position (e.g., posted speed limit, distances to traffic control devices, distances to queued objects, and/or the like).

“At 604, the computing system can determine a context response for the autonomous vehicle based at least in part on the plurality of features (e.g., the context features and autonomous vehicle features for the scene) and a machine-learned model. In some implementations, the context response can include at least a derived speed limit for the autonomous vehicle. For example, the machine-learned model can predict a maximum speed limit for the autonomous vehicle based at least in part on a feature vector that combines the context features and the autonomous vehicle features. Additionally or alternatively, the machine-learned model can predict a target offset from the autonomous vehicle's nominal path, based at least in part on the feature vector, and this information can be included in the context response. The context response could indicate, for instance, that the autonomous vehicle should travel at a speed lower than the posted speed limit based on its context, and/or that the autonomous vehicle should adjust its lane position by a target offset to be applied at a future time.

“At 606, the computing system can provide the context response, which can include, for instance, a derived speed limit constraint (e.g., a maximum speed constraint for the autonomous vehicle), for use in determining a motion plan for the autonomous vehicle, for example, by the motion planning system 114. For example, based on the context response data, a motion plan can slow the autonomous vehicle down and/or adjust the autonomous vehicle's lane position in certain situations, such as when there are numerous pedestrians and/or vehicles parked along the street.

“FIG. 7 shows a flowchart diagram of example operations 700 for providing speed limit context awareness during autonomous vehicle operation according to example embodiments of the present disclosure. One or more portions of the operations 700 can be executed by one or more computing devices, such as the vehicle computing system 106 of FIG. 1, the computing system 1102 of FIG. 11, the computing system 1130 of FIG. 11, or the like. Moreover, one or more portions of the operations 700 can be implemented as an algorithm on the hardware components of the devices described herein (e.g., as in FIGS. 1 and 11) to, for example, provide speed limit context awareness during autonomous vehicle operation.

“At 702, one or more computing devices within a computing system can obtain a portion of a nominal path of an autonomous vehicle, for example, the nominal path within the current scene of the autonomous vehicle, such as the nominal path 204 illustrated in FIG. 2.”

“At 704, the computing system can divide the portion of the nominal path into a plurality of bins or segments. For example, the surrounding environment along the nominal path of the autonomous vehicle can be divided into multiple segments of a defined length, such as 10 m, 15 m, etc. Each segment or bin represents a context region for speed limit context awareness.
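The binning itself is straightforward arc-length bookkeeping; a minimal sketch follows, with illustrative names and values.

```python
# Minimal sketch: divide a portion of the nominal path into bins of length x.
from typing import List, Tuple

def path_bins(start_s: float, portion_length_m: float,
              bin_length_m: float) -> List[Tuple[float, float]]:
    """Return (start, end) arc-length intervals covering the path portion."""
    bins = []
    s = start_s
    while s < start_s + portion_length_m:
        end = min(s + bin_length_m, start_s + portion_length_m)
        bins.append((s, end))
        s = end
    return bins

# Example: a 100 m look-ahead split into 10 m context regions.
print(path_bins(0.0, 100.0, 10.0))
```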

“At 706, the computing system can compute context features for each bin/segment. Each segment or bin can provide information about objects (e.g., pedestrians, vehicles, etc.), path properties (e.g., nominal path geometric properties), road boundaries (e.g., distances to road/lane boundaries, etc.), and/or the like. The computing system can compute aggregate statistics and features for each bin, for example, determining the closest pedestrian and the closest vehicle relative to the autonomous vehicle within a given region (e.g., within a bin). The computing system can also determine autonomous vehicle features that are associated with the current position/state of the autonomous vehicle and are not specific to any one bin.

“At 708, the computing system can concatenate the plurality of features (e.g., context features and autonomous vehicle features) into a feature vector that can be used as input to a machine-learned model to provide autonomous vehicle speed limit context awareness. For example, the computing system can generate a feature vector of the form cat(autonomous_vehicle_features, context_features_region_1, context_features_region_2 . . . context_features_region_n). This feature vector can then be provided as an input to a machine-learned model.
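The cat(...) operation above is a plain concatenation; a sketch using NumPy follows, where the array contents are placeholders.

```python
# Sketch of the cat(...) feature-vector concatenation (placeholder values).
import numpy as np

autonomous_vehicle_features = np.array([13.4, 25.0, 8.2])  # e.g. posted limit, distances
context_features_by_region = [
    np.array([2.0, 1.1, 0.0]),  # region 1 statistics
    np.array([0.0, 3.4, 1.0]),  # region 2 statistics
    np.array([1.0, 0.6, 2.0]),  # region n statistics
]

# cat(autonomous_vehicle_features, context_features_region_1, ..., region_n)
feature_vector = np.concatenate([autonomous_vehicle_features, *context_features_by_region])
print(feature_vector.shape)  # one flat input vector for the machine-learned model
```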

“FIG. 8A shows a flowchart diagram of example operations 800A for providing speed limit context awareness during autonomous vehicle operation according to example embodiments of the present disclosure. One or more portions of the operations 800A can be executed by one or more computing devices, such as the vehicle computing system 106 of FIG. 1, the computing system 1102 of FIG. 11, the computing system 1130 of FIG. 11, or the like. Moreover, one or more portions of the operations 800A can be implemented as an algorithm on the hardware components of the devices described herein (e.g., as in FIGS. 1 and 11) to, for example, provide speed limit context awareness during autonomous vehicle operation.

“At 802, one or more computing devices within a computing system can obtain a plurality of features for a scene along a nominal path of an autonomous vehicle. For example, a computing system (e.g., an autonomous vehicle computing system) can obtain information (e.g., sensor data, map data, etc.) about the environment around the autonomous vehicle and determine a plurality of features associated with the autonomous vehicle context. The computing system can, for example, obtain aggregate information about objects within a region around the vehicle's nominal path (e.g., pedestrians, vehicles, and path boundaries) and/or features relative to the vehicle's current position (e.g., posted speed limit, distances to traffic control devices, distances to queued objects, and/or the like).

“At 804, the computing system can generate a feature vector based on the plurality of features. The computing system can, for example, concatenate the plurality of features (e.g., context features and autonomous vehicle features) into a single feature vector to be used as input to a machine-learned model that provides speed limit context awareness. For example, the computing system can generate a feature vector of the form cat(autonomous_vehicle_features, context_features_region_1, context_features_region_2 . . . context_features_region_n).”

“At 806, the computing system can provide the feature vector as input to a trained machine-learned model (e.g., a model that has been trained to predict driving speeds for regions of the vehicle's nominal path based at least in part on the obtained features) to generate machine-learned model output data for providing speed limit context awareness. The machine-learned model to which the feature vector is provided as input at 806 can correspond, for instance, to the speed limit context awareness machine-learned model 124 of FIG. 1, the machine-learned model 1110 of FIG. 11, and/or the machine-learned model 1140 of FIG. 11.”

“At 808, the computing system can receive maximum speed limit data (e.g., a prediction of a maximum speed limit for the autonomous vehicle) as the output of the machine-learned model. In some implementations, the machine-learned model can determine a speed limit for the autonomous vehicle based at least in part on the feature vector (e.g., the autonomous vehicle features and the context features). The model output could indicate, for instance, that the autonomous vehicle should travel at a speed lower than the posted speed limit based on the context, and can provide a maximum speed limit (e.g., a driving speed constraint) to be applied at a future time.
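One plausible way to apply such output as a velocity constraint is shown below: the derived limit can only tighten the posted limit, never exceed it. This combination rule is an assumption for illustration, not a formula stated in the patent.

```python
# Sketch: combine posted and model-derived limits into one velocity constraint.
def effective_speed_limit(posted_mps: float, predicted_max_mps: float) -> float:
    # The derived limit may lower, but never raise, the posted speed limit.
    return min(posted_mps, predicted_max_mps)

print(effective_speed_limit(13.4, 9.0))  # 9.0 -> slow below the posted limit
```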

“At 810, the computing system can provide the maximum speed limit data for use in determining a motion plan for the autonomous vehicle, for example, by the motion planning system 114. For example, based on the model output, a motion plan can slow the autonomous vehicle down in certain situations, such as when there are numerous pedestrians and/or parked vehicles along the street.

“FIG. 8B shows a flowchart diagram of example operations 800B for providing speed limit context awareness during autonomous vehicle operation according to example embodiments of the present disclosure. One or more portions of the operations 800B can be executed by one or more computing devices, such as the vehicle computing system 106 of FIG. 1, the computing system 1102 of FIG. 11, the computing system 1130 of FIG. 11, or the like. Moreover, one or more portions of the operations 800B can be implemented as an algorithm on the hardware components of the devices described herein (e.g., as in FIGS. 1 and 11) to, for example, provide speed limit context awareness during autonomous vehicle operation.

“At 822, one or more computing devices within a computing system can obtain a plurality of features for a scene along a nominal path of an autonomous vehicle. For example, a computing system (e.g., an autonomous vehicle computing system) can obtain information (e.g., sensor data and/or map data) about the environment around the autonomous vehicle and determine a plurality of features associated with that context. The computing system can, for example, obtain aggregate information about objects within a region around the vehicle's nominal path (e.g., pedestrians, vehicles, and path boundaries) and/or features relative to the vehicle's current position (e.g., posted speed limit, distances to traffic control devices, distances to queued objects, and/or the like).
