Invented by Kecheng XU, Yajia ZHANG, Hongyi Sun, Jiacheng Pan, Jinghao Miao, Baidu USA LLC
The Baidu USA LLC invention works as follows. A moving object is detected. One or more possible paths of the object are then determined based upon previous movement predictions, for instance using a machine learning model trained on a large number of driving statistics. A set of trajectory candidates is generated for each possible object path based on predetermined accelerations, each trajectory candidate corresponding to a predetermined acceleration. A trajectory cost is calculated for each candidate using a predetermined cost function, and the candidate with the lowest cost is chosen. The ADV is then navigated to avoid collision with the moving object based on the lowest-cost trajectories of the possible paths.
Background for Method of predicting the movement of moving objects in relation to an autonomous driving car
Vehicles operating in an autonomous mode (e.g., driverless) can relieve drivers, and especially occupants, of some driving-related duties. In autonomous mode, a vehicle can use onboard sensors to navigate to various locations, allowing it to travel with minimal human interaction or in some cases without any passengers.
Motion planning and control are crucial operations in autonomous driving. During autonomous driving, the vehicle perceives moving objects within its surrounding driving environment and predicts their movement, and motion planning is performed based on those movement predictions. Such a prediction alone, however, is not sufficient to accurately anticipate the future movement of a moving object.
The following details describe various embodiments of the disclosure, which are illustrated in the accompanying drawings. The descriptions and drawings are intended to illustrate the disclosure, not to limit it. Numerous specific details are provided in order to give a comprehensive understanding of the various embodiments of this disclosure. In some instances, however, well-known or conventional details are omitted in order to provide a concise description of the embodiments of the present disclosure.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the disclosure. The appearances of the phrase "in one embodiment" in various places in the specification do not necessarily all refer to the same embodiment.
Accordingly, in some embodiments, after the regular prediction of a moving object, a post-analysis is performed based on the current state of the autonomous driving vehicle (ADV), such as its relative position, speed, and heading direction. This analysis is used to improve or adjust the prediction of the object's movement, so that the interaction between the ADV and the moving object is taken into account.
Accordingly, in one embodiment, upon perceiving a moving object (e.g., a vehicle), one or more possible paths of the moving object are predicted or determined based on previous movement predictions, for example using a machine learning model that can be trained on a large number of driving statistics from a variety of vehicles. A set of trajectory candidates is generated for each possible path based on predetermined accelerations, each trajectory candidate corresponding to a predetermined acceleration. A trajectory cost is calculated for each candidate using a predetermined cost function, and the candidate with the lowest cost is chosen to represent the possible path of the object. A path for the ADV is then planned based on the lowest-cost trajectories of the possible object paths, so as to navigate the ADV and avoid collision with the moving object.
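The candidate-generation and selection step described above can be sketched in Python. This is a minimal illustration under stated assumptions, not the patented implementation: the acceleration set, the constant-acceleration rollout, and all names (`TrajectoryPoint`, `generate_candidate`, `best_candidate`) are assumptions for illustration.

```python
from dataclasses import dataclass

# Assumed predetermined acceleration set (m/s^2); one candidate per entry.
ACCELERATIONS = [-3.0, -2.0, -1.0, 0.0, 1.0, 2.0]

@dataclass
class TrajectoryPoint:
    t: float  # time offset along the trajectory (s)
    s: float  # distance traveled along the possible path (m)
    v: float  # speed of the object at this point (m/s)

def generate_candidate(v0, accel, horizon=3.0, dt=0.1):
    """Constant-acceleration rollout along a possible object path."""
    n = round(horizon / dt)
    points, s, v = [], 0.0, v0
    for i in range(n + 1):
        points.append(TrajectoryPoint(i * dt, s, v))
        s += max(v, 0.0) * dt
        v = max(v + accel * dt, 0.0)  # assume the object does not reverse
    return points

def best_candidate(v0, cost_fn):
    """Generate one candidate per predetermined acceleration; keep the
    candidate with the lowest cost under the given cost function."""
    candidates = [generate_candidate(v0, a) for a in ACCELERATIONS]
    return min(candidates, key=cost_fn)
```

In a full system, `cost_fn` would be the predetermined cost function described below; here any callable that maps a candidate to a number will do.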
In one embodiment, when calculating the cost of a trajectory candidate, a centripetal acceleration cost and a collision cost are calculated, and the cost of the candidate is derived from both. To calculate the centripetal acceleration cost of a candidate, a number of trajectory points are selected along the candidate trajectory, distributed evenly in time. For each trajectory point, a centripetal acceleration is determined, which can be calculated based on the speed and the curvature of the moving object at the point in time associated with that trajectory point. The centripetal acceleration cost is then calculated from the centripetal accelerations of all the trajectory points using a first cost function. To calculate the collision cost of a candidate, the relative distance between the ADV and each trajectory point is determined, and the collision cost is calculated based on those relative distances using a second cost function.
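A minimal sketch of the two cost terms follows, under assumed functional forms (the text does not give the exact formulas): a squared penalty on the centripetal acceleration a_c = v² · κ at each evenly-spaced trajectory point, and a Gaussian-shaped penalty on the distance between the ADV and each trajectory point. The weights, `sigma`, and all function names are illustrative assumptions.

```python
import math

def centripetal_cost(points, weight=1.0):
    """First cost function: penalize centripetal acceleration a_c = v^2 * kappa
    at trajectory points sampled evenly in time.
    points: list of (speed, curvature) pairs, one per trajectory point."""
    cost = 0.0
    for v, kappa in points:
        a_c = v * v * abs(kappa)
        cost += a_c * a_c  # assumed squared penalty
    return weight * cost / len(points)

def collision_cost(adv_xy, traj_xy, sigma=2.0, weight=1.0):
    """Second cost function: penalize small relative distance between the
    ADV position and each trajectory point (closer means higher cost)."""
    cost = 0.0
    for x, y in traj_xy:
        d = math.hypot(x - adv_xy[0], y - adv_xy[1])
        cost += math.exp(-d * d / (2.0 * sigma * sigma))  # assumed Gaussian penalty
    return weight * cost / len(traj_xy)

def trajectory_cost(points, adv_xy, traj_xy):
    """Total candidate cost derived from both terms."""
    return centripetal_cost(points) + collision_cost(adv_xy, traj_xy)
```

The Gaussian distance penalty is one common choice for a smooth collision cost; any monotonically decreasing function of distance would fit the description equally well.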
According to another embodiment, for each possible object path, a probability value is calculated based on the lowest-cost trajectory candidate. The probability value represents the likelihood that the moving object will follow the chosen trajectory candidate, and it is combined with a prior probability for that candidate. The above operations can be performed iteratively for each moving object perceived within a certain proximity of the ADV, and the path of the ADV is planned based on the probabilities of all the moving objects.
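One simple way to realize this probability step is to map each path's lowest trajectory cost to a likelihood, fuse it with that path's prior probability, and normalize. The `exp(-cost)` likelihood below is an assumed form chosen for illustration; the text does not specify the mapping.

```python
import math

def path_probabilities(min_costs, priors):
    """Combine a likelihood derived from each possible path's lowest
    trajectory cost with that path's prior probability, then normalize
    so the probabilities over all possible paths sum to one.
    min_costs: lowest trajectory cost per possible path.
    priors: prior probability per possible path (same order)."""
    likelihoods = [math.exp(-c) for c in min_costs]  # assumed: lower cost, higher likelihood
    posterior = [l * p for l, p in zip(likelihoods, priors)]
    total = sum(posterior)
    return [p / total for p in posterior]
```

Repeating this for every moving object near the ADV yields, per object, a probability distribution over its possible paths, which the planner can then weigh when choosing the ADV's own path.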
FIG. 1 is a block diagram illustrating an autonomous vehicle network configuration according to one embodiment. Referring to FIG. 1, network configuration 100 includes an autonomous vehicle 101 that may be communicatively coupled to one or more servers 103-104 over a network 102. Although only one autonomous vehicle is shown, multiple autonomous vehicles can be coupled to each other and/or to servers 103-104 over network 102. Network 102 can be any type of network, such as a local area network (LAN), a wide area network (WAN) such as the Internet, or a cellular or satellite network. Servers 103-104 can be any kind of server or cluster of servers, such as Web servers or cloud servers, application servers, or backend servers. Servers 103-104 can be data analytics servers, content servers, or traffic information servers.
An autonomous vehicle is a vehicle that can be configured to operate in an autonomous mode, in which it navigates through an environment with little or no input from a driver. Such a vehicle can include a sensor system having one or more sensors configured to detect information about the environment in which the vehicle operates. The vehicle, and any controllers associated with it, use the detected information to navigate through the environment. Autonomous vehicle 101 can operate in a manual mode, a full autonomous mode, or a partial autonomous mode.
In one embodiment, autonomous vehicle 101 includes, but is not limited to, a perception and planning system, a vehicle control system, a wireless communication system, a user interface system, an infotainment system, and a sensor system. Autonomous vehicle 101 can also include certain common components found in ordinary vehicles, such as an engine, wheels, a steering wheel, a transmission, etc. These components may be controlled using communication signals and/or commands, including, for example, acceleration and deceleration signals, and steering and braking commands.
The components can be communicatively coupled to each other via an interconnect, a bus, a network, or a combination thereof. For example, components 110-115 may be communicatively coupled to each other via a controller area network (CAN) bus. A CAN bus, also known as a vehicle bus, is designed to allow microcontrollers and devices to communicate with each other without a host computer. It is a message-based protocol, originally designed for multiplex electrical wiring within automobiles, but it is now used in many other contexts.
Referring to FIG. 2, in one embodiment, sensor system 115 includes, but is not limited to, one or more cameras 211, a global positioning system (GPS) unit 212, an inertial measurement unit (IMU) 213, a radar unit 214, and a light detection and range (LIDAR) unit 215. GPS unit 212 can include a transmitter that provides information about the position of the autonomous vehicle. IMU unit 213 can sense position and orientation changes of the autonomous vehicle based on inertial acceleration. Radar unit 214 may be a system that uses radio signals to sense objects within the local environment of the autonomous vehicle. In certain embodiments, in addition to sensing objects, radar unit 214 can also sense their speed and/or heading. LIDAR unit 215 can use lasers to sense objects in the environment in which the autonomous vehicle is located, and could include one or more laser sources, a laser scanner, and one or more detectors. Cameras 211 can include one or more devices that capture images of the surrounding environment; they can be still cameras and/or video cameras. A camera can be mechanically movable, for example, by mounting it on a rotating and/or tilting platform.
Sensor system 115 may further include other sensors, such as a sonar sensor, an infrared sensor, a steering sensor, a throttle sensor, a brake sensor, and an audio sensor (e.g., a microphone). An audio sensor can be configured to capture sound from the environment surrounding the vehicle. A steering sensor can be configured to sense the steering angle of a steering wheel or of the wheels of the vehicle. A throttle sensor and a braking sensor sense the throttle position and the braking position of the vehicle. In some situations, the throttle sensor and braking sensor may be integrated as a single sensor.
In one embodiment, vehicle control system 111 includes, but is not limited to, a steering unit 201, a throttle unit 202 (also referred to as an acceleration unit), and a braking unit 203. Steering unit 201 adjusts the direction or heading of the vehicle. Throttle unit 202 controls the speed of the motor or engine, which in turn controls the speed and acceleration of the vehicle. Braking unit 203 decelerates the vehicle using friction to slow its wheels. The components shown in FIG. 2 may be implemented in hardware, software, or a combination thereof.
Referring back to FIG. 1, wireless communication system 112 allows communication between autonomous vehicle 101 and external systems, such as devices, sensors, or other vehicles. For example, wireless communication system 112 can communicate wirelessly with one or more devices, either directly or via a communication network, such as with servers 103-104 over network 102. Wireless communication system 112 can use a cellular network or a wireless local area network, e.g., WiFi, to communicate with other components or systems. Wireless communication system 112 can also communicate directly with a device (e.g., a passenger's mobile phone or a display device within vehicle 101) using Bluetooth, an infrared link, etc. User interface system 113 can be implemented as part of the peripheral devices within vehicle 101, such as a keyboard, a touch screen display, a microphone, a speaker, etc.
Perception and planning system 110 can control or manage some or all of the functions of autonomous vehicle 101, particularly when it is operating in an autonomous driving mode. Perception and planning system 110 includes the necessary hardware (e.g., processors, memory, storage) and software (e.g., an operating system, planning and routing programs) to receive information from sensor system 115, control system 111, wireless communication system 112, and/or user interface system 113, process the received information, plan a route or path from a starting point to a destination, and then drive vehicle 101 based on the planning information. Alternatively, the perception and planning system may be integrated with the vehicle control system.
For example, a passenger can specify a starting location and a destination of a trip, for instance, via a user interface. Perception and planning system 110 obtains the trip-related data. For example, perception and planning system 110 may obtain location and route information from an MPOI server, which may be part of servers 103-104. The location server provides location services, and the MPOI server provides map services and the POIs of certain locations. Alternatively, such location and MPOI information may be cached locally in a persistent storage device of perception and planning system 110.
While autonomous vehicle 101 is traveling along the route, perception and planning system 110 may also obtain real-time traffic information from a traffic information system or server. Note that servers 103-104 may be operated by a third party. Alternatively, the functionalities of servers 103-104 may be integrated with perception and planning system 110. Based on the real-time traffic information, MPOI information, and location information, as well as real-time local environment data detected or sensed by sensor system 115 (e.g., obstacles, objects, nearby vehicles), perception and planning system 110 can plan an optimal route and drive vehicle 101 via control system 111 according to the planned route.
Server 103 may be a data analytics system that performs data analytics services for a variety of clients. In one embodiment, data analytics system 103 includes a data collector 121 and a machine learning engine 122. Data collector 121 collects driving statistics 123 from a variety of vehicles, either autonomous vehicles or regular vehicles driven by human drivers. Driving statistics 123 include information indicating the driving commands issued (e.g., throttle, brake, and steering commands) and the responses of the vehicles (e.g., speeds, accelerations, decelerations, directions) captured by the sensors of the vehicles at different points in time. Driving statistics 123 may further include information describing the driving environments at different points in time, such as routes (including starting and destination locations), MPOIs, road conditions, weather, etc.
Based on driving statistics 123, machine learning engine 122 generates or trains a set of rules, algorithms, and/or predictive models 124, which can be used for a variety of purposes. In one embodiment, algorithms 124 may include an algorithm to detect a moving object based on sensor data obtained from the various sensors mounted on an ADV, and to predict the movement or movement tendency of the object based on the current state of the ADV (e.g., speed, relative position). Algorithms 124 can further include a cost function or cost algorithm to calculate the centripetal acceleration costs and collision costs of trajectory candidates, in order to determine the cost of each possible path along which a moving object may move. Algorithms 124 can then be uploaded onto ADVs to be utilized in real time during autonomous driving.