Autonomous Vehicles – David Doria, Xin Chen, Coby Spolin Parker, Zichen Li, HERE Global B.V.

Abstract for “Automatic detection of occlusion in road network data”

The present invention allows for automatic detection of the severity and location of occluded areas within input data. From a data set, a grid representation of a scene is generated, which allows spaces within the grid to be classified as free, occupied, or hidden/occluded. The grid is then bounded to identify the occluded areas.

Background for “Automatic detection of occlusion in road network data”

Autonomous driving, augmented reality, and navigation applications often rely on three-dimensional (3D) object detection based on a 3D map or model of a local scene. Autonomous vehicles sense the environment around them and match it to the 3D map; this is referred to as localization. Localization relies on relevant objects, structures, and other localization objects in the vehicle environment being present in the 3D map.

Gaps in a 3D map, such as missing structures, objects, and/or other localization objects, can be detrimental to the performance of localization algorithms. Gaps in a 3D map are often false negatives: object detection algorithms produce gaps in the 3D map when input data is missing. Occlusion in the input data, such as a LIDAR shadow in light detection and ranging (LIDAR) data, is one cause of missing data. For example, when a semi-truck blocks the scanner's view, the LIDAR scanner will only capture data of the semi-truck, which results in a LIDAR occlusion and false negatives for any roadside objects behind it.

One embodiment provides a method of detecting an occlusion from point cloud data. The method involves receiving point cloud data at a server and generating, by the server, a grid representation of a region of interest, the grid representation including free, occupied, and hidden spaces. The server then identifies an occlusion in the region of interest based on the hidden space.

Another embodiment provides an apparatus for detecting an occlusion from light detection and ranging (LIDAR) data. The apparatus comprises at least one LIDAR sensor, at least one processor, and at least one memory containing computer program code for at least one program. The memory and the computer program code are configured, with the processor, to cause the apparatus to receive LIDAR data for a region of interest and to assemble the LIDAR data into point cloud data in a spatial data structure. The memory and the computer program code are also configured to cause the apparatus to generate a grid representation of the region of interest by evaluating the spatial data structure to characterize squares/cubes/voxels as free, occupied, or hidden, to detect an occlusion in the LIDAR data based on the hidden squares, and to generate a localization model based on the LIDAR data and the detected occlusion.

Another embodiment provides a non-transitory computer-readable medium including instructions that, when executed, are operable to receive sensor data of a scene, the sensor data comprising origin points and end points. A grid representing the scene is generated from the sensor data by tracing paths from the origin points to the end points to identify hidden space, occupied space, and free space. The instructions are also operable to identify objects in the scene from the occupied space and to identify an occlusion, comprising a false negative in the identified objects, based on the hidden space.

The current embodiments automatically detect the severity and location of occluded areas within input data. The detected occlusions can be used to indicate that additional data is needed to augment the input data, or to report a gap in the input data. By detecting occlusions, the resulting 3D model/map based on the input data can be made more precise and complete.

For example, from light detection and ranging (LIDAR) data, a grid representation is generated. Ray tracing (e.g., ray traversal) through the LIDAR data points is used to characterize spaces in the grid representation as hidden/occluded, occupied, or free. The grid is then bounded, such as by using a lane model for a roadway, so that only the relevant portion of the grid representation is kept (e.g., space occupancy information within the roadway). Bounding the grid in this example identifies occlusions caused by temporary objects blocking a LIDAR scanner's view of roadside objects. Connected component analysis within the lane model is performed to identify connected regions of hidden space. Finally, the resulting connected regions of hidden space are thresholded by size to indicate LIDAR occlusions.
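
For illustration only, the following minimal Python sketch (not part of the patent) outlines this pipeline under simplifying assumptions: rays are given as (origin, end point) pairs, the lane model is a Boolean mask over the grid, conflicting ray observations are resolved last-write-wins (the patent consolidates them probabilistically, as discussed later), and traverse_voxels/point_to_voxel are hypothetical helpers, a possible implementation of which appears with the FIG. 4 discussion below.

```python
import numpy as np
from scipy import ndimage

FREE, OCCUPIED, HIDDEN = 0, 1, 2

def detect_occlusions(rays, lane_mask, voxel_size=0.5, min_voxels=20):
    """Classify voxels by ray traversal, bound the grid with a lane-model
    mask, run connected component analysis on the hidden space, and keep
    regions above a size threshold as candidate occlusions."""
    grid = np.full(lane_mask.shape, HIDDEN, dtype=np.uint8)  # untraversed = hidden

    def in_bounds(v):
        return all(0 <= v[i] < grid.shape[i] for i in range(3))

    for origin, end in rays:
        for vox in traverse_voxels(origin, end, voxel_size):  # hypothetical helper
            if in_bounds(vox):
                grid[vox] = FREE             # ray passed through: no object here
        end_vox = point_to_voxel(end, voxel_size)             # hypothetical helper
        if in_bounds(end_vox):
            grid[end_vox] = OCCUPIED         # ray terminated here: object detected

    hidden = (grid == HIDDEN) & lane_mask    # bound interpretation to the roadway
    labels, n = ndimage.label(hidden)        # connected component analysis
    sizes = ndimage.sum(hidden, labels, index=range(1, n + 1))
    return [np.argwhere(labels == i + 1)     # voxel coordinates per occlusion
            for i in range(n) if sizes[i] >= min_voxels]
```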

To generate a 3D vehicle localization map/model, data collection vehicles are typically deployed with rooftop-mounted scanners/sensors, such as LIDAR/laser, Doppler, or camera sensors, for collecting 3D data to map/model objects near a roadway. Roadside objects are objects visible to vehicles on the roadway, such as light poles, signs, guard rails, and bridges. While traveling on the roadway, the data collection vehicle captures 3D data for a region of interest, usually bounded by a distance (e.g., 15 meters beyond the roadway boundary). A vehicle localization model is then generated using object detection algorithms. The vehicle localization model can be used to locate a vehicle on the roadway (e.g., an autonomous vehicle or a mobile device) by matching vehicle sensor data with the localization model. Ideally, the data collection vehicle travels alone on the roadway without any traffic or other temporary objects, giving an unobstructed view of the entire roadway. In reality, traffic and other temporary objects often obscure the view of the data collection vehicle.

FIG. 1 illustrates an example perspective camera image captured by a data collection vehicle. The image shows the view from the vehicle's perspective, with sensor data collection 402 at the left roadway boundary and sensor data collection 404 at the right roadway boundary. The image also shows a large tanker truck 406, which will cause an occlusion in the sensor data collected by the data collection vehicle (the occlusion is shown in FIG. 2, discussed below).

FIG. 2 illustrates an example occlusion in the sensor data collected by the data collection vehicle. The triangles in the left lane indicate the path of the vehicle traveling along the roadway. The sensor data collected by the data collection vehicle includes a substantially complete data set 402 for the left roadway boundary (i.e., no traffic occlusions) and a partial data set 404 for the right roadway boundary. FIG. 2 shows a gap in data set 404 due to the large tanker truck 406 blocking the view of the sensors on the data collection vehicle. Because the large tanker truck 406 obscures the view of the sensors, a 3D vehicle localization model/map created from this data set will be incomplete. Object detection algorithms may generate a false negative, indicating that there are no roadside objects in the area occluded by the tanker truck 406, which could lead to a mapmaking error.

FIG. 3 illustrates an embodiment of detecting an occlusion from the sensor data collected by the data collection vehicle. To detect occlusions, an algorithm is applied to the sensor data (e.g., LIDAR data). The sensor data is stored in a spatial data structure 502 that includes each data point collected by the data collection vehicle. Typically, the sensor data is three-dimensional, but the sensor data may be of any N dimensions (e.g., N>2).

For instance, the spatial data structure constructs a probabilistic octree from all the data points collected in the region of interest. Other spatial data structures may also be used, such as a k-d tree or a binary space partitioning (BSP) tree. A spatial data structure such as a probabilistic octree or BSP tree represents the sensor data intelligently, rather than as an unordered set of points, by organizing each sensor data point with respect to other points. For example, sensor data points may be grouped (e.g., binned) according to how close they are to other points (e.g., points within 10 cm of one another). Sensor data points may also be grouped according to their location in a grid, with each sensor data point binned according to its grid location. The sensor data points may then be searched by reference to a grid location, or relative to another grid location (e.g., all points within 10 cm of a grid location). Other methods of grouping/binning may also be used, including data-dependent methods such as dividing the total points equally or assigning weights.
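
As a rough illustration of the binning idea (a simplification, not the patent's octree), points can be hashed by grid cell so that a near-neighbor query such as "all points within 10 cm" only examines adjacent cells rather than scanning every point:

```python
import numpy as np
from collections import defaultdict

def bin_points(points, cell=0.10):
    """Bin N x 3 sensor points into a uniform grid (cell size in meters)."""
    bins = defaultdict(list)
    for p in points:
        key = tuple(np.floor(p / cell).astype(int))  # grid location of the point
        bins[key].append(p)
    return bins

def neighbors(bins, p, cell=0.10):
    """All points within one cell (~10 cm) of point p: only the 27 grid
    cells surrounding p's cell need to be checked."""
    k = np.floor(p / cell).astype(int)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                out.extend(bins.get((k[0] + dx, k[1] + dy, k[2] + dz), []))
    return [q for q in out if np.linalg.norm(q - p) <= cell]
```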

Referring to FIG. 3, an embodiment crops/clips the sensor data based on boundary conditions for a region of interest. When constructing a localization model for a highway, there are often occlusions due to traffic or other temporary objects. The localization model is created to capture stationary roadside objects such as signs, buildings, and guardrails, meaning objects located within 15 meters of the roadway boundary. The highway's boundaries may therefore define the region of interest for highway occlusions. Other regions of interest may also be defined based on different boundary conditions.

As shown at 504, the boundary conditions are applied to the sensor data. Known lane boundaries may be used to crop/clip the sensor data to within the roadway; for example, roadway boundary conditions may be taken from a highway lane model. Cropping/clipping the sensor data limits the areas in which the algorithm detects occlusions. Additional and different boundary conditions may be applied, such as for temporary objects.

A grid 504 is generated from the cropped/clipped sensor data. For example, a cropped Boolean grid is made from the data remaining after the region of interest is applied. Grid 504 shows cells with data points after cropping/clipping the sensor data 502. The right side of the grid shows blank spaces, indicating possible occlusions at those locations, such as from the tanker truck in FIG. 1.

Based on the sensor data, each grid cell/space may be classified as free, occupied, or hidden/unseen. Free spaces may be defined as areas of the roadway that are free from any permanent or temporary objects. Occupied spaces may be defined as areas of the roadway that contain a permanent or temporary object. Hidden/unseen spaces may be defined as sections of the roadway for which there is no sensor data. Additional or different grid cell/space definitions may be used; for example, each grid cell/space may be characterized as traversed or untraversed, hidden or not hidden, etc., based on the sensor data.

Applying the boundary conditions discussed above constrains the interpretation of hidden spaces. Without boundary conditions, such as a lane model, hidden spaces can extend beyond detected objects and beyond the sensor range (e.g., beyond the 50-meter range of a LIDAR scanner). The boundary conditions limit the interpretation of hidden spaces to the area of interest, for example, occlusions that prevent a data collection vehicle from capturing roadside objects.

In this example, traversing/tracing LIDAR rays through the region of interest is used (as discussed below) to characterize each cell/space in grid 504 as free, occupied, or hidden/unseen. The sensor data is provided as a point cloud, such as LIDAR point cloud data. Data points between a sensor and a detected object may be lost or ignored, so free space modeling may be used to generate additional data points for locations between the sensor and detected objects; locations between the sensor and a detected object are considered to contain no objects. Other interpretations of the sensor data may also be used to characterize the cells/spaces of grid 504.

FIG. 4 illustrates an embodiment of ray traversal of a sensor data point. FIG. 4 shows a section of a three-dimensional grid measuring two voxels in height, eight voxels in width, and one voxel in depth. A sensor ray is traced from a sensor origin 602 to an end point 604. The end point 604 corresponds to either a temporary object (e.g., an occlusion) or a permanent object (e.g., an object to include in the localization model). The ray may be traced by overlaying it on the grid from the sensor origin 602, and information gained by traversing the ray may be used to characterize grid spaces. Grid spaces between the sensor origin 602 and the end point 604, such as free space 606, are determined to be unobstructed by objects, because any object in those spaces would have blocked the ray's path to the end point 604. The grid space corresponding to the sensor origin 602 may also be considered free space, based on the assumption that an object cannot occupy the same space as the sensor. No information is known about untraversed spaces 608.
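
A possible implementation of this traversal (the hypothetical helpers assumed in the earlier pipeline sketch) is shown below. It approximates exact voxel walking by sampling the ray at half-voxel steps; a production system might instead use an exact method such as the Amanatides & Woo traversal algorithm.

```python
import numpy as np

def traverse_voxels(origin, end, voxel_size=0.5):
    """Approximate voxel traversal: sample the ray at half-voxel steps and
    report each voxel crossed, excluding the voxel containing the end point
    (which is occupied rather than free)."""
    origin, end = np.asarray(origin, float), np.asarray(end, float)
    direction = end - origin
    length = np.linalg.norm(direction)
    if length == 0:
        return []
    n_steps = int(np.ceil(length / (voxel_size / 2)))
    seen, order = set(), []
    for t in np.linspace(0.0, 1.0, n_steps + 1):
        vox = tuple(np.floor((origin + t * direction) / voxel_size).astype(int))
        if vox not in seen:            # keep each crossed voxel once, in order
            seen.add(vox)
            order.append(vox)
    end_vox = tuple(np.floor(end / voxel_size).astype(int))
    return [v for v in order if v != end_vox]

def point_to_voxel(p, voxel_size=0.5):
    """Grid coordinates of the voxel containing point p."""
    return tuple(np.floor(np.asarray(p, float) / voxel_size).astype(int))
```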

Traversing the ray thus defines free spaces (e.g., the cells traversed between the sensor origin 602 and the end point 604, and the cell containing the sensor origin 602), an occupied space (e.g., the cell containing the end point 604), and untraversed spaces 608, which remain hidden/unseen. The free spaces 606 and untraversed spaces 608 may be stored in the spatial data structure in order to generate grid 504. A probabilistic octree, for example, assigns probabilities to the free spaces 606 and untraversed spaces 608, and these probabilities may be used to determine whether a grid space in grid 504 is free, occupied, or hidden/unseen.

Probabilities may be assigned based on multiple characterizations of the same location in the sensor data. In one example, locations with at least one characterization as free or occupied are assigned a zero percent (i.e., binary 0) probability of hidden space, and all locations without data points are assigned a one hundred percent (i.e., binary 1) probability of hidden space. Alternatively, different probability levels may be assigned to different locations, depending on whether they were untraversed or had fewer data points at the location (e.g., a single-digit number of data points instead of a hundred, indicating a different order of magnitude of data). Different probability levels may be assigned depending on how conservatively the localization model defines occlusions. For example, if a threshold number of data points is set for determining probability levels (e.g., at least 100 data points), then locations traversed fewer times than the threshold are considered inconclusive and assigned a hidden space probability: a location traversed only 3 times may be assigned a 97% (i.e., 97/100) probability of hidden space, while locations never traversed are assigned a 100% probability of hidden space. Other metrics may also be used to assign hidden space probabilities.
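
One plausible reading of the threshold scheme (an assumption, since the text gives only the examples above) maps traversal counts below the threshold linearly to a hidden-space probability:

```python
def hidden_probability(traversal_count, threshold=100):
    """Hedged reading of the example: locations traversed at least `threshold`
    times are conclusively not hidden (probability 0); untraversed locations
    are hidden with probability 1; inconclusive locations in between receive
    (threshold - count) / threshold, e.g. 3 traversals -> 0.97."""
    if traversal_count >= threshold:
        return 0.0
    return (threshold - traversal_count) / threshold
```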

Referring back to FIG. 3, in an embodiment, the hidden space in grid 504 is thresholded and then designated as occluded regions. Grid squares may be used for a two-dimensional grid; grid cubes or voxels may be used for a three-dimensional grid.

At 506, two occluded regions have been labeled. An embodiment may analyze the spatial data structure to identify and label the occlusions, including information such as the source of the occlusion, its temporary status, etc. A map database (e.g., the localization model) may be updated based on the occluded regions. The two occluded regions 406, 408 labeled at 506 correspond to the tanker truck 406 shown in FIG. 1. The occluded regions are identified and labeled, for example, using artifacts of the octree, such as large hidden spaces in the region of interest. The hidden space can be used to determine that an occlusion was present during data collection.

In an embodiment, connected component analysis is used to analyze the hidden spaces. Connected component analysis is an image processing technique that grows a region from a seed point, based on a similarity metric, until a stopping criterion is met. In this embodiment, seed points within the occluded regions 406, 408 are used to identify the hidden spaces, and large sections of hidden space are identified and thresholded as disjoint sets of voxels. At 506, the occluded region 406 is identified as one connected component and the occluded region 408 as another connected component. The free space 410 may also be identified as a third component. These connected components can be used to identify occlusion sources.
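
A minimal region-growing version of this analysis might look as follows (a sketch assuming a Boolean `hidden` voxel grid; 6-connectivity and the stopping criterion "no adjacent hidden voxel remains" are simplifying choices):

```python
from collections import deque

def grow_region(hidden, seed):
    """Grow a connected region of hidden voxels outward from a seed voxel
    (6-connectivity), stopping when no adjacent hidden voxel remains."""
    region, frontier = set(), deque([tuple(seed)])
    while frontier:
        v = frontier.popleft()
        if v in region or not hidden[v]:
            continue                      # already grown, or not hidden space
        region.add(v)
        for axis in range(3):             # visit the 6 face-adjacent voxels
            for step in (-1, 1):
                n = list(v)
                n[axis] += step
                n = tuple(n)
                if all(0 <= n[i] < hidden.shape[i] for i in range(3)):
                    frontier.append(n)
    return region
```

Thresholding `len(region)` against a minimum voxel count then flags the region as a LIDAR occlusion, as described above.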

For example, shape analysis, statistical analysis, etc., may be performed on the hidden spaces to determine the source of an occlusion. Analyzing the occluded regions can help identify the source of the occlusion. The occluded regions 406, 408, for example, follow the design of a truck: the occluded region 406 corresponds to the shape of the truck's cab, and the occluded region 408 to the shape of the truck's body. Other patterns may also be identified based on the types of occlusions (e.g., walls, barriers, and construction signs). Analyzing the occluded regions 406, 408 may also classify them as traffic occlusions, permanent occlusions, or construction occlusions.
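
As an illustrative sketch only (the thresholds and categories below are hypothetical, not taken from the patent), such shape analysis could be as simple as comparing the bounding box of a connected hidden region against typical occluder dimensions:

```python
import numpy as np

def classify_occlusion(region_voxels, voxel_size=0.5):
    """Guess an occlusion source from a connected region's bounding box.
    Assumes axis order (x = along road, y = across road, z = up); all
    thresholds are hypothetical values chosen for illustration."""
    v = np.asarray(list(region_voxels))
    extent = (v.max(axis=0) - v.min(axis=0) + 1) * voxel_size  # meters
    length, width, height = extent
    if length > 10 and height > 2.5:
        return "truck body"           # long, tall block like region 408
    if 2 < length <= 10 and height > 2:
        return "truck cab / vehicle"  # shorter block like region 406
    if length > 30 and width < 1:
        return "wall or barrier"      # thin, very long region
    return "unknown"
```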

In one embodiment, the analysis of the occlusions drives a decision on whether to collect additional sensor data for an occluded location. If the source of an occlusion is temporary, such as the truck 406, then targeted remapping may be performed by collecting additional sensor data at the occluded locations 406, 408. Targeted remapping may not be worthwhile if the source of the occlusion is a permanent object, such as a wall. After analyzing the occlusion, one or more actions may be taken, including incorporating supplemental data and/or updating the map database to account for the false negatives (e.g., instructions to proceed with caution, or reporting that there is no data for the location).

The present invention can significantly improve mapmaking by automatically detecting sensor data occlusions. Detecting occlusions allows false negatives to be reduced or eliminated, improving the accuracy and completeness of the final map. By efficiently detecting and analyzing occlusions, additional data can be integrated into the mapmaking process, including targeted remapping, the incorporation of supplemental data, and/or labeling regions of the map as potential false negative areas.

The present invention can also significantly improve activities that rely on a generated map for localization, such as autonomous driving and navigation. A map made more accurate by detecting occlusions can be used by autonomous vehicles, navigation devices, and/or augmented reality devices to localize vehicles and/or devices, increasing safety and accuracy and providing a better user experience. An autonomous vehicle or mobile device may also detect occlusions in real time, allowing it to distinguish temporary from permanent objects in a scene, localize more accurately, better match sensed objects to the map, and provide a better user interface. Augmented reality applications can likewise be enhanced by the better scene understanding that occlusion detection provides.

FIG. 5 illustrates an example flowchart for detecting an occlusion from point cloud data. The acts may be performed by the system of FIGS. 6-10, discussed below, and/or another system. Referring to FIG. 6 (discussed below), the method may be performed by the server 125 and/or the developer system 121. The point cloud data may be collected by an autonomous vehicle or another mobile device, for example, a mobile device 122 with a probe 131. Additional, different, or fewer acts may be provided. The method is shown in one order, but other orders may be provided and/or acts may be performed in parallel.

Point cloud data may be used to detect occlusions for mapmaking. In an embodiment, mapmaking is performed in two stages or layers. For example, a lane model is generated to identify and map the painted lines along the roadway, and a localization model is generated to identify roadside objects. Occlusions may be detected at either stage of mapmaking, with a different region of interest defined for each stage.

Referring to FIG. 5, at act 701, point cloud data is captured or received. The point cloud data may be captured by a data collection vehicle (e.g., a mobile device 122) and transferred to a cloud-based server 125 via a network 127, and the server 125 may store the point cloud data in a database 123. Referring to FIG. 7, the data collection vehicle may employ a distance detector 209, such as a LIDAR scanner. In an embodiment, the data collection vehicle has a plurality of lasers and receivers deployed in different directions; for example, there may be thirty-two lasers tilted at different directions and angles. Alternatively, the distance detector 209 may be one or more cameras or other sensors that capture point cloud data. The server 125 stores the point cloud data in a spatial data structure, such as a probabilistic octree. An embodiment consolidates the complex geometries of the LIDAR sensor data (i.e., caused by vehicle movement and spinning sensors) into an octree representation.

Referring back to FIG. 5, at act 703, a grid representation is generated by the server 125 from the point cloud data stored in the database 123. The region of interest is based on boundary conditions specific to the scene. For example, a traffic lane model of the roadway, stored in the database 123 of the server 125, is used when generating a localization model. In an embodiment, the grid is created by using the point cloud data to identify free, occupied, and hidden spaces. The free spaces correspond to locations between each sensor and an object location identified by the sensor; the occupied spaces correspond to the object locations identified by the sensor; and the hidden spaces correspond to locations without any sensor data. The hidden, occupied, and free spaces are identified by analyzing the point cloud data using the probabilistic octree representation. For example, three LIDAR rays may provide inconsistent data for the same location. The probabilistic octree consolidates the data and assigns an occupancy probability to each location. If the point cloud data contains three points for the same location, one characterized as free and two as occupied, the probabilistic octree assigns the location a probability of occupancy of sixty-six percent. Because the probability of being occupied exceeds the probability of being free, the location may be characterized as occupied. The probabilities may be used to generate the grid representation of each location; for example, threshold probabilities may be applied to each space to determine its character.
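
The consolidation step can be pictured with a small stand-in for the probabilistic octree (a sketch only; the actual structure is hierarchical and more involved):

```python
from collections import defaultdict

class OccupancyGrid:
    """Minimal stand-in for the probabilistic octree's consolidation step:
    count free and occupied observations per location and derive an
    occupancy probability, e.g. 1 free + 2 occupied -> 2/3 (~66%)."""
    def __init__(self):
        self.free = defaultdict(int)
        self.occupied = defaultdict(int)

    def observe(self, voxel, is_occupied):
        (self.occupied if is_occupied else self.free)[voxel] += 1

    def occupancy(self, voxel):
        total = self.free[voxel] + self.occupied[voxel]
        return None if total == 0 else self.occupied[voxel] / total  # None = hidden

    def character(self, voxel, threshold=0.5):
        p = self.occupancy(voxel)
        if p is None:
            return "hidden"
        return "occupied" if p >= threshold else "free"
```

For the three-point example above, `observe` is called once with `is_occupied=False` and twice with `is_occupied=True`, so `occupancy` returns 2/3 (about sixty-six percent) and the location is characterized as occupied.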

At act 705, an occlusion is identified in the region of interest. Based on the hidden spaces of the grid representation, the developer system 121 of the server 125 determines the location of the occlusion and, in some embodiments, also the severity of the occlusion. The severity of an occlusion may include its size or completeness. A partial occlusion may be caused by a small temporary object, such as a small vehicle, near the data collection vehicle. A partial occlusion may not completely obscure the view of roadside objects, but it may reduce the sampling rate of the data collection vehicle (e.g., no full gaps in the sensor data, but a lower sampling rate because some rays hit the small vehicle while others reach the roadside objects). A full occlusion may be caused, for example, by a tanker truck 406 whose size and shape completely block the roadside objects. There may also be intermediate levels of occlusion.
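
A simple severity measure consistent with this description (one possibility, not prescribed by the patent) is the hidden fraction of the bounded region of interest:

```python
import numpy as np

HIDDEN = 2  # same voxel label as in the earlier pipeline sketch

def occlusion_severity(grid, lane_mask):
    """Fraction of the bounded region of interest that is hidden space.
    Values near 1 suggest a full occlusion (e.g., the tanker truck 406);
    small nonzero values suggest a partial occlusion / reduced sampling."""
    roi = lane_mask.sum()
    if roi == 0:
        return 0.0
    return float(((grid == HIDDEN) & lane_mask).sum()) / roi
```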

At act 707, the occlusion is classified by the developer system 121 based on the shape of the hidden spaces. The shape of the hidden space is determined by connected component analysis of the hidden spaces, and the occlusion is thereby classified as either temporary or permanent. The source of the occlusion may also be determined.

At act 709, a request for supplemental point cloud data at the location of the occlusion is generated by the developer system 121 of the server 125 after the occlusion is identified as temporary. Alternatively, or in addition, at act 711, the map database 123 of the server 125 is updated based on the occlusion data and/or the supplemental point cloud data.

At act 713, the updated map database 123 is provided to an autonomous vehicle's map database 133 for localizing the vehicle within a scene. The autonomous vehicle senses the environment around the vehicle and matches it with the updated map database 133. For example, the autonomous vehicle senses roadside structures and objects and matches them to the updated map database to determine the location of the vehicle. Once the location is determined, the autonomous vehicle can navigate and control the vehicle.

FIG. 6 shows an example system 120 for updating the map database. In FIG. 6, one or more mobile devices 122 include probes 131 and are connected to the server 125 through the network 127. The server 125 has a database 123 that includes the server map, which may include a lane model and a localization model. A developer system 121 comprises the server 125 and the database 123. Based on the data collected by the probes 131, the developer system 121 generates server map updates. Multiple mobile devices 122 may be connected to the server 125 via the network 127. The mobile devices 122 may serve as probes or may be combined with probes 131; the probes 131 gather the point cloud data used by the developer system 121 to update the server map. The mobile devices 122 may also include autonomous vehicles. The mobile devices 122 include databases 133 corresponding to device maps, which may be updated based on the server map. Additional, different, or fewer components may be included.

For example, point cloud data is collected by data collection vehicles and/or mobile devices 122 to update the server map at the server 125. The data collection vehicles and other mobile devices 122 do not update their vehicle maps directly; instead, they collect sensor data using the probes 131 and upload it to the server 125. The server 125 provides periodic map updates to the data collection vehicles and/or mobile devices 122 by aggregating the sensor data from the mobile devices 122 and updating the server map with the aggregated sensor data. The map updates are sent to autonomous vehicles, navigation devices, and augmented reality devices for the purpose of localization in an environment based on a local database 143.

The mobile device 122 may be a personal navigation device (“PND”), a watch, a tablet computer, a notebook computer, or any other known or later developed mobile device or personal computer. The mobile device 122 may also be an automobile head unit, an infotainment system, and/or any other known or later developed automotive navigation system. Other non-limiting examples of navigation devices include mobile phones, infotainment systems, and navigation devices for air and water travel.

Communication between the mobile devices 122 and the server 125 through the network 127 may use a variety of different types of wireless networks. The wireless networks may include cellular networks and networks employing the IEEE 802.11 family of protocols, the Bluetooth family of protocols, or other protocols. The cellular technologies may include the analog advanced mobile phone system (AMPS), the third generation partnership project (3GPP) protocols, code division multiple access (CDMA), the personal handy-phone system (PHS), 4G or long-term evolution (LTE), or another protocol.

FIG. 7 illustrates an example mobile device 122 of the system of FIG. 6. The mobile device 122 may be configured as an autonomous vehicle or an augmented reality device. The mobile device 122 includes a processor 200, a map database 143, a memory 204, an input device 203, position circuitry 207, a distance detector 209, a display 211, and a camera 213. Additional, different, or fewer components may be included in the mobile device 122.

The distance detector 209 is configured to receive sensor data indicative of roadside objects such as signs, light poles, and guard rails. The distance detector 209 may emit a signal and detect a return signal; the signal may be a laser signal or a radio signal. From the return signal, the distance detector 209 determines a vector (distance and heading) between the location of the mobile device 122 and a position on the roadside object. The camera 213 may also be configured to detect roadside objects: the camera 213 collects images from which the distance to a roadside object can be calculated. In one example, the number of pixels or the relative size of the roadside object in the image indicates the distance (larger apparent sizes correspond to closer objects). In another example, the relative difference between two or more images indicates distance: when two images are taken a known distance apart, the relative change in the apparent size of a roadside object between the images indicates the distance to the object.
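
For a pinhole camera, both cues reduce to short formulas. The following sketch (an illustration under pinhole assumptions, with hypothetical parameter names) estimates range from apparent size in one frame and from the size change between two frames taken a known distance apart:

```python
def distance_from_apparent_size(real_height_m, pixel_height, focal_length_px):
    """Pinhole estimate from apparent size: an object of known height H
    appears h = f * H / d pixels tall at range d, so d = f * H / h."""
    return focal_length_px * real_height_m / pixel_height

def distance_from_two_frames(baseline_m, h1_px, h2_px):
    """Range from relative size change between two frames taken baseline_m
    apart while approaching the object (so h2 > h1). Since h1*d1 = h2*d2
    and d2 = d1 - baseline, it follows that d1 = baseline * h2 / (h2 - h1).
    Note this needs no knowledge of the object's real size."""
    return baseline_m * h2_px / (h2_px - h1_px)
```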

The position detector, or position circuitry 207, is configured to determine the geographic position of the mobile device 122 and of the detected roadside objects. The geographic position may be determined from the position of the vehicle during collection of the roadside object sensor data. The geographic position is used by the server 125 in updating the map database 123, and may also be used by the processor 200 during localization to match roadside objects detected by the distance detector 209 to the map database 143.

The positioning circuitry 207 may include suitable sensing devices that measure the traveling distance, speed, direction, and so on, of the mobile device 122. The positioning system may include a receiver and correlation chip to obtain a GPS signal. Alternatively, or additionally, the one or more detectors or sensors may include an accelerometer and/or a magnetic sensor built or embedded into or within the interior of the mobile device 122. The accelerometer is operable to detect, recognize, or measure the rate of change of translational and/or rotational movement of the mobile device 122. The magnetic sensor, or compass, is configured to generate data indicative of the heading of the mobile device 122. Data from the accelerometer and the magnetic sensor may indicate the orientation of the mobile device 122. The mobile device 122 receives location data from the positioning system, and the location data indicates the location of the mobile device 122.

The positioning circuitry 207 may include a Global Positioning System (GPS) receiver or a cellular position sensor for providing location data. The positioning system may utilize GPS-type technology, a dead reckoning-type system, cellular location, or combinations of these or other systems. The positioning circuitry 207 may also include suitable sensing devices that measure the traveling distance, speed, and direction of the mobile device 122, and a receiver and correlation chip to obtain a GPS signal. The mobile device 122 receives location data from the positioning system, and the location data indicates the location of the mobile device 122.

The position circuitry 207 may also include gyroscopes, magnetometers, accelerometers, or any other device for tracking or determining movement of a mobile device. The gyroscope is operable to detect, recognize, or measure the current orientation, or changes in orientation, of the mobile device. Gyroscope orientation detection may be used to measure the yaw and pitch of the mobile device.

The mobile device 122 may be integrated into a vehicle, which may include a data collection vehicle or an assisted driving vehicle such as an autonomous vehicle, a highly assisted driving (HAD) vehicle, or a vehicle with advanced driving assistance systems (ADAS). The mobile device 122 may include any of these assisted driving systems. Alternatively, an assisted driving device may be included in the vehicle; the assisted driving device may include memory, a processor, and systems to communicate with the mobile device 122.

The term “autonomous vehicle” may refer to a self-driving or driverless mode in which no passengers are required for the vehicle to operate. An autonomous vehicle may also be referred to as a robot vehicle or an automated vehicle. An autonomous vehicle may carry passengers, but no driver is necessary; such vehicles may also park themselves or move cargo between locations without a human operator. Autonomous vehicles may include multiple modes and transition between the modes. The autonomous vehicle may steer, brake, or accelerate the vehicle based on the vehicle database 133, including the road object attribute. The environment around the vehicle is detected by the distance detector 209 and matched to the map database 143. To match the environment to the map database, the autonomous vehicle may also use the position circuitry 207. For example, the autonomous vehicle may sense roadside objects and match them to the map database to determine the position and heading of the autonomous vehicle. The system may then navigate and control the vehicle based on the determined position and heading.

A highly assisted driving (HAD) vehicle may not completely replace the human operator. In a highly assisted driving mode, the vehicle may perform some driving functions while the human operator performs others. The vehicle may also be driven in a manual mode, in which the human operator exercises a degree of control over its movement, or in a fully autonomous mode; other levels of automation are also possible. The HAD vehicle may control the vehicle through steering or braking based on the vehicle database 133, including the road object attribute. The HAD vehicle detects the environment around the vehicle with the distance detector 209 and matches it to the map database 143. To match the environment to the map database, the HAD vehicle may also use the position circuitry 207. The HAD vehicle may sense roadside objects and match them to the map database to determine its position and heading, and may then navigate and control the vehicle based on the determined position and heading.

ADAS vehicles include partially automated systems that warn the driver and provide features to prevent collisions, such as adaptive cruise control, automatic braking, and steering adjustments to keep the driver in the correct lane. ADAS vehicles may issue warnings to the driver based on the vehicle database 133, including the road object attribute. The ADAS vehicle detects the environment around the vehicle using the distance detector 209 and matches it to the map database 143. To match the environment to the map database 143, the ADAS vehicle may also use the position circuitry 207. The ADAS vehicle may sense roadside objects and match them to the map database to determine the vehicle's position and heading, and may then navigate and control the vehicle based on the determined position and heading.

The vehicle database 133 may also be used by the mobile device 122 to generate routing instructions. The mobile device 122 may be configured to execute routing algorithms that determine an optimum route to travel along a road network from an origin location to a destination location in a geographic region. Using inputs such as map matching values from the server 125, the mobile device 122 examines potential routes between the origin location and the destination location to determine the optimum route. The mobile device 122 may then provide the user with information about the optimum route, in the form of guidance that identifies the maneuvers required to travel from the origin location to the destination location. Some mobile devices 122 show detailed maps on displays outlining the route, the types of maneuvers to be taken at various locations along the route, and the locations of certain features.

The mobile device 122 may plan a route through a road system, or modify a current route through a road system, in response to the matched probe data. The route may extend from a current position of the mobile device, or an origin, to a destination through the road segment matched with the probe data. Possible routes may be calculated using the Dijkstra method, an A-star search or algorithm, and/or other route exploration or calculation algorithms, with the cost values of the underlying road segments adjusted as appropriate. Other aspects, such as distance, navigational difficulty, and restrictions, may also be considered in determining the optimum route.
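
A minimal Dijkstra search over such a road network might look as follows (a sketch only: the network is given as an adjacency map of per-segment costs, which may be adjusted as described above to reflect distance, difficulty, or restrictions):

```python
import heapq

def dijkstra(adj, origin, destination):
    """Shortest route over a road network given as adj[node] -> list of
    (neighbor, cost) pairs. Returns the node sequence or None if unreachable."""
    dist = {origin: 0.0}
    prev = {}
    heap = [(0.0, origin)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == destination:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, already improved
        for v, cost in adj.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if destination not in dist:
        return None
    path, node = [destination], destination
    while node != origin:                 # walk predecessors back to the origin
        node = prev[node]
        path.append(node)
    return path[::-1]
```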

The controller 200 and/or processor 300 may include a general processor, a digital signal processor, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), an analog circuit, a digital circuit, combinations thereof, or any other now known or later developed processor. The controller 200 and/or processor 300 may be a single device or combinations of devices, such as devices associated with a network, distributed computing, or cloud computing.

The memory 204 and/or memory 301 may be a volatile memory or a non-volatile memory. The memory 204 and/or memory 301 may include one or more of a read-only memory (ROM), a random access memory (RAM), a flash memory, an electronic erasable program read-only memory (EEPROM), or another type of memory. The memory 204 and/or memory 301 may be removable from the mobile device 122, such as a secure digital (SD) memory card.

The communication interface 205 and/or communication interface 305 may include any operable connection. An operable connection may be one in which signals, physical communications, and/or logical communications may be sent and/or received. An operable connection may include a physical interface, an electrical interface, and/or a data interface. The communication interface 205 and/or communication interface 305 provides for wireless and/or wired communications in any now known or later developed format.

The databases 123, 133, and 143 may store or maintain geographic data used for traffic-related and/or navigation-related applications. The geographic data may include data representing a road network, including node data and road segment data. The road segment data represent roads, and the node data represent the ends or intersections of the roads. The road segment data and the node data indicate the locations of the roads and intersections as well as various attributes of the roads and intersections. Other formats besides road segments and nodes may also be used for the geographic data. The geographic data may include structured cartographic data or pedestrian routes.

The databases may also include other attributes of or about roads, such as street names, address ranges, speed limits, turn restrictions at intersections, and/or other navigation-related attributes (e.g., whether one or more road segments are part of a highway or toll way, or the locations of stop signs and/or stoplights along the road segments), as well as points of interest (POIs) such as gasoline stations, hotels, restaurants, museums, stadiums, offices, automobile dealerships, auto repair shops, buildings, stores, and parks. The databases may include one or more node data records, which may include attributes (e.g., about the intersections) such as geographic coordinates, street names, address ranges, speed limits, turn restrictions at intersections, other navigation-related attributes, and POIs. The geographic data may contain additional or alternative data records, such as POI data records, topographical data records, cartographic data records, and routing data.

The databases may also include historical traffic speed data for one or more road segments, as well as traffic attributes for some or all of the road segments. A traffic attribute may indicate whether a road segment is likely to be congested.

The input device 203 may be one or more of a keypad, a keyboard, a mouse, a trackball, a rocker switch, a touch pad, voice recognition circuitry, or another device or component for inputting data into the mobile device 122. The input device 203 and the display 211 may be combined as a touch screen, which may be capacitive or resistive. The display 211 may be a liquid crystal display (LCD) panel, a light emitting diode (LED) screen, a thin film transistor screen, or another type of display. The output interface of the display 211 may also include audio capabilities or speakers. In an embodiment, the input device 203 may include a device having velocity detecting capabilities.

The term “computer-readable medium” includes a single medium or multiple media, such as a centralized or distributed database and/or associated caches and servers that store one or more sets of instructions. The term shall also include any medium that is capable of storing or encoding a set of instructions for execution by a processor, or that causes a computer system to perform any one or more of the methods or operations disclosed herein.

In an exemplary, non-limiting embodiment, the computer-readable medium may include a solid-state memory, such as a memory card or other package that houses one or more non-volatile read-only memories. The computer-readable medium may also be a random access memory or other volatile re-writable memory. Additionally, the computer-readable medium may include a magneto-optical or optical medium, such as a disk or tape, or another storage device that captures carrier wave signals, such as a signal communicated over a transmission medium. The disclosure may be considered to include any one or more of a computer-readable medium, a distribution medium, and other equivalents and successor media in which data or instructions may be stored. These examples may collectively be referred to as a non-transitory computer-readable medium.

In another embodiment, dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays, and other hardware devices, may be constructed to implement one or more of the methods described herein. The apparatus and systems described herein may be used in a broad variety of applications. One or more embodiments described herein may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit.

The methods described in the present disclosure may be implemented by software programs executable by a computer system. In an exemplary embodiment, implementations may include parallel processing; alternatively, virtual computer system processing may be constructed to implement one or more of the methods or functionality described herein.

The present specification describes components and functions that may be implemented in particular embodiments with reference to particular standards and protocols; the invention, however, is not limited to such standards and protocols. For example, standards for Internet and other packet-switched network transmission (e.g., TCP/IP, UDP/IP, HTML, HTTP, HTTPS) represent examples of the state of the art. Such standards are periodically superseded by faster or more efficient equivalents having essentially the same functions. Accordingly, replacement standards and protocols having the same or similar functions as those disclosed herein are considered equivalents thereof.

A computer program (also known as a program, software, a software application, a script, or code) can be written in any form of programming language, including compiled or interpreted languages. A computer program does not necessarily correspond to a file in a file system: a program can be stored in a portion of a file that holds other programs or data. A computer program can be deployed to be executed on one computer or on multiple computers located at one site.

The processes and logic flows described here can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry.

As used here, the term “circuitry” or “circuit” refers to all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry); (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions; and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.

This definition of “circuitry” applies to all uses of the term in this application, including in any claims. As a further example, as used in this application, the term “circuitry” would also cover an implementation of merely a processor (or multiple processors), or a portion of a processor, and its (or their) accompanying software and/or firmware. The term “circuitry” would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone, or a similar integrated circuit in a server, a cellular network device, or another network device.

Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor receives instructions and data from a read-only memory, a random access memory, or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. A computer generally also includes one or more mass storage devices for storing data, though a computer need not have such devices. A computer can also be embedded in another device, such as a mobile telephone, a personal digital assistant (PDA), a mobile audio player, or a Global Positioning System (GPS) receiver. Computer-readable media suitable for storing program instructions and data include all forms of non-volatile memory, media, and memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. In one embodiment, a vehicle may be considered a mobile device, or the mobile device may be integrated into a vehicle.

To provide for interaction with a user, the embodiments described in this specification may be implemented on a device having a display, for example a CRT or LCD monitor, for displaying information to the user, along with a keyboard and a pointing device (e.g., a mouse or trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with the user as well; for example, feedback provided to the user can be in visual, auditory, or tactile form, and input from the user can be received in any form, including acoustic, speech, or tactile input.


Embodiments of the subject matter described here can be implemented in a computing system that includes a back-end component, such as a data server, or a middleware component, or a front-end component, such as a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter. The components of the system can be interconnected by any form of digital data communication, such as a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”).

The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

In an embodiment, data is captured and uploaded to cloud storage in order to perform offline analysis; for example, the offline analysis may be part of a mapping process performed months after the data was collected. The analysis may also be performed in real time to obtain a real-time characterization of an occlusion in a region of interest. LIDAR data is collected from a LIDAR sensor for the region of interest and sent to a cloud server, where the LIDAR data is assembled into point cloud data within a spatial data structure. A grid representation of the region of interest is created by evaluating the spatial data structure to characterize each square of the grid as a free square, an occupied square, or a hidden square. To characterize each square, LIDAR rays are traced between sensor origin points and corresponding end points: the free squares correspond to locations between a sensor origin point and a corresponding end point, the occupied squares correspond to locations associated with the end points, and the hidden squares correspond to untraced locations. An occlusion is detected in the LIDAR data based on the hidden squares in the grid representation, and a localization model may be generated based on the LIDAR data and the detected occlusion.

FIG. 8 illustrates an example server 125 of the system of FIG. 6. The server 125 includes a processor 300, a communication interface 305, a memory 301, and a database 123. An input device (e.g., a keyboard or personal computer) may be used to enter settings into the server 125. The database 123 may include the server map, including a lane model and a localization model. Additional, different, or fewer components may be provided in the server 125. FIGS. 5 and 11 are example flowcharts illustrating the operation of the server 125; additional, different, or fewer acts may be provided.

The geographic database 123 includes a lane model and a localization model. The lane model includes a map of the painted lines along a roadway, including information about the boundaries and lanes of the roadway. The localization model includes a map of roadside objects extending beyond the roadway boundaries (e.g., by 15 meters). The lane model and/or the localization model may be two-dimensional (2D), three-dimensional (3D), or of another dimension (e.g., n>3).

The memory 301 is configured to store received roadside sensor data. The memory 301 may also temporarily store data from the database 123; for example, the memory 301 may store portions of the database 123 for comparison with the received sensor data.

The communication interface 305 is configured to receive sensor data from multiple data collection vehicles and/or mobile devices. The communication interface 305 may also transmit map data, such as lane model and localization model updates, to mobile devices and autonomous vehicles.

The processor 300 detects occlusions in the received sensor data, as described above. The geographic database 123 is updated based on the detected occlusion. For example, the database 123 may be updated with supplemental sensor data, such as data from a targeted remapping of the occlusion location. Alternatively, or in addition, the database 123 may be updated to report the occlusion location, for example with a warning to proceed with caution or a warning that unreported roadside objects may exist.

As shown in FIG. 9, the geographic database 123 may contain at least one road segment database record 304 (also referred to as an “entity” or “entry”) for each road segment within a particular geographic region. Any of the features of the geographic database 123 may also be applied to the local databases 133. The geographic database 123 may also include a node database record 306 (or “entity” or “entry”) for each node within the particular geographic region. The terms “node” and “segment” represent only one terminology for describing these physical geographic features; other terminology for describing such features is intended to be encompassed within the scope of these concepts. The geographic database 123 may also include location fingerprint data for specific geographic locations.

The geographic database 123 may also include other kinds of data 310, which may represent other kinds of geographic features or anything else. For example, the other kinds of data may include point of interest (POI) data. The POI data may include POI records comprising a type (e.g., restaurant, hotel, city hall, police station, historical marker, ATM, golf course, etc.), the location of the POI, a phone number, hours of operation, and other details.

The geographic database 123 also includes indexes 314. The indexes 314 may include various types of indexes that relate the different types of data to each other or to other aspects of the data contained in the geographic database 123. For example, the indexes 314 may relate the nodes in the node data records 306 to the end points of road segments in the road segment data records 304, or relate road object data 308 or a road object attribute with a road segment in the segment data records 304. For instance, an index 314 may store data relating the road object attribute 308 to one or more locations. The road object attribute 308 may describe the type of road object, the relative location of the road object, and the angle between the road segment and the road object.

The geographic database 123 may also include other attributes of or about roads, such as geographic coordinates, physical features (e.g., lakes, rivers, railroads, etc.), street names, address ranges, speed limits, and turn restrictions at intersections. The geographic database 123 may include node data records 306 with attributes (e.g., about the intersections), such as geographic coordinates, street names, address ranges, speed limits, and turn restrictions at intersections. The geographic data 302 may contain additional or alternative data records, such as POI data records, topographical data records, cartographic data records, routing data, and maneuver data. The database 123 may also contain other information relevant to this invention, including temperature, altitude or elevation, lighting, sound and noise level, wind speed, magnetic interference or radio and microwaves, cell tower and wi-fi access point information, and attributes pertaining to specific approaches to a specific location.

The geographic database 123 may include historical traffic speed data for one or more road segments. The geographic database 123 may also include traffic attributes for some or all of the road segments. A traffic attribute may indicate whether a road segment is likely to be congested.

FIG. 10 shows some of the components of a road segment data record 304 contained in the geographic database 123. The road segment data record 304 may include a segment ID 304(1) by which the data record can be identified in the geographic database 123. Each road segment data record 304 may have associated information (such as "attributes", "fields", etc.) that describes features of the represented road segment. The road segment data record 304 may include data 304(2) indicating any restrictions on the direction of vehicular travel permitted on the segment. The road segment data record 304 may include data 304(3) indicating a speed limit or speed category (i.e., the maximum permitted vehicular speed) on the segment. The road segment data record 304 may also include classification data 304(4) indicating whether the represented road segment is part of a controlled access road (such as an expressway), a ramp to a controlled access road, a bridge, a tunnel, a toll road, a ferry, and so on. The road segment data record may also include location fingerprint data, for example a set of sensor data for a particular location.

The geographic database 123 may include road segment data records 304 (or data entities) that describe features such as road objects 304(5). The road objects 304(5) may be stored according to location boundaries or vertices. The road objects 304(5) may be stored as a field or record using a scale of values, such as from 1 to 100 for type or size. The road objects may be stored using categories such as low, medium, and high. Additional schema may be used to describe the road objects. The attribute data may be stored in relation to a link/segment 304, a node 306, a strand of links, a location fingerprint, an area, or a region. The geographic database 123 may store information or settings for display preferences. The geographic database 123 may be coupled to a display. The display may be configured to display the roadway network and data entities using different colors or schemes. For example, the geographic database 123 may provide display information indicating where open parking spots exist.

The road segment data record 304 also includes data 304(7) providing the geographic coordinates (e.g., the latitude and longitude) of the end points of the represented road segment. In one embodiment, the data 304(7) are references to the node data records 306 that represent the nodes corresponding to the end points of the represented road segment.

The road segment data record 304 may also include or be associated with other data 304(7) that refer to various other attributes of the represented road segment. The various attributes associated with a road segment may be included in a single road segment record or may be included in more than one type of record that cross-references the others. For example, the road segment data record 304 may include data identifying the intersections at each of the nodes corresponding to the represented road segment, the name or names by which the represented road segment is identified, and the turn restrictions at each intersection.

FIG. 10 also shows some of the components of a node data record 306 that may be contained in the geographic database 123. Each of the node data records 306 may have associated information (such as "attributes", "fields", etc.) that allows identification of the road segment(s) that connect to it and/or its geographic position (e.g., its latitude and longitude coordinates). For the embodiment shown in FIG. 10, the node data records 306(1) and 306(2) include the latitude and longitude coordinates 306(1)(1) and 306(2)(1) for their respective nodes, as well as road object data 306(1)(2) and 306(2)(2). The road object data 306(1)(2) and 306(2)(2) may include information on roadside objects determined from a localization model, such as light poles, signs, guard rails, bridges, and so on. The node data records 306(1) and 306(2) may also include other data 306(1)(3) and 306(2)(3) that refer to various other attributes of the nodes.

The geographic database 123 may be maintained by a content provider (e.g., a map developer). By way of example, the map developer may collect geographic data to generate and enhance the geographic database 123. Data may be obtained from businesses, municipalities, or respective authorities. In addition, the map developer may employ field personnel to travel throughout a geographic region and record information about the roads. Remote sensing, such as aerial or satellite photography, may also be used. The database 123 may be incorporated in or connected to the server 125.

The geographic database 123 and the data stored within it may be licensed or delivered on demand. Other navigational services or traffic server providers may access the traffic data, the location fingerprint data, and/or the predicted parking availability data stored in the geographic database 123. Other data, including roadside objects usable to locate an autonomous vehicle, may also be stored.

FIG. 11 illustrates an example flowchart for detecting occlusions from sensor data. The acts may be performed by the systems of FIGS. 6-10, discussed below, and/or another system. Referring to FIG. 8, acts 801-813 may be performed by the map server 125. Referring to FIG. 7, acts 815 and 817 may be performed by the mobile device 122. Additional, different, or fewer acts may be provided. The acts are performed in the order shown; other orders may be provided, and acts may be performed in parallel.

At act 801, sensor data for a scene are received by the map server 125 and stored in the memory 301 and/or the database 123. The sensor data may include ray data having an origin point and an end point. At act 803, the processor 300 generates a grid for the scene from the sensor data, for example by tracing a path from the origin point to the end point of each ray to identify free space, occupied space, and hidden space. At act 805, the processor 300 detects objects in the scene from the occupied space.

At act 807, an occlusion is identified from the hidden space by the processor 300. The occlusion causes a false negative. The false negative may correspond to a missing portion of the sensor data, such as where a temporary object blocked the sensor's view. Identifying the occlusion may also include determining the location and severity of the occlusion. The severity may describe the extent of the occlusion, such as whether the occlusion is complete or partial.
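As a rough, hypothetical illustration of how severity might be quantified (the cell labels, function names, and the 0.9 cutoff below are assumptions for the sketch, not values from the disclosure), the severity could be the fraction of the bounded region that is classified as hidden:

```python
import numpy as np

FREE, OCCUPIED, HIDDEN = 0, 1, 2  # illustrative cell labels

def occlusion_severity(grid, region_mask):
    """Fraction of the region of interest classified as hidden space."""
    hidden = (grid == HIDDEN) & region_mask
    return hidden.sum() / max(int(region_mask.sum()), 1)

def severity_label(fraction, complete_threshold=0.9):
    """Label the occlusion; the cutoff is an assumed example value."""
    return "complete" if fraction >= complete_threshold else "partial"
```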

At act 809, the processor 300 analyzes the shape of the occlusion to determine its source. At act 811, the database 123 is updated with supplemental sensor data for the occluded location. At act 813, the localization model is provided from the map server 125 to a mobile device 122 for locating the mobile device within the scene.

At act 815, sensor data are captured by the distance detector 209 of the mobile device 122 for locating the mobile device within the scene. At act 817, the location of the mobile device is determined by the processor 200 by identifying an occlusion in the sensor data stored in the memory 204 and based on the map database 143. An autonomous vehicle may thus detect occlusions to better understand its surroundings.

The illustrations of the embodiments described herein are intended to provide a general understanding of the structure and function of the various embodiments. The illustrations are not intended to serve as a complete description of all the elements and features of apparatuses and systems that utilize the structures or methods described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure. Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. Additionally, the illustrations are merely representational and may not be drawn to scale; certain proportions may be exaggerated, while others may be minimized. Accordingly, the disclosure and the figures are to be regarded as illustrative rather than restrictive.

While this specification contains many specifics, these should not be construed as limitations on the scope of the invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described as acting in certain combinations, and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or a variation of a sub-combination.

Summary for “Automatic detection of occlusion in road network data”


The present embodiments automatically detect the severity and location of occluded areas within input data. The detected occlusions may indicate that supplemental data are needed to augment the input data, or that a gap in the input data should be reported. By detecting occlusions, the resulting 3D map/model based on the input data may be more accurate and complete.

For example, a grid representation is generated from light detection and ranging (LIDAR) data. Ray tracing (e.g., ray traversal) of the LIDAR data points is used to track and identify spaces in the grid representation as hidden/occluded, occupied, or free. The grid is then bounded, such as by using a lane model for a roadway, so that only a relevant portion of the grid representation is retained (e.g., space occupancy information within the roadway). In this example, bounding the grid allows identification of occlusions caused by temporary objects blocking the LIDAR scanner's view of roadside objects. Connected component analysis of the bounded grid is performed to identify connected regions of hidden space. Finally, the resulting connected regions of hidden space are thresholded by size to indicate LIDAR occlusions.
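A minimal sketch of this pipeline, assuming the grid has already been classified into free/occupied/hidden cells and that a Boolean roadway mask has been derived from the lane model (the names, labels, and the 50-cell threshold are illustrative assumptions, not values from the disclosure):

```python
import numpy as np
from scipy import ndimage  # for connected component labeling

FREE, OCCUPIED, HIDDEN = 0, 1, 2  # illustrative cell labels

def detect_occlusions(grid, roadway_mask, min_cells=50):
    """Bound the classified grid by the lane model, find connected regions
    of hidden space, and keep regions large enough to report as occlusions."""
    hidden = (grid == HIDDEN) & roadway_mask        # bound the grid
    labels, count = ndimage.label(hidden)           # connected component analysis
    sizes = ndimage.sum(hidden, labels, index=range(1, count + 1))
    # Threshold the connected hidden regions by size to flag occlusions.
    return [labels == i + 1 for i, size in enumerate(sizes) if size >= min_cells]
```

Each returned mask marks one candidate occluded region, which can then be analyzed for shape and source as described below.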

In order to generate a 3D vehicle localization map/model, data collection vehicles are typically deployed with rooftop-mounted scanners/sensors, such as LIDAR, laser, Doppler, or camera sensors, for collecting 3D data to map/model objects near a roadway. Roadside objects are objects visible to vehicles traveling on the roadway, and may include light poles, signs, guard rails, bridges, and so on. While traveling along the roadway, the data collection vehicle captures the 3D data for a region of interest. The region of interest is typically bounded by a distance (e.g., 15 meters beyond the roadway boundary). Object detection algorithms are then used to generate the vehicle localization model. The vehicle localization model may be used to locate a vehicle (e.g., an autonomous vehicle or a mobile device) on the roadway by matching vehicle sensor data with the localization model. Ideally, the data collection vehicle travels the roadway alone, without any traffic or other temporary objects, providing an unobstructed view of the entire roadway. In reality, traffic and other temporary objects often obstruct the data collection vehicle's view.

FIG. 1 illustrates an example perspective camera image captured by a data collection vehicle. The perspective camera image shows the view from the vehicle, along with sensor data collection 402 at the left roadway boundary and sensor data collection 404 at the right roadway boundary. The perspective camera image also shows a large tanker truck 406 that will cause an occlusion in the sensor data collected by the data collection vehicle (the occlusion is shown in FIG. 2, discussed below).

FIG. 2 illustrates an example occlusion in the sensor data collected by the data collection vehicle. The triangles in the left lane indicate the trajectory of the data collection vehicle traveling along the roadway. The sensor data collected by the data collection vehicle include a substantial data set for the data collection 402 of the left roadway boundary (i.e., with no traffic obstructions) and only a partial data set for the data collection 404 of the right roadway boundary. FIG. 2 shows a gap in the data set caused by the large tanker truck 406 blocking the view of the sensors on the data collection vehicle. Because the large tanker truck 406 obscures the sensors' view, a 3D vehicle localization map/model generated from this data set will be incomplete. The object detection algorithms may generate a false negative, indicating that no roadside objects exist in the region obscured by the tanker truck 406, resulting in a mapmaking error.

FIG. 3 illustrates an embodiment of detecting an occlusion from the sensor data collected by the data collection vehicle. To detect occlusions, an algorithm is applied to the sensor data (e.g., LIDAR data). The sensor data are stored in a spatial data structure 502 that includes each data point collected by the data collection vehicle. Typically, the sensor data are three-dimensional; however, the sensor data may be of any N dimensions (e.g., N>2).

For example, the spatial data structure may be a probabilistic octree constructed from all the data points collected in the region of the vehicle. Other spatial data structures may be used, such as a k-d tree or a binary space partitioning (BSP) tree. A spatial data structure such as a probabilistic octree or a BSP tree represents the sensor data intelligently, rather than as an unordered set of points. A spatial data structure organizes the sensor data points with respect to other points. For example, sensor data points may be grouped (e.g., binned) according to their proximity to other points (e.g., points within 10 cm of each other). Sensor data points may also be grouped by their location in a grid, with each sensor data point binned according to its grid location. The sensor data points may then be searched by referencing a grid location, or by searching for data points in relation to a grid location (e.g., all points within 10 cm of a grid location). Other methods of grouping/binning the data points may be used, including data-dependent methods such as dividing the total points equally or assigning weights.
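A minimal sketch of this kind of grid binning, assuming points are (x, y, z) coordinates and a 10 cm cell size (the function names are illustrative, not from the disclosure):

```python
from collections import defaultdict
import numpy as np

def bin_points(points, cell=0.10):
    """Bin each sensor point by the integer grid cell that contains it."""
    grid = defaultdict(list)
    for p in np.asarray(points, dtype=float):
        grid[tuple(np.floor(p / cell).astype(int))].append(p)
    return grid

def points_near(grid, location, cell=0.10):
    """Collect points in the bin containing `location` and the 26 adjacent
    bins, i.e., all points within roughly one cell of the query location."""
    cx, cy, cz = np.floor(np.asarray(location, dtype=float) / cell).astype(int)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                found.extend(grid.get((cx + dx, cy + dy, cz + dz), []))
    return found
```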

Referring to FIG. 3, an embodiment crops/clips the sensor data based on boundary conditions or a region of interest. When constructing a localization model for a highway, occlusions are often caused by traffic or other temporary objects. The localization model is intended to capture stationary roadside objects, such as signs, buildings, guardrails, and so on, i.e., objects located within 15 meters of the road boundary. The highway's boundaries may therefore define the region of interest for highway occlusions. Other regions of interest may be defined based on different boundary conditions.

As shown at 504, the boundary conditions are applied to the sensor data. Known lane boundaries may be used to crop/clip the sensor data to the roadway. For example, roadway boundary conditions from a highway lane model may be applied. Cropping/clipping the sensor data limits the areas where the algorithm searches for occlusions. Additional and different boundary conditions may be applied, such as boundary conditions for temporary objects.
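An axis-aligned sketch of this cropping step, assuming the lane model reduces to a bounding box in the road plane and a 15-meter margin (a real lane model would follow the actual roadway geometry; the names here are hypothetical):

```python
import numpy as np

def crop_to_roadway(points, lane_min_xy, lane_max_xy, margin=15.0):
    """Keep only points whose (x, y) fall inside the lane-model bounding
    box expanded by `margin` meters on each side."""
    lo = np.asarray(lane_min_xy, dtype=float) - margin
    hi = np.asarray(lane_max_xy, dtype=float) + margin
    pts = np.asarray(points, dtype=float)
    inside = np.all((pts[:, :2] >= lo) & (pts[:, :2] <= hi), axis=1)
    return pts[inside]
```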

A grid 504 is generated from the cropped/clipped sensor data. For example, a cropped Boolean grid is generated from the data remaining after the region of interest is identified and applied. The grid 504 shows cells with data points after cropping/clipping the sensor data 502. The blank spaces on the right side of the grid indicate potential occlusions at those locations, such as from the tanker truck of FIG. 1.

Based on the sensor data, each grid cell/space may be identified as free, occupied, or hidden/unseen. Free spaces may be identified as portions of the roadway where no permanent or temporary objects exist. Occupied spaces may be identified as portions of the roadway where a permanent or temporary object exists. Hidden/unseen spaces may be identified as portions of the roadway that are untouched by the sensor data. Additional or different grid cell/space identifications may be used; for example, each grid cell/space may be identified as traversed or untraversed, hidden or not hidden, etc., based on the sensor data.

Applying the boundary conditions discussed above constrains the interpretation of the hidden spaces. Without boundary conditions, such as a lane model, hidden spaces would extend beyond detected objects out to the sensor range (e.g., beyond the 50-meter range of a LIDAR scanner). The boundary conditions limit the interpretation of hidden spaces to a region of interest, for example occlusions that prevent a data collection vehicle from capturing roadside objects.

In this example, traversing/tracing LIDAR rays through the region of interest is used (as discussed below) to identify each cell/space of the grid 504 as free, occupied, or hidden/unseen. The sensor data are received as a point cloud, such as LIDAR point cloud data. In a point cloud, locations between the sensor and a detected object carry no data points and may be lost or ignored. Free space modeling may be used to generate additional data points for locations between the sensor and detected objects, because the locations between the sensor and a detected object are understood to contain no objects. Other interpretations of the sensor data may be used to identify the grid cells/spaces of the grid 504.

FIG. 4 illustrates an embodiment of ray traversal of a sensor data point. FIG. 4 shows a portion of a three-dimensional grid two voxels high, eight voxels wide, and one voxel deep. A sensor ray is traced from a sensor origin 602 to an end point 604. The end point 604 corresponds to a detected object, which may be a temporary object (e.g., an occlusion) or a permanent object (e.g., an object to be included in the localization model). The ray is traced by overlaying it on the grid from the sensor origin 602. Information gained by traversing the ray is used to characterize the grid spaces. Grid spaces between the sensor origin 602 and the end point 604, such as free space 606, are determined to be unoccupied by objects, because any object in those spaces would have blocked the ray's path to the end point 604. The grid space corresponding to the sensor origin 602 may also be characterized as free space, under the assumption that an object cannot occupy the same space as the sensor. Untraversed spaces 608 provide no information.
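A sketch of this traversal in the style of the Amanatides-Woo voxel-walking algorithm, one common way to trace a ray through a grid (the disclosure does not name a specific algorithm, and the names below are assumptions):

```python
import numpy as np

def traverse_ray(origin, end, cell=1.0):
    """Yield the integer voxel indices crossed by a ray from `origin` to
    `end`, stepping one voxel boundary at a time (Amanatides-Woo style)."""
    o = np.asarray(origin, dtype=float) / cell
    e = np.asarray(end, dtype=float) / cell
    voxel = np.floor(o).astype(int)
    last = np.floor(e).astype(int)
    direction = e - o
    step = np.sign(direction).astype(int)
    with np.errstate(divide="ignore", invalid="ignore"):
        # Ray-parameter cost of crossing one cell along each axis, and the
        # parameter at which the ray first crosses each axis boundary.
        t_delta = np.where(direction != 0.0, np.abs(1.0 / direction), np.inf)
        boundary = voxel + (step > 0)
        t_max = np.where(direction != 0.0, (boundary - o) / direction, np.inf)
    yield tuple(voxel)
    for _ in range(int(np.abs(last - voxel).sum())):  # one step per crossing
        axis = int(np.argmin(t_max))
        voxel[axis] += step[axis]
        t_max[axis] += t_delta[axis]
        yield tuple(voxel)
```

Voxels yielded before the last one would be marked as free space 606, the final voxel containing the end point 604 as occupied, and voxels never visited by any ray remain untraversed spaces 608.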

By traversing the ray, free spaces (e.g., the cells traversed between the sensor origin 602 and the end point 604, and the cell containing the sensor origin 602), occupied space (e.g., the cell containing the end point 604), and untraversed cells 608 are defined. The free spaces 606 and untraversed spaces 608 may be stored in the spatial data structure used to generate the grid 504. For example, a probabilistic octree assigns probabilities to the free spaces 606 and the untraversed spaces 608, respectively. The probabilities are used to identify each grid space of the grid 504 as free, occupied, or hidden/unseen.

Probabilities may be assigned based on multiple characterizations of the same location by the sensor data. In one example, locations with at least one free/traversed observation are assigned a zero percent (i.e., binary 0) probability of hidden space, and locations with no data points are assigned a hundred percent (i.e., binary 1) probability of hidden space. Alternatively, varying probability levels may be assigned to different locations, such as locations that were traversed only a few times or that have fewer data points at the location (e.g., a single-digit number of data points instead of hundreds, indicating a different order of magnitude of data points). Varying probability levels allow the localization model to define occlusions more or less conservatively. For example, if a threshold number of traversals (e.g., at least 100) is set for determining the probability levels, locations that are not traversed at least the threshold number of times are considered inconclusive and are assigned a hidden space probability. A location traversed only 3 times may be assigned a 97% (i.e., 97/100) probability of hidden space, and a location with no traversals is assigned a 100% probability of hidden space. Other metrics for assigning hidden space probabilities may be used.
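The graded scheme described above might be sketched as follows, where `threshold` is the assumed 100-traversal cutoff and the function name is hypothetical:

```python
def hidden_probability(traversals, threshold=100):
    """Probability that a cell is hidden, given how many times rays
    traversed it as free space. 0 traversals -> 1.0 (100% hidden);
    3 traversals -> 0.97 (97/100); at or above the threshold -> 0.0."""
    if traversals >= threshold:
        return 0.0
    return (threshold - traversals) / threshold
```

The binary scheme is the special case `threshold=1`: any traversed cell gets probability 0 and any untraversed cell gets probability 1.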

Referring back to FIG. 3, in an embodiment, the hidden spaces are thresholded, as shown at 504. After thresholding, the remaining hidden space is designated as occluded regions. Grid squares may be used for a two-dimensional grid; grid cubes/voxels may be used for a three-dimensional grid.

At 506, two occluded regions are labeled. An embodiment analyzes the spatial data structure to identify and label the occlusions with information such as the source of the occlusion, temporary status, and so on. A map database (e.g., the localization model) may be updated based on the occluded regions. The two occluded regions 406, 408 labeled at 506 correspond to the tanker truck 406 of FIG. 1. The occluded regions are identified and labeled, for example, using artifacts of the octree, such as large regions of hidden space in the region of interest. The hidden space indicates that an occlusion was present during data collection.

In an embodiment, connected component analysis is used to analyze the hidden spaces. Connected component analysis is an image processing technique that grows a region from a seed point until a stopping criterion is met, based on a similarity metric. In this embodiment, seed points in the occluded regions 406, 408 are used to identify the hidden spaces. Large sections of hidden space, identified as disjoint sets of voxels, can then be thresholded by size. At 506, the occluded region 406 is identified as one connected component and the occluded region 408 is identified as another connected component. The free space 410 may also be identified as a third connected component. The connected components may be used to identify the sources of the occlusions.
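The region-growing step might be sketched as a flood fill over hidden voxels (6-connectivity is an assumption here; the disclosure only requires a similarity metric and a stopping criterion):

```python
from collections import deque

def grow_region(hidden_voxels, seed):
    """Grow a connected region of hidden voxels outward from `seed`,
    stopping when no adjacent hidden voxel remains. `hidden_voxels` is a
    set of (x, y, z) integer indices classified as hidden."""
    region, frontier = set(), deque([seed])
    while frontier:
        voxel = frontier.popleft()
        if voxel in region or voxel not in hidden_voxels:
            continue
        region.add(voxel)
        x, y, z = voxel
        frontier.extend([(x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
                         (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)])
    return region  # threshold len(region) to decide whether to report it
```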

For example, shape analysis, statistical analysis, etc. may be performed on the connected components to determine the source of the hidden spaces. Analyzing the occluded regions may identify the source of the occlusion. For example, the occluded regions 406, 408 match the design of a truck: the occluded region 406 represents the shape of the truck's cab, and the occluded region 408 represents the shape of the truck's body. Other patterns may be identified based on other types of occlusion sources (e.g., walls, barriers, construction signs). Analyzing the occluded regions 406, 408 may also classify the occlusion as traffic occlusion, permanent occlusion, construction occlusion, etc.

In one embodiment, the analysis of the occlusions may drive a decision on whether to collect additional sensor data to map the occluded location. If the source of the occlusion is a temporary object, such as the truck 406, targeted remapping may be performed by collecting supplemental sensor data for the locations 406, 408. If the source of the occlusion is a permanent object, such as a wall, targeted remapping may not resolve the occlusion. After analyzing the occlusion, one or more actions may be taken, including incorporating supplemental data and/or updating the map database regarding the false negatives (e.g., instructions to proceed with caution, or reporting that no data exist for the location).

The present inventions significantly improve mapmaking by automatically detecting occlusions in the sensor data. Detecting false negatives allows them to be reduced or eliminated, improving the accuracy and completeness of the final map. By efficiently detecting and analyzing occlusions, the mapmaking process may incorporate targeted remapping, integrate supplemental data, and/or label regions of the map as potential false negative areas.

The present inventions also significantly improve activities that rely on a generated map for localization, such as autonomous driving, navigation, and augmented reality. A map made more accurate by detecting occlusions may be used by autonomous vehicles, navigation devices, and/or augmented reality devices to localize the vehicle and/or device, increasing safety and accuracy and providing a better user experience. Further, an autonomous vehicle or mobile device may detect occlusions in real time, allowing it to distinguish temporary and permanent objects in a scene. This provides better localization, a better match of sensed objects to the map, a better user experience, and so on. Augmented reality applications are likewise enhanced by a better understanding of the surrounding scene through detecting occlusions.

FIG. 5 illustrates an example flowchart for detecting an occlusion from point cloud data. The acts may be performed by the systems of FIGS. 6-10, discussed below, and/or another system. Referring to FIG. 6 (discussed below), the acts may be performed by the server 125 of a developer system 121. The point cloud data may be collected by an autonomous vehicle or another mobile device 122, for example serving as a probe 131. Additional, different, or fewer acts may be provided. The acts are performed in the order shown; other orders may be provided, and acts may be performed in parallel.

The occlusion detection from point cloud data may be provided for mapmaking. In an embodiment, two stages or layers of mapmaking are performed. In a first stage, a lane model is generated to identify and map the painted lines of the roadway. In a second stage, a localization model is generated to identify roadside objects. Occlusions may be detected at both stages of mapmaking, with each stage defining a different region of interest.

Referring to FIG. 5, at act 701, point cloud data are captured and/or received. The point cloud data may be captured by a data collection vehicle (e.g., a mobile device 122) and transferred to a cloud-based server 125 over a network 127. The cloud-based server 125 may store the point cloud data in a database 123. Referring to FIG. 7, the data collection vehicle may employ a distance detector 209, such as a LIDAR scanner. In an embodiment, the data collection vehicle includes a plurality of lasers and receivers deployed in various directions; for example, thirty-two lasers may be tilted in different directions and angles. Alternatively, the distance detector 209 may be one or more cameras or other sensors that capture point cloud data. The server 125 stores the point cloud data in a spatial data structure, such as a probabilistic octree. An embodiment consolidates the complex geometries of the LIDAR sensor data (i.e., caused by vehicle movement and spinning sensors) into the octree representation.

Referring back to FIG. 5, at act 703, a grid representation of a region of interest is generated by the server 125 from the point cloud data stored in the database 123. The region of interest is based on boundary conditions specific to the scene. For example, to generate a localization model, a traffic lane model of the roadway is used; the traffic lane model is stored in the database 123 of the server 125. In an embodiment, the grid is generated by identifying free, occupied, and hidden spaces from the point cloud data. The free spaces correspond to locations between the sensor and each object location identified by the sensor, the occupied spaces correspond to the object locations identified by the sensor, and the hidden spaces correspond to locations devoid of sensor data. The free, occupied, and hidden spaces may be identified by analyzing the point cloud data using a probabilistic octree representation. For example, three LIDAR rays may provide inconsistent data for the same location. The probabilistic octree consolidates the data and assigns an occupancy probability to the location. If the point cloud data include three points for the same location, one indicating free space and two indicating occupied space, the probabilistic octree assigns the location a probability of sixty-six percent occupied. Because the probability of being occupied exceeds the probability of being free, the location may be characterized as occupied. The probabilities may be used to generate the grid representation for each location; for example, threshold probabilities may be applied to each space to determine its characterization.
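The sixty-six percent figure above is simply the ratio of consistent observations at a location; as a worked sketch (function and variable names are hypothetical):

```python
def occupancy_probability(occupied_hits, free_hits):
    """Fraction of observations at a location that indicate occupancy."""
    total = occupied_hits + free_hits
    return occupied_hits / total if total else None

p = occupancy_probability(occupied_hits=2, free_hits=1)
print(p)                                  # 0.666..., about sixty-six percent
print("occupied" if p > 0.5 else "free")  # occupied, since p_occ > p_free
```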

At act 705, an occlusion is identified in the region of interest. The developer system 121 of the server 125 determines the location of the occlusion based on the hidden spaces of the grid representation. The location of the occlusion is identified, and in certain embodiments the severity of the occlusion is also identified. The severity of the occlusion may include the size and/or completeness of the occlusion. For example, a small temporary object, such as a small vehicle traveling near the data collection vehicle, may cause a partial occlusion. The partial occlusion may not completely obscure the view of the roadside objects, but may reduce the sampling rate of the data collection vehicle (e.g., no complete gaps in the sensor data, but a lower sampling rate because some rays hit the small vehicle and some rays hit the roadside objects). A full occlusion may be a complete blockage of the roadside objects, such as by an object the size and shape of the tanker truck 406. Intermediate levels of occlusion may also exist.

At act 707, the occlusion is characterized by the developer system 121 based on the shape of the hidden spaces. The shape of the hidden space is determined by connected component analysis of the hidden spaces. From the shape, the occlusion may be classified as temporary or permanent. The source of the occlusion may also be determined.

At act 709, a request is generated for supplemental point cloud data for the location of the occlusion. The request is generated by the developer system 121 of the server 125 after the occlusion is identified as temporary. Alternatively, or in addition, at act 711, the map database 123 of the server 125 is updated based on the occlusion and/or the supplemental point cloud data.

At act 713, the updated map database 123 is provided to the map database 133 of an autonomous vehicle for locating the vehicle within the scene. The autonomous vehicle senses the environment around the vehicle and matches it to the updated map database 133. For example, the autonomous vehicle senses roadside objects and structures and matches them to the updated map database to determine the location of the vehicle. After the location is determined, the autonomous vehicle may navigate and control the vehicle.

FIG. 6 shows an example system 120 for updating a map database. In FIG. 6, one or more mobile devices 122 include probes 131 and are connected to the server 125 through the network 127. The server 125 includes a database 123 that stores the server map. The server map may include a lane model and a localization model. A developer system 121 includes the server 125 and the database 123. The developer system 121 generates updates to the server map based on data collected by the probes 131. A plurality of mobile devices 122 may be connected to the server 125 through the network 127. The mobile devices 122 may serve as probes or may be combined with probes. The probes 131 collect the point cloud data used by the developer system 121 to update the server map. The mobile devices 122 may include autonomous vehicles. The mobile devices 122 include databases 133 corresponding to device maps, which may be updated based on the server map. Additional, different, or fewer components may be included.

For example, point cloud data are collected by data collection vehicles and/or mobile devices 122 to update the server map at the server 125. Rather than updating vehicle maps directly, the data collection vehicles and/or mobile devices 122 collect sensor data using the probes 131 and upload the sensor data to the server 125. The server 125 provides periodic map updates to the data collection vehicles and/or mobile devices 122 by aggregating the sensor data received from the mobile devices 122 and updating the server map with the aggregated sensor data. The map updates are provided to autonomous vehicles, navigation devices, and augmented reality devices for localizing the vehicle/device in an environment based on the local database 143.

The mobile device 122 may be a personal navigation device (PND), a watch, a tablet computer, a notebook computer, or any other known or later-developed mobile device or personal computer. The mobile device 122 may also be an automobile head unit, an infotainment system, or any other known or later-developed automotive navigation system. Non-limiting embodiments of navigation devices also include mobile phones, infotainment systems, and navigation devices for air or water travel.

Communication between the mobile devices 122 and the server 125 through the network 127 may use a variety of wireless networks. The wireless networks may include cellular networks and networks using the IEEE 802.11 family of protocols, the Bluetooth family of protocols, or other protocols. The cellular technologies may include the analog advanced mobile phone system (AMPS), third generation partnership project (3GPP) technologies, code division multiple access (CDMA), the personal handy-phone system (PHS), fourth generation (4G) or long term evolution (LTE) standards, or other protocols.

FIG. 7 illustrates an example mobile device 122 of the system of FIG. 6. The mobile device 122 may serve as an autonomous vehicle or augmented reality device. The mobile device 122 includes a processor 200, a map database 143, a memory 204, an input device 203, position circuitry 207, a distance detector 209, a display 211, and a camera 213. Additional, different, or fewer components may be included in the mobile device 122.

The distance detector 209 receives sensor data indicative of roadside objects, such as signs, light poles, and guard rails. The distance detector 209 may emit a signal and detect a return signal. The signal may be a laser signal or a radio signal. From the return signal, the distance detector 209 determines a vector (distance and heading) between the location of the mobile device 122 and a position on a roadside object. The camera 213 may also be configured to detect roadside objects. The camera 213 may collect images used to calculate distances to the roadside objects. In one example, the number of pixels or relative size of a roadside object in the image indicates the distance: roadside objects that appear smaller are farther away. In another example, relative differences between two or more images indicate distance: when two images are collected a known distance apart, the relative differences of the roadside objects in the images indicate the distance to the objects.
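Under a simple pinhole-camera assumption (one possible model; not a method specified in the disclosure, and the names are hypothetical), the apparent pixel size of an object of known physical size indicates its distance:

```python
def distance_from_apparent_size(focal_length_px, object_height_m, pixel_height):
    """Pinhole model: pixel_height = focal_length_px * object_height_m / distance,
    so distance = focal_length_px * object_height_m / pixel_height."""
    return focal_length_px * object_height_m / pixel_height

# A 2 m sign imaged at 40 px with an assumed 800 px focal length is ~40 m away.
print(distance_from_apparent_size(800.0, 2.0, 40.0))  # 40.0
```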

The position detector, or position circuitry 207, is configured to determine the geographic position of the mobile device 122 and of detected roadside objects. The geographic position may be determined from the position of the vehicle during collection of the roadside object sensor data. The geographic position is used by the server 125 in updating the map database 123. The geographic position may also be used by the processor 200 during localization to match roadside objects detected by the distance detector 209 with the map database 143.

The positioning circuitry 207 may include suitable sensing devices that measure the traveling distance, speed, direction, and so on, of the mobile device 122. The positioning system may include a receiver and correlation chip to obtain a GPS signal. Alternatively, or additionally, the one or more detectors or sensors may include an accelerometer and/or a magnetic sensor built or embedded into or within the interior of the mobile device 122. The accelerometer detects, recognizes, or measures the rate of change of translational and/or rotational movement of the mobile device 122. The magnetic sensor, or compass, is configured to generate data indicative of the heading of the mobile device 122. Data from the accelerometer and the magnetic sensor may indicate the orientation of the mobile device 122. The positioning system provides location data to the mobile device 122, indicating the location of the mobile device 122.

The positioning circuitry 207 may include a Global Positioning System (GPS) receiver or a cellular position sensor for providing location data. The positioning system may utilize GPS-type technology, a dead reckoning-type system, cellular location, or combinations of these or other systems. The positioning circuitry 207 may also include suitable sensing devices that measure the traveling distance, speed, and direction of the mobile device 122. The positioning system may include a receiver and correlation chip to obtain a GPS signal. The positioning system provides location data to the mobile device 122, indicating the location of the mobile device 122.

The position circuitry 207 may also include gyroscopes, accelerometers, magnetometers, or any other device for tracking or determining the movement of a mobile device. The gyroscope is operable to detect, recognize, or measure the current orientation, or changes in orientation, of the mobile device. Gyroscope orientation detection or measurement includes measuring the pitch and yaw of the mobile device.

The mobile device 122 may be integrated into a vehicle, which may include a data collection vehicle or an assisted driving vehicle, such as an autonomous vehicle, a highly assisted driving (HAD) vehicle, or a vehicle with advanced driving assistance systems (ADAS). Any of these assisted driving systems may be incorporated into the mobile device 122. Alternatively, an assisted driving device may be included in the vehicle. The assisted driving device may include memory, a processor, and systems to communicate with the mobile device 122.

The term "autonomous vehicle" may refer to a self-driving or driverless mode in which no passengers are required to operate the vehicle. An autonomous vehicle may also be referred to as a robot vehicle or an automated vehicle. An autonomous vehicle may carry passengers, but no driver is necessary. Autonomous vehicles may also park themselves or move cargo between locations without a human operator. Autonomous vehicles may include multiple modes and transition between the modes. The autonomous vehicle may steer, brake, or accelerate the vehicle based on the vehicle database 133, including the road object attribute. The autonomous vehicle senses the environment around the vehicle with the distance detector 209 and matches the environment to the map database 143. The autonomous vehicle may also use the position circuitry 207 to match the environment to the map database 143. For example, the autonomous vehicle may sense roadside objects and match them to the map database to determine the position and orientation of the autonomous vehicle. After the position and orientation are determined, the system may navigate and control the vehicle.

A highly assisted driving (HAD) vehicle may refer to a vehicle that does not completely replace the human operator. Instead, in a highly assisted driving mode, the vehicle may perform some driving functions while the human operator performs others. The vehicle may also be driven in a manual mode, in which the human operator exercises a degree of control over the movement of the vehicle, or in a fully autonomous mode. Other levels of automation are possible. The HAD vehicle may control the vehicle through steering or braking based on the vehicle database 133, including the road object attribute. The HAD vehicle senses the environment around the vehicle with the distance detector 209 and matches it to the map database 143. The HAD vehicle may also use the position circuitry 207 to match the environment to the map database 143. For example, the HAD vehicle may sense roadside objects and match them to the map database to determine the position and orientation of the vehicle. After the position and orientation are determined, the HAD vehicle may navigate and control the vehicle.

Similarly, ADAS vehicles include one or more partially automated systems that alert the driver. The features may be used to avoid collisions. Features may include adaptive cruise control, automated braking, and steering adjustments to keep the driver in the correct lane. ADAS vehicles may issue warnings to the driver based on the traffic estimation level for a current or upcoming road link, based on the vehicle database 133 including the road object attribute. The ADAS vehicle senses the environment around the vehicle with the distance detector 209 and matches it to the map database 143. The ADAS vehicle may also use the position circuitry 207 to match the environment to the map database 143. For example, the ADAS device may sense roadside objects and match them to the map database to determine the position and orientation of the vehicle. After the position and orientation are determined, the ADAS device may navigate and control the vehicle.

The mobile device 122 may generate a routing instruction based on the vehicle database 133. The mobile device 122 may be configured to execute routing algorithms to determine an optimum route to travel along a road network from an origin location to a destination location in a geographic region. Using input(s) including map matching values from the server 125, the mobile device 122 examines potential routes between the origin location and the destination location to determine the optimum route. The mobile device 122, acting as a navigation device, may then provide the end user with information about the optimum route, in the form of guidance that identifies the maneuvers required to travel from the origin location to the destination location. Some mobile devices 122 show detailed maps on displays, outlining the route, the types of maneuvers to be taken at various locations along the route, and the locations of certain types of features.

The mobile device 122 may plan a route through a road system, or modify a current route through a road system, in response to the matched probe data. The route may extend from a current position of the mobile device, or an origin, to a destination through the road segment matched with the probe data. Possible routes may be calculated based on the Dijkstra method, an A* ("A-star") search algorithm, or other route exploration or calculation algorithms, which may be modified to take into consideration assigned cost values of the underlying road segments. Other aspects, such as distance, navigational difficulty, and/or restrictions, may also be considered when determining the optimum route.
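A minimal sketch of cost-adjusted route calculation with the Dijkstra method over a road-segment graph (the graph encoding and cost values below are illustrative assumptions):

```python
import heapq

def dijkstra(graph, origin, destination):
    """Find the minimum-cost route, where `graph` maps each node to a list
    of (neighbor, segment_cost) pairs and segment costs may reflect
    distance, navigational difficulty, restrictions, and so on."""
    queue, best, parent = [(0.0, origin)], {origin: 0.0}, {origin: None}
    while queue:
        cost, node = heapq.heappop(queue)
        if node == destination:
            break
        if cost > best[node]:
            continue  # stale queue entry
        for neighbor, seg_cost in graph.get(node, []):
            new_cost = cost + seg_cost
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                parent[neighbor] = node
                heapq.heappush(queue, (new_cost, neighbor))
    # Reconstruct the route from destination back to origin; an
    # unreachable destination yields a cost of infinity.
    route, node = [], destination
    while node is not None:
        route.append(node)
        node = parent.get(node)
    return route[::-1], best.get(destination, float("inf"))
```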

The controller 200 and/or processor 300 may include a general processor, a digital signal processor, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), an analog circuit, a digital circuit, combinations thereof, or other now known or later developed processors. The controller 200 and/or processor 300 may be a single device or a combination of devices, such as those associated with a network, distributed processing, or cloud computing.

The memory 204 and/or memory 301 may be a volatile memory or a non-volatile memory, such as a random access memory (RAM). The memory 204 and/or memory 301 may include one or more of a read only memory (ROM), random access memory (RAM), flash memory, electronic erasable programmable read only memory (EEPROM), or other type of memory. The memory 204 and/or memory 301 may be removable from the mobile device 122, such as a secure digital (SD) memory card.

The communication interface 205 and/or communication interface 305 may include any operable connection. An operable connection is one in which signals, physical communications, and/or logical communications may be sent and/or received. An operable connection may include a physical interface, an electrical interface, and/or a data interface. The communication interface 205 and/or communication interface 305 provide for wireless and/or wired communications in any now known or later developed format.

The databases 123, 133, and 143 may include geographic data used for traffic-related and/or navigation-related applications. The geographic data may include data representing a road network, including node data and road segment data. The road segment data represent roads, and the node data represent the ends or intersections of the roads. The road segment data and the node data indicate the locations of the roads and intersections, as well as various attributes of the roads and intersections. Other formats than road segments and nodes may be used for the geographic data. The geographic data may include structured cartographic data and pedestrian routes.

The databases may also include other attributes of or about roads, such as street names, address ranges, speed limits, turn restrictions at intersections (e.g., whether one or more road segments are part of a highway or tollway, or the locations of stop signs or stoplights along road segments), and points of interest (POIs), such as gasoline stations, hotels, restaurants, museums, stadiums, offices, automobile dealerships, auto repair shops, buildings, stores, parks, and so on. The databases may include one or more node data records, which may include attributes (e.g., about intersections) such as geographic coordinates, street names, address ranges, speed limits, and turn restrictions at intersections, as well as other navigation-related attributes. Other data records, such as POI data records, topographical data records, cartographic data records, and routing data records, may be added to or removed from the geographic data.

The databases may include historical traffic speed data for one or more road segments. The databases may also include traffic attributes for some or all of the road segments. A traffic attribute may indicate whether a road segment is likely to be congested.

The input device 203 may be one or more of a keypad, keyboard, mouse, trackball, rocker switch, touch pad, voice recognition circuit, or any other device or component for inputting data to the mobile device 122. The input device 203 and the display 211 may be combined as a touch screen, which may be capacitive or resistive. The display 211 may be a liquid crystal display (LCD) panel, a light emitting diode (LED) screen, a thin film transistor screen, or another type of display. The output interface of the display 211 may also include audio capabilities or speakers. In an embodiment, the input device 203 may include a device with velocity detecting capabilities.

The term "computer-readable medium" includes a single medium or multiple media, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions. The term "computer-readable medium" also includes any medium that is capable of storing or encoding a set of instructions for execution by a processor, or that causes a computer system to perform any one or more of the methods or operations disclosed herein.

In a particular non-limiting, exemplary embodiment, the computer-readable medium can include a solid-state memory, such as a memory card or other package that houses one or more non-volatile read-only memories. Further, the computer-readable medium can be a random access memory or other volatile re-writable memory. Additionally, the computer-readable medium can include a magneto-optical or optical medium, such as a disk or tape, or another storage device to capture carrier wave signals, such as a signal communicated over a transmission medium. Accordingly, the disclosure is considered to include any one or more of a computer-readable medium, a distribution medium, and other equivalents and successor media, in which data or instructions may be stored. These examples may be collectively referred to as a non-transitory computer-readable medium.

In an alternative embodiment, dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays, and other hardware devices, can be constructed to implement one or more of the methods described herein. Applications that may include the apparatus and systems described herein can broadly include a variety of electronic and computer systems. One or more embodiments described herein may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit.

In accordance with the present disclosure, the methods described herein may be implemented by software programs executable by a computer system. In an exemplary embodiment, implementations can include parallel processing. Alternatively, virtual computer system processing can be used to implement one or more of the methods or functionality described herein.

Although the present specification describes components and functions that may be implemented in particular embodiments with reference to particular standards and protocols, the invention is not limited to such standards and protocols. For example, standards for Internet and other packet-switched network transmission (e.g., TCP/IP, UDP/IP, HTML, HTTP, HTTPS) represent examples of the state of the art. Such standards are periodically superseded by faster or more efficient equivalents having essentially the same functions. Accordingly, replacement standards and protocols having the same or similar functions as those disclosed herein are considered equivalents thereof.

A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data. A computer program can be deployed to be executed on one computer or on multiple computers located at one site.

The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry.

As used in this application, the term "circuitry" or "circuit" refers to all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry); (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions; and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.

This definition of "circuitry" applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term "circuitry" would also cover an implementation of merely a processor (or multiple processors) or a portion of a processor and its (or their) accompanying software and/or firmware. The term "circuitry" would also cover, if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone, or a similar integrated circuit in a server, a cellular network device, or another network device.

Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor receives instructions and data from a read-only memory, a random access memory, or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer also includes one or more mass storage devices for storing data; however, a computer need not have such devices. Moreover, a computer can be embedded in another device, such as a mobile telephone, a personal digital assistant (PDA), a mobile audio player, or a Global Positioning System (GPS) receiver. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media, and memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. In one embodiment, a vehicle may be considered a mobile device, or the mobile device may be integrated into a vehicle.

To provide for interaction with a user, the embodiments described in this specification can be implemented on a device having a display, e.g., a CRT or LCD monitor, for displaying information to the user, and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, such as visual, auditory, or tactile feedback, and input from the user can be received in any form, including acoustic, speech, or tactile input.

“Computer-readable medium” is a term that refers to a computer-readable medium. A single medium can be used as well as multiple media such a distributed or centralized database and/or associated caches or servers that store one or several sets of instructions. Computer-readable medium is also used. Any medium capable of storing or encoding instructions that can be executed by a processor, or that causes a computer system perform any of the operations or methods described herein shall also be included.

“In an exemplary, non-limiting embodiment, the computer-readable media can contain a solid-state storage device such as a memory card, or any other packaging that contains one or more nonvolatile read only memories. The computer-readable medium may also include a random access or volatile re-writable storage. The computer-readable medium may also include a magnetooptical or optical medium such as disks, tapes, or any other storage device that captures carrier wave signals, such as those transmitted over a transmission medium. The disclosure can be viewed as including any combination of a computer-readable medium, a distribution medium, and other equivalents or successor media in which data and instructions may be stored. These examples can be collectively called a non-transitory computer-readable medium.

“In another embodiment, dedicated hardware implementations, such as application-specific integrated circuits, programmable logic arrays, and other hardware devices, can be constructed to implement any of the methods described herein. The apparatus and systems described herein can be used in a wide variety of applications. Some of the embodiments described herein can implement functions using two or more interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit.

“Embodiments of the subject matter described here can be implemented in a computing system that includes a back-end component, such as a data server, or a middleware component, or a front-end component, such as a client computer with a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter. The components of the system can be interconnected by any form or medium of digital data communication, such as a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet.

“The computing system can include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers.

“In an embodiment, data are captured and uploaded to cloud storage in order to perform offline analysis. For example, offline analysis may be performed months after the data was collected in a mapping process. Analyses can also be done in real time to obtain a real-time description of an occlusion in the region of interest. LIDAR data is collected by a LIDAR sensor in a region of interest and sent to a cloud server. The LIDAR data are assembled as point cloud data within a spatial data structure. A grid representation of the region of interest is created by evaluating this spatial data structure: each square in the grid is classified as a free square, an occupied square, or a hidden square. To characterize each square, LIDAR rays are traced between the sensor origin points and the corresponding end points. The free squares correspond to locations between a sensor origin point and the corresponding end point; the occupied squares correspond to locations associated with the end points; and the hidden squares correspond to untraced locations. An occlusion in the LIDAR data can then be detected based on the hidden squares in the grid representation, and a localization model can be generated using the LIDAR data and the detected occlusion.
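
To make the tracing step concrete, below is a minimal 2D sketch in Python with numpy. The function name `classify_grid`, the grid and cell sizes, and the half-cell ray sampling are illustrative assumptions rather than details from the patent; a production system would likely work on a 3D voxel grid with an exact grid-traversal algorithm.

```python
import numpy as np

FREE, OCCUPIED, HIDDEN = 0, 1, 2

def classify_grid(origin, end_points, shape=(200, 200), cell=0.5):
    """Classify each cell of a 2D grid as FREE, OCCUPIED, or HIDDEN by
    tracing a ray from the sensor origin to each LIDAR end point."""
    origin = np.asarray(origin, dtype=float)
    grid = np.full(shape, HIDDEN, dtype=np.uint8)  # untraced cells stay hidden

    def to_cell(p):
        return int(p[0] // cell), int(p[1] // cell)

    def in_bounds(i, j):
        return 0 <= i < shape[0] and 0 <= j < shape[1]

    for end in end_points:
        end = np.asarray(end, dtype=float)
        # Sample the ray at half-cell steps (a simple stand-in for an exact
        # grid-traversal algorithm such as Amanatides-Woo).
        n = max(2, int(np.linalg.norm(end - origin) / (cell / 2)))
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            i, j = to_cell(origin + t * (end - origin))
            if in_bounds(i, j):
                grid[i, j] = FREE  # the ray passed through: free space
        i, j = to_cell(end)
        if in_bounds(i, j):
            grid[i, j] = OCCUPIED  # the return itself: occupied space
    return grid
```

Cells that no ray ever touches, such as the region behind a semi-truck, keep the HIDDEN label; that untouched region is the LIDAR shadow from which the occlusion is identified.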

“FIG. 6 illustrates an example server 125. The server 125 contains a processor 300, a communication interface 305, a memory 301, and a database 123. An input device (e.g., a keyboard or a personal computer) may be used to enter settings into the server 125. The database 123 may include the server map, including a lane model and a localization model. The server 125 may contain additional, different, or fewer components. FIGS. 5 and 11 are example flowcharts that illustrate the operation of the server 125; additional, different, or fewer acts may also be provided.

“The geographic database 123 contains a lane model and a localization model. The lane model contains a map of the painted lines along a roadway, as well as information about the boundaries and lanes of the roadway. The localization model contains a map of roadside objects that extend beyond the roadway boundaries (e.g., by 15 meters). The lane model and the localization model can be two-dimensional (2D), three-dimensional (3D), or of another dimension (e.g., n>3).

“The memory 301 is configured to store roadside sensor data. The memory 301 can also store temporary data from the database 123, such as portions of the database 123 held for comparison with the received sensor data.

“The communication interface 305 can receive sensor data from multiple data collection vehicles and/or smartphones. The communication interface 305 can also transmit map data, such as lane model and localization model updates, to mobile devices and autonomous vehicles.

“The processor 300 can detect occlusions within the received sensor data as described above. Based on the detected occlusion, the geographic database 123 is updated. The database 123 may be updated with supplemental sensor data, such as data from targeted remapping, for the location of the occlusion. Alternatively, or in addition, the database 123 can be updated to report the location of the occlusion; this could include a warning to proceed cautiously or a warning that unreported roadside objects may exist.

“As shown in FIG. 9, the geographic database 123 may contain at least one road segment database record 304 (also referred to as an “entity” or “entry”) for each road segment within a specific geographic region. The local databases 133 may also use any of the features of the geographic database 123. The geographic database 123 may also include a node database record 306 (or “entity” or “entry”) for each node within a specific geographic region. The terms “nodes” and “segments” represent only one terminology for describing these physical geographic features; other terminology for describing these features is intended to be encompassed within the concepts. The geographic database 123 could also contain location fingerprint data for specific geographic locations.

“The geographic database 123 could also include other kinds of data 310. The other kinds of data 310 could represent other kinds of geographic features or any other type of data, such as point of interest (POI) data. The POI data could include, for example, POI records that contain a type (e.g., restaurant, hotel, city hall, police station, historical marker, ATM, golf course, etc.), the location of the POI, a phone number, hours of operation, and other details.

Indexes 314 are also included in the geographic database 123. The indexes 314 could include various types of indexes that relate data from different sources to one another or to other aspects of the data contained in the geographic database 123. For example, the indexes 314 may link the nodes in the node data records 306 to the end points of the road segments in the road segment data records 304. As another example, the indexes 314 may link road object data 308, or road object attributes, to road segments in the segment data records 304. An index 314 could, for instance, store data relating one or more locations to the road object attribute 308 at each location. The road object attribute 308 can describe the type of road object, its relative location, and the angle between the road segment and the road object.
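
As a rough illustration of what such indexes do, the following Python sketch relates hypothetical stand-ins for the node records 306, segment records 304, and road object attributes 308. The dictionary layout and field names are assumptions made for illustration, not the patent's schema.

```python
# Illustrative in-memory stand-ins for the record types described above.
node_records = {
    "N1": {"lat": 41.8781, "lon": -87.6298},
    "N2": {"lat": 41.8790, "lon": -87.6310},
}
road_object_attrs = {
    "O1": {"type": "light_pole", "offset_m": 3.2, "angle_deg": 87.0},
}
segment_records = {
    "S1": {"name": "Main St", "speed_limit_kph": 50},
}

# Indexes playing the role of indexes 314: they relate records to one another.
segment_end_points = {"S1": ("N1", "N2")}  # segment -> its end-point nodes
segment_objects = {"S1": ["O1"]}           # segment -> roadside object attrs

def objects_along(segment_id):
    """Resolve the road object attributes linked to a segment via the index."""
    return [road_object_attrs[oid] for oid in segment_objects.get(segment_id, [])]

print(objects_along("S1"))  # [{'type': 'light_pole', 'offset_m': 3.2, ...}]
```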

“The geographic database 123 may also contain other attributes of or about roads, such as geographic coordinates, physical features (e.g., lakes, rivers, railroads), street names, address ranges, and speed limits. The geographic database 123 may include one or more node data records 306, which may contain attributes (e.g., about intersections) such as street names, geographic coordinates, speed limits, and turn restrictions at intersections. Other data records, such as POI data records, topographical data records, cartographic data records, routing data records, and maneuver data records, may be added to or removed from the geographic data 302. The database 123 also contains information relevant to this invention, including temperature, altitude or elevation, lighting, sound or noise level, wind speed, magnetic interference, radio and microwave signals, cell tower and wi-fi access point information, and attributes that relate to specific approaches to a particular location.

“The geographic database 123 could include historical traffic speed data for one or more road segments. The geographic database 123 may also include traffic attributes for some road segments, indicating whether a road segment is likely to be congested.

“FIG. 10 shows some components of a road segment data record 304 that may be contained in the geographic database 123. The road segment data record 304 may include a segment ID 304(1), by which the data record can be identified in the geographic database 123. Each road segment data record 304 may include associated information (such as “attributes”, “fields”, etc.) that describes features of the represented road segment. The road segment data record 304 may include data 304(2) indicating any restrictions on the direction of vehicular travel permitted on the segment, and data 304(3) indicating a speed limit, or speed category, which is the maximum permitted vehicular speed on the segment. The road segment data record 304 can also contain classification data 304(4) indicating whether the road segment is part of a controlled access road (such as an expressway), a ramp to a controlled access road, a bridge, a tunnel, a toll road, a ferry, and so on. The road segment data record can also include location fingerprint data, for example a set of sensor data for a particular location.
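
One way to picture such a record is as a typed structure whose fields correspond to the numbered data items above. The sketch below uses a Python dataclass; the field names, types, and defaults are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoadSegmentRecord:
    """Sketch of a road segment data record 304; the comments give the
    numbered data items from the description above."""
    segment_id: int                               # 304(1): record identifier
    direction_restriction: Optional[str] = None   # 304(2): permitted travel direction
    speed_limit_kph: Optional[float] = None       # 304(3): speed limit or category
    classification: Optional[str] = None          # 304(4): e.g. "controlled_access"
    location_fingerprint: Optional[bytes] = None  # sensor data for a location
```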

“The geographic database 123 may contain road segment data records 304 (or data entities) that describe features such as road objects 304(5). The road objects 304(5) can be stored according to their location boundaries or vertices, or in a field or record using a range of values, such as from 1 to 100 for type or size. Road objects can also be stored in categories such as low, medium, and high, and additional schema may be used to describe them. The attribute data can be stored in relation to a link/segment 304, a node 306, a strand of links, a location fingerprint, an area, or a region. Settings and information for display preferences may be stored in the geographic database 123, and a display may be connected to it. The display can be configured to present data entities and the roadway network using different colors; for example, the geographic database 123 might provide different information about open parking spots.

“The road segment data record 304 also contains data 304(7) providing the geographic coordinates (e.g., the latitude and longitude) of the end points of the represented road segment. The data 304(7) references node data records 306 that represent the nodes corresponding to the end points of the road segment.

“The road segment data record 304 may also contain or be associated with other data 304(7) that refer to various other attributes of the represented road segment. The various attributes associated with a road segment may be included in a single road segment record, or may be included in more than one type of record that cross-references to the others. For example, the road segment data record 304 may include data identifying the intersections at each node corresponding to the represented road segment, the name or names by which the segment is identified, the street address ranges along the segment, and the turn restrictions at each intersection.

“FIG. 10 also shows some components of a node data record 306 that may be contained in the geographic database 123. Each node data record 306 may have associated information (such as “attributes”, “fields”, etc.) that allows identification of the road segments that connect to it and/or its geographic position (e.g., its latitude and longitude coordinates). The node data records 306(1) and 306(2) contain the latitude and longitude coordinates 306(1)(1) and 306(2)(1), respectively, as well as road object data 306(1)(2) and 306(2)(2). The road object data 306(1)(2) and 306(2)(2) can include information about roadside objects based on the localization model, such as light poles, signs, guard rails, bridges, and so on. The node data records 306(1) and 306(2) may also include other data 306(1)(3) and 306(2)(3) that refer to various other attributes of the nodes.

A content provider (e.g., a map developer) may maintain the geographic database 123. The map developer might collect geographic data to generate and enhance the geographic database 123. Data may be obtained from businesses, municipalities, or other authorities. The map developer might also employ field personnel to travel a region and record information about the roads. Remote sensing, such as satellite or aerial photography, may also be used. The database 123 may be integrated into or connected to the server 125.

“The geographic database 123 and the data contained within it may be licensed or made available on demand. Other traffic server providers or navigation services may have access to the data in the geographic database 123, which may contain traffic data, location fingerprint data, and/or predicted parking availability data. Other data, including roadside objects that can be used to localize an autonomous vehicle, may also be stored.”

“FIG. 11 shows an example flowchart for detecting occlusions from sensor data. The acts may be performed by the systems of FIGS. 6-10, discussed above, and/or by another system. Referring to FIG. 8, acts 801-813 can be performed by the map server 125. Referring to FIG. 7, acts 815 and 817 can be performed by the mobile device 122. Additional, different, or fewer acts may be provided, and the acts may be performed in the order shown, in other orders, and/or in parallel.

“At act 801, sensor data for a scene is received by the map server 125 and stored in the memory 301 or the database 123. The sensor data may include ray data having an origin point and an end point. At act 803, the processor 300 uses the sensor data to generate a grid for the scene, for example by tracing a path from the origin point to the end point in order to identify free space, occupied space, and hidden space. At act 805, the processor 300 detects objects in the scene from the occupied space.

“At act 807, an occlusion is identified from the hidden space by the processor 300. The occlusion comprises a false negative for the object detection. The false negative may be a missing portion of the sensor data caused, for example, by a temporary object blocking the sensor’s view. When identifying the occlusion, its location and severity can also be determined. The severity describes the extent of the occlusion, such as whether it is a complete occlusion or a partial occlusion.
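
As a plausible illustration of the severity determination, the sketch below computes the fraction of hidden cells within a region of interest of the grid from the earlier sketch. The thresholds and labels (`complete`, `partial`) are assumptions for illustration only.

```python
import numpy as np

HIDDEN = 2  # cell label from the earlier grid sketch

def occlusion_severity(grid, roi):
    """Return the fraction of hidden cells in a region of interest plus a
    coarse severity label.

    grid: 2D array of FREE/OCCUPIED/HIDDEN codes.
    roi:  (i0, i1, j0, j1) index bounds of the region of interest.
    """
    i0, i1, j0, j1 = roi
    window = grid[i0:i1, j0:j1]
    if window.size == 0:
        return 0.0, "none"
    ratio = float(np.mean(window == HIDDEN))
    if ratio == 1.0:
        return ratio, "complete"
    return ratio, "partial" if ratio > 0.0 else "none"
```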

“At act 809, the processor 300 analyzes the shape of the occlusion to determine its source. At act 811, the database 123 is updated with supplemental sensor data for the occlusion. At act 813, the localization model is provided from the map server 125 to a mobile device 122 for locating the mobile device in the scene.

“At act 815, sensor data is captured by the distance detector 209 of the mobile device 122 in order to locate the mobile device within the scene. At act 817, the location of the mobile device is determined by the processor 200, for example by identifying an occlusion in the sensor data stored in the memory 204 and comparing the data against the map database 143. An autonomous vehicle can use detected occlusions to better understand the surrounding area.
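
One toy example of how a detected occlusion could enter the localization step is to exclude occluded cells from the match between observed data and the map, so that roadside objects hidden from the sensor are not counted as disagreements. The masking approach below is an assumption for illustration, not the method claimed in the patent.

```python
import numpy as np

HIDDEN = 2  # cell label from the earlier grid sketch

def localization_score(observed, map_grid):
    """Agreement between an observed occupancy grid and the map grid for a
    candidate pose, ignoring cells the sensor could not see."""
    visible = observed != HIDDEN
    if not visible.any():
        return 0.0  # everything occluded: no evidence either way
    return float(np.mean(observed[visible] == map_grid[visible]))
```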

The illustrations in this document are intended to provide a general understanding of the structure of the various embodiments. They are not intended to serve as a complete description of all the elements and features of apparatuses and systems that use the structures and methods described herein. Many other embodiments may be apparent to those skilled in the art upon reviewing the disclosure. Other embodiments may be used and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. The illustrations may not be drawn to scale and are merely representational; some proportions may be exaggerated while others may be minimized. Accordingly, the disclosure and figures should be regarded as illustrative rather than restrictive.

Although this specification contains many specifics, they should not be construed as limitations on the scope of the invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment; conversely, features described in the context of a single embodiment can be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described as acting in certain combinations, and even initially claimed as such, one or more features of a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or a variation of a sub-combination.

Click here to view the patent on Google Patents.

How to Search for Patents

A patent search is the first step to getting your patent. You can do a Google patent search or a USPTO search. “Patent pending” is the term for a product that is covered by a pending patent application; you can search Public PAIR to find the patent application. After the patent office approves your application, you will be able to do a patent number lookup to locate the issued patent, and your product is then patented. You can also use the USPTO search engine; see below for details. You can also get help from a patent lawyer. Patents in the United States are granted by the United States Patent and Trademark Office (USPTO), which also reviews trademark applications.

Are you interested in similar patents? These are the steps to follow:

1. Brainstorm terms to describe your invention, based on its purpose, composition, or use.

Write down a brief but precise description of the invention. Avoid generic terms such as “device”, “process,” or “system”. Consider synonyms for the terms you chose initially, and note important technical terms and keywords.

Use the questions below to help you identify keywords or concepts.

  • What is the purpose of the invention? Is it a utilitarian device or an ornamental design?
  • Is the invention a way to create something or perform a function, or is it a product?
  • What is the invention made of? What is its physical composition?
  • What is the invention used for?
  • What are the technical terms and keywords used to describe the invention’s nature? A technical dictionary can help you locate the right terms.

2. Use these terms to search for relevant Cooperative Patent Classifications with the Classification Search Tool. If you are unable to find the right classification for your invention, scan through the classification’s class schedules (class schemas) and try again. If you don’t get any results from the Classification Text Search, consider substituting synonyms for the words you used to describe your invention.

3. Check the CPC Classification Definition to confirm the relevance of the CPC classification you found. If the selected classification title has a blue box with a “D” at its left, the hyperlink will take you to a CPC classification definition. CPC classification definitions will help you determine the applicable classification’s scope so that you can choose the most relevant one. These definitions may also include search tips or other suggestions that could be helpful for further research.

4. Use the Patents Full-Text Database and the Image Database to retrieve patent documents that include the CPC classification. By focusing on the abstracts and representative drawings, you can narrow your search down to the most relevant patent publications.

5. Examine this selection of patent publications closely for similarities to your invention. Pay attention to the claims and the specification. You may find additional relevant patents through the references cited by the applicant and the patent examiner.

6. Retrieve published patent applications that match the CPC classification you chose in Step 3. You can use the same search strategy as in Step 4, narrowing your results to the most relevant patent applications by reviewing the abstracts and representative drawings. Next, examine the published patent applications carefully, paying special attention to the claims and the drawings.

7. You can find additional US patent publications by keyword searching in the AppFT and PatFT databases, by classification searching of non-U.S. patents as described below, and by using web search engines to find non-patent literature disclosures about inventions. Here are some examples:

  • Add keywords to your search. Keyword searches may turn up documents that are not well-categorized or have missed classifications during Step 2. For example, US patent examiners often supplement their classification searches with keyword searches. Think about the use of technical engineering terminology rather than everyday words.
  • Search for foreign patents using the CPC classification, for example with international patent office search engines such as Espacenet, the European Patent Office’s worldwide patent publication database of over 130 million patent publications. Other national patent office databases are also available.
  • Search non-patent literature. Inventions can be made public in many non-patent publications. It is recommended that you search journals, books, websites, technical catalogs, conference proceedings, and other print and electronic publications.

You can hire a registered patent attorney to review your search. A preliminary search will help you prepare to discuss your invention and related prior art with the attorney, so that the attorney does not spend time, or your money, on patenting basics.
