Artificial Intelligence – Stephen Milton, MooveAi

Abstract for “Vehicle-data analytics”

Provided is a system that adjusts vehicle operations based on machine-learning operations performed across multiple computing layers.

Background for “Vehicle-data analytics”

1. Field

The present disclosure relates generally to computing and, more specifically, to vehicle-data analytics.

2. Description of the Related Art

Traditionally, analyses of large volumes of data related to user geolocations have relied on data from mobile devices, including intermittently reported geolocations. Such systems are not designed to handle geolocation data from vehicles. Vehicles, whether remotely operated, semi-autonomous, or fully autonomous, may carry a range of electronic devices, including sensors, geolocation devices, and transmitters. These vehicles can provide high-accuracy, high-dimensional, high-bandwidth data that exceeds what traditional analysis can accommodate, such as geolocation that is centimeter-accurate and reported every 1 to 500 microseconds. These geolocation data can be combined with high-dimensional data such as video feeds, light detection and ranging (LIDAR) readings, ultrasonic depth-sensor readings, and on-board vehicle diagnostics.

The frequency and dimensionality of geolocation data, and of associated sensor data from vehicles, can be used to provide insights and actionable data to stakeholders in the transportation and infrastructure communities. Existing infrastructure, however, is not equipped to handle the high-frequency, high-dimensional data from modern vehicles, in part because it cannot absorb the volume and variety of data arriving from multiple vehicles, and the problem cannot be solved by simply adding more computing power. Traditional analysis is difficult because of the frequency, sparseness, and multi-dimensional nature of vehicle sensor data, which complicates large-scale analysis and lowers confidence in analysis results. The problem is compounded when multiple vehicles report different sets of sensor measurements using different calibrations or standards.

The following is a non-exhaustive listing of aspects of the present techniques. These and other aspects are described in the following disclosure.

Some aspects include processing and transferring sensor data from vehicles across various computing layers in order to perform machine-learning operations that compute adjustments for vehicle control systems.

Some aspects include obtaining a first set of geolocations and a first set of control-system data from a first vehicle, where the first vehicle includes one or more processors that execute a vehicle application as part of a vehicle computing layer, and where the first set of control-system data includes data indicating use of a control system of the vehicle. A top-view application executing on a top-view computing layer receives the second and third sets of geolocations from vehicles in a region.

Some aspects include a tangible, non-transitory, machine-readable medium storing instructions that, when executed by a data processing apparatus, cause the data processing apparatus to perform operations including the above-mentioned processes.

Some aspects include a system comprising: one or more processors; and memory storing instructions that, when executed by the processors, cause the processors to effectuate the operations described above.

To address the problems discussed herein, the inventors had both to invent solutions and, in some cases, to recognize problems overlooked, or not yet foreseen, by others in data analytics. Indeed, the inventors wish to emphasize the difficulty of recognizing problems that are nascent and will become more apparent in the future should industry trends continue as they expect. It should be understood that different embodiments address different subsets of these problems; not all embodiments address every problem with traditional systems or afford every benefit described herein. The improvements described can, in other words, be applied to different problems.

Some embodiments may use a federated machine-learning architecture that supports active learning to draw inferences about vehicles, drivers, and places based on relatively high-bandwidth road-side and on-board sensor feeds. Some embodiments may arrange computing resources hierarchically, e.g., at the vehicle (or vehicle-subsystem, like the braking system) level, at a neighborhood level, and at supraregional levels. Lower-level computing resources may apply lossy compression, using machine-learning techniques, to lower-level data before reporting upward as inputs to higher levels of the architecture. A variety of machine-learning approaches may be used, such as unsupervised learning, reinforcement learning, and supervised learning. Some embodiments combine multiple models at different layers, in some cases optimizing model parameters according to an objective function only within a model's own context, and in other cases optimizing model parameters across models as well.
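As an illustrative, non-limiting sketch of this upward-reporting hierarchy (the function names, choice of summary statistics, and sample data below are assumptions, not part of this disclosure), a vehicle-layer agent might lossily reduce raw samples to a few statistics before reporting to a neighborhood-layer aggregator:

```python
import statistics

def vehicle_summary(speed_samples):
    """Lossy, vehicle-layer reduction: raw samples -> a few statistics.
    A stand-in for the learned compression described above."""
    return {
        "mean": statistics.fmean(speed_samples),
        "stdev": statistics.pstdev(speed_samples),
        "n": len(speed_samples),
    }

def neighborhood_rollup(summaries):
    """Neighborhood-layer aggregation of vehicle-layer summaries."""
    n = sum(s["n"] for s in summaries)
    mean = sum(s["mean"] * s["n"] for s in summaries) / n
    return {"mean": mean, "n": n}

v1 = vehicle_summary([10.0, 12.0, 11.0, 13.0])  # one vehicle's reports
v2 = vehicle_summary([20.0, 22.0])              # another vehicle's reports
print(neighborhood_rollup([v1, v2]))  # weighted mean = 88 / 6 ≈ 14.67
```

The higher layer never sees the raw samples, only the reduced summaries, mirroring the bandwidth-saving role the lossy compression plays in the architecture.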

The resulting set of machine-learning models can be used for different purposes. Vehicle manufacturers and fleet operators might configure various adjustable attributes to make vehicles more suitable for their users, for a particular place or time, or some combination thereof. Government agencies may reconfigure roads and other infrastructure in response to machine-learning outputs, especially from the higher levels of the hierarchy. Fleet operators, such as trucking companies, ride-share platforms, or delivery services, may adjust the routing of their fleets in response to outputs of the trained models. Some embodiments may enrich geographic information systems, encoding attributes of road segments, intersections, and other points of interest as inferred by the machine-learning models. Parts makers, such as tier-1 suppliers, can adjust the design or configuration of parts according to outputs of the machine-learning models. Insurance companies and road-side service providers might customize their offerings for fleet operators and consumers based on outputs of the trained models.

FIG. 1 shows an example computing environment 100 in which some of these problems may be mitigated in accordance with some embodiments. The computing environment 100 may, for example, be designed to address various problems associated with processing vehicle data, such as vehicle geolocation histories and vehicle sensor-data streams, with computer systems. Some embodiments implement a multilayer vehicle-learning infrastructure in multi-layer hardware and software. This infrastructure can receive vehicle sensor data and push control-system adjustments, or other adjustment values, based on machine-learning operations that incorporate sensor data from multiple vehicles and operator metrics stored in vehicle-operator profiles. Vehicles can communicate with an application running on a local computing layer, or with a top-view computing layer such as a cloud-based computing service, via wireless base stations, such as cellular towers. A wireless base station can be wired to a data center or another computing device in the local computing layer, or wired to a cloud-based computing service or application operating in the top-view computing layer. The base stations could serve as access points for Wi-Fi or cellular networks that collect sensor data from vehicles within wireless range of the respective base stations (e.g., in wireless ad-hoc or centrally managed mesh networks). In some embodiments, a data center (or another computing device) that computes results from the sensor data of one or more vehicles can communicate additional results to a central computing system, or to additional data centers operating as part of a cluster.

In some cases, the multilayer vehicle-learning infrastructure may be used to adjust vehicle operations based on vehicle sensor data and geolocation data. The infrastructure may ingest streamed data and apply different machine-learning operations depending on the computing layer, and results or application state can be shared to varying degrees between instances within a layer and among different computing layers. The multilayer vehicle-learning infrastructure may execute various types of machine learning and other computations at a vehicle computing layer, a local computing layer, and a top-view computing layer, adapted to each layer's bandwidth and computing resources.

A vehicle 102 traveling on a road may include a suite of vehicle sensors 104. The vehicle sensors 104 can feed diverse sensor data, often in large volumes, to the vehicle-learning infrastructure, from which the infrastructure may infer various attributes about the vehicle, its operators, or the places it travels. Large volumes may include more than 50 megabytes per minute, more than 100 megabytes per mile, or more than 1 gigabyte per mile. In some cases, an assortment of sensor data can be obtained simultaneously, from more than five types of sensors, more than ten types of sensors, or more than 50 types of sensors. The inferred attributes can be combined to create vehicle profiles, operator profiles, or location profiles, each of which can then be used to compute vehicle adjustment values. In some cases, profiles may be embedded in machine-learning model parameters. The attributes can also be combined to compute risk indicators for locations, vehicle types, or operator types, and vehicle adjustment values can be pushed to vehicles when one or more risk indicators meet or exceed a threshold.

The vehicle sensors 104 can collect a variety of data, including vehicle-proximity data, vehicle control-system data, motion-based data, and geolocation data. LIDAR, visual simultaneous localization and mapping, radar readings, or other depth-sensor data can be used to determine the proximity of objects within range of the vehicle. Vehicle-proximity data may, in some cases, include the distances of objects and a classification of each object from an ontology that includes cars, trucks, motorcycles, pedestrians, and bicycles. An object-orientation ontology may be used to classify the position or orientation of one or more objects relative to the vehicle, as well as their movement and acceleration vectors.

Control-system data can include data about components that are directly controlled or monitored by a vehicle operator. Any component that controls one or more elements of the vehicle's operation can be part of the control system. A control system could include components such as a steering wheel, brake pedal, accelerator pedal, turn-signal switches, light switches, and the like. A control system may also include virtual elements displayed on a screen or in augmented- or virtual-reality glasses, with which an operator can interact to control the operation of the vehicle. Control-system data may include information such as steering-wheel angular velocity, cruise-control speed, a selection of adaptive cruise-control behavior, applied braking force, or whether a turn signal has been activated.

Motion-based sensor data may include any data that represents physical motion of a vehicle, or of a component of the vehicle, captured using a motion sensor such as an accelerometer or gyroscope. Motion-based sensor data may include vehicle velocity, acceleration, turn radius, impulse, and wheel slippage. For example, motion-based sensor data may include an acceleration profile comprising an assortment of acceleration readings taken by an accelerometer over a period of time.

Vehicle operations data may include data about the operational status of a vehicle or of one or more of its components. A variety of sensors can be used to acquire vehicle operations data, including those described above, as well as temperature sensors, chemical sensors, resistivity sensors, and others. Vehicle operations data might include values such as temperature, fuel-air ratio, oxygen concentration in the exhaust, rotations per minute, tire pressure, battery status for electric vehicles, and the like.

For example, an operator may activate a turn signal after seeing an anomalous object 110, rotate the steering column, and press the brake pedal to slow the vehicle. Control-system data in this example could include the angular rotation of the steering column over time and the force (e.g., in newtons, over time) applied to the brake pedal during deceleration. Motion-based sensor data could include the vehicle's speed or turn radius during deceleration. Vehicle operations data could include values such as the electric power drawn by the turn signal or an engine temperature measurement.

Vehicle sensors can sense other information as well, such as vehicle condition, vehicle-operator status, and distances to objects in the vehicle's environment. Some channels of sensor data can be represented as a point cloud made up of vectors, each with an origin corresponding to the vehicle's position. Vehicle sensors may include LIDAR, radar, or ultrasonic sensors, as well as one or more optical cameras, which can include stereoscopic arrays or other spatial arrangements of cameras. One or more optical cameras can provide sensor data to one or more processes that determine the geometry of the vehicle's surroundings based on variation in coloration and focus, parallax, projected structured-light location, occlusion, or the like.

Some embodiments could include satellite-navigation sensors, such as sensors for the Global Positioning System (GPS), the Global Navigation Satellite System (GLONASS), or Galileo, and other on-board sensors by which one or more geolocations may be determined. Some embodiments may also include radios that can sense beacons from terrestrial transmitters, like cell towers and Wi-Fi access points, and location sensors may triangulate the vehicle's position using such signals.

Some embodiments include one or more temperature sensors (e.g., for air intake and exhaust), wind-speed sensors, road-roughness sensors, and sensors that detect distances to other vehicles. Other examples include light-intensity sensors, inertial sensors, sensors that perform dead reckoning using measured wheel rotations and compasses, and sensors that report the state of the vehicle's diagnostic system via OBD-II.

The vehicle 102 may have an onboard computing device 106 that performs computations on sensor data. In some cases, the computing device may be distributed across the vehicle, e.g., as a group of processors that operate on data from different subsystems, like the braking subsystem, transmission subsystem, vehicle entertainment subsystem, vehicle navigation subsystem, suspension subsystem, and others. The onboard computing device 106 may use the computation results for one or more tasks and can store the results or transmit them to another layer, e.g., as part of a vehicle profile or operator profile. A vehicle profile or operator profile may be any record or group of records representing dynamic and static information about the vehicle or operator. A vehicle profile can include records indicating miles traveled, road conditions encountered, driver behavior, and so on, and may store events such as sudden swerves, objects detected within 10 cm of the vehicle, and decelerations in an events log. An operator profile can likewise contain information about the vehicle operator's past events with regard to one or more vehicles. In some embodiments, one or more processors may maintain these profiles as part of the vehicle computing layer. An example is an operator-profile log stored as an array of strings, where each entry may contain a vehicle identifier, a timestamp, and relevant operational events, such as "vehicle ID VICCRMND04; 2020-05-05 12:00; drove 3.5 hours without incident."

The sensor data may include raw sensor data, such as analog or un-transformed digital data; for example, a list of distances detected by a LIDAR array or a steering wheel's angular velocity. The sensor data may also include derived data, where derived data is computed based on other sensor data. Derived sensor data may be stored as an unsorted numerical array, or, alternatively or additionally, converted into an ordered or more structured form, such as a timestamped GPS geolocation. A timestamped geolocation may be an array in which the first entry represents latitude, the second entry represents longitude, the third entry represents the time of recording, and the fourth entry represents a confidence radius centered on the latitude and longitude, which could be calculated based on the GPS sensor's confidence.
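The four-entry geolocation layout described above might be represented as follows; the class name, field names, and sample values are illustrative assumptions, not part of this disclosure:

```python
from dataclasses import dataclass

@dataclass
class GeoFix:
    """Timestamped geolocation in the four-entry layout described above:
    latitude, longitude, time of recording, and confidence radius."""
    lat: float       # degrees
    lon: float       # degrees
    t: float         # unix timestamp of the recording
    radius_m: float  # confidence radius centered on (lat, lon), in meters

    def as_array(self):
        # The ordered array form: [latitude, longitude, time, radius]
        return [self.lat, self.lon, self.t, self.radius_m]

fix = GeoFix(37.7749, -122.4194, 1700000000.0, 0.03)
print(fix.as_array())  # [37.7749, -122.4194, 1700000000.0, 0.03]
```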

Some embodiments may receive derived sensor data determined by one or more sensor-level processing units. For example, a vehicle sensor could include an inertial measurement unit (IMU) (e.g., a three- or six-axis IMU) that provides both a numeric representation of a measured force and a classification value indicating that the measured force meets a threshold indicative of an imminent collision. Some embodiments can also generate derived sensor data based on previously derived or unprocessed data.

A vehicle agent 108 can be executed by the onboard computing device 106. The vehicle agent 108 may be an application that listens for sensor data: acting as a listener, it waits for one or more events to occur, such as receipt of unprocessed or derived sensor data, and then performs a set of tasks in response, such as storing the data or performing computations. The agent 108 can also poll other applications, devices, or sensors to obtain sensor data. The vehicle agent 108 may perform data compression, feature reduction, noise filtering, privacy filtering, and machine-learning training operations. The agent 108 may be included in a distributed vehicular data processing suite, described further below, which may comprise a dynamic, self-updating set of algorithms for learning models and may execute partially on the onboard computing device 106. As described below, portions of the vehicular data processing suite can also execute on the local computing layer or the top-view computing layer.

In some embodiments, the onboard computing device 106 may receive multiple channels of sensor data from a plurality of sensors, for example more than 10, more than 20, or, in many commercially relevant cases, more than 50. Each channel may contain one or more streams of values that vary over time to encode different endogenous and exogenous aspects of the vehicle or its environment. The device 106 can associate values with each other based on when they were received, or based on timestamps. Some embodiments group sensor values into temporal buckets; for example, values may be grouped by the most current state every 10 ms, every 50 ms, or every 500 ms. The onboard computing device 106 may execute one or more machine-learning models to produce one or more channels of output based on different subsets of these inputs.
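The temporal bucketing described above might be sketched as follows, keeping the most recent value per channel in each fixed-width bucket; the reading format and 50 ms width are assumptions for the sketch:

```python
from collections import defaultdict

def bucket_readings(readings, width_ms=50):
    """Group (timestamp_ms, channel, value) readings into temporal buckets,
    keeping the most recent value per channel within each bucket."""
    buckets = defaultdict(dict)
    for ts, channel, value in sorted(readings):
        buckets[ts // width_ms][channel] = value  # later values overwrite
    return dict(buckets)

readings = [
    (10, "speed", 21.0), (12, "yaw", 0.1),
    (49, "speed", 21.5),   # same 50 ms bucket: overwrites the 21.0 reading
    (60, "speed", 22.0),
]
print(bucket_readings(readings))
# {0: {'speed': 21.5, 'yaw': 0.1}, 1: {'speed': 22.0}}
```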

Computations and related activities may be performed by the vehicle agent 108, which can be part of the vehicle computing layer in a multilayer vehicle-learning infrastructure. Although certain tasks and algorithms are described as being performed by an agent running at the vehicle computing layer, other embodiments can perform these tasks and others at other layers. For instance, while this disclosure describes an embodiment that employs an autoencoder at the vehicle computing layer to reduce the feature space of sensor data, other embodiments could use an autoencoder to reduce data sent from the local computing layer. The vehicle agent 108 can perform one or more tasks related to processing sensor data, analyzing sensor data, and taking action in response.

Some embodiments may use lossless data-compression methods like run-length encoding, Lempel-Ziv compression methods, and Huffman coding. For example, one or more processors attached to a vehicle could apply the Lempel-Ziv 77 (LZ77) algorithm for lossless compression of geolocation data by scanning the data with a sliding window. One or more attached processors may also implement lossy data-compression methods, such as transform coding, discrete cosine transforms, discrete wavelet transforms, or fractal compression.
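As a rough illustration of the lossless path (not the disclosure's implementation), Python's `zlib`, whose DEFLATE format combines LZ77-style sliding-window matching with Huffman coding, can round-trip a serialized geolocation stream; the sample data and JSON serialization are assumptions for the sketch:

```python
import json
import zlib

# Hypothetical geolocation samples: [lat, lon, unix_time, confidence_radius_m].
samples = [[37.7749 + i * 1e-5, -122.4194, 1700000000 + i, 0.02]
           for i in range(1000)]

raw = json.dumps(samples).encode()
# DEFLATE = LZ77 sliding-window matching + Huffman coding.
compressed = zlib.compress(raw, level=9)
print(len(raw), len(compressed))  # highly repetitive data compresses well

restored = json.loads(zlib.decompress(compressed))
assert restored == samples  # lossless round trip
```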

Some embodiments may employ one or more agents operating in the vehicle computing layer to perform a machine-learning operation. In some embodiments, the machine-learning operation may include a training operation that uses sensor data to minimize an objective function, e.g., via gradient-based methods (batch gradient descent, stochastic gradient descent, etc.). For example, a machine-learning operation may predict a response profile for a braking event upon detection of an object or arrival at a specific geolocation. As another example, a machine-learning operation may predict the most at-risk or vulnerable aspect of a vehicle's operational performance based on data from sensors such as motion-based sensors and internal engine temperature and pressure sensors.
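A minimal sketch of such a training operation, under the assumption of a toy linear braking model and synthetic data (neither is from this disclosure), might use stochastic gradient descent to minimize a squared-error objective:

```python
import random

# Toy, hypothetical data: braking force y = 0.8 * speed + 1.0, noise-free.
random.seed(0)
data = [(s, 0.8 * s + 1.0) for s in range(5, 40)]  # (speed, braking force)

w = b = 0.0
lr = 1e-3
for _ in range(50_000):
    s, y = random.choice(data)        # stochastic: one sample per step
    err = (w * s + b) - y
    w -= lr * err * s                 # gradient of squared error w.r.t. w
    b -= lr * err                     # (factor of 2 folded into the rate)
print(round(w, 3), round(b, 3))       # approaches w = 0.8, b = 1.0
```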

As explained below, the results of a machine-learning operation may include the output of the operation or the state values of the operation; a machine-learning result could include a predicted acceleration pattern, or the perceptron weights and biases used in a neural network. The machine-learning results can be used to initiate, stop, or modify vehicle operations. For example, the vehicle agent 108 might use LIDAR sensor data to detect and classify an anomalous object 110 and, based on that classification, play a visual or auditory warning message. The machine-learning results can also be sent to the local computing layer, e.g., a data center, and may be compressed, feature-reduced, or encrypted before transmission. In some cases, training is performed on-board; in other cases, embodiments can offload training or obtain model parameters through transfer learning.

Some embodiments can generate or update a vehicle profile or operator profile, which may include vehicle data or operator data equal to, or based on, machine-learning results. Updating a vehicle profile can include adding vehicle data to an existing profile, merging vehicle profiles, deleting vehicle profiles, or creating a new profile. Likewise, updating an operator profile could include creating a new profile, adding information to an existing profile, merging two profiles into a single operator profile, or deleting an operator profile.

In certain embodiments, a vehicle profile and an associated operator profile could indicate an increased risk of accident when the vehicle travels on a road, or type of road, associated with dangerous conditions. A vehicle profile can be sensitive to particular vehicle operators, yielding different risk values depending on the operator using the vehicle, and these risk values can be normalized against the distribution of values for the population at that location. For example, some embodiments might include an operator profile reporting that a vehicle's driver takes a corner faster than 95% of other vehicle operators, or brakes more abruptly than 80% of them.
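One way such a population-normalized comparison might be computed (a sketch; the peer speeds and function name are hypothetical) is by ranking an operator's metric against the distribution for the location:

```python
from bisect import bisect_left

def percentile_rank(value, population):
    """Fraction of the population that the value exceeds."""
    pop = sorted(population)
    return bisect_left(pop, value) / len(pop)

# Hypothetical cornering speeds (km/h) of peer vehicles at one location.
corner_speeds = [18, 22, 25, 27, 30, 31, 33, 35, 38, 45]
print(percentile_rank(36, corner_speeds))  # 0.8 -> faster than 80% of peers
```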

In some embodiments, a vehicle profile or an operator profile could include a set of associated geolocations that form a path of travel, and the terminal ends of this path may be associated with additional data. For example, as the vehicle 102 moves along the route 160, terminal ends of the route 160 could be associated with additional data in the operator profile. Some embodiments may cluster or up-weight geolocations associated with the beginnings and ends of trips in certain computations, filtering out non-terminal points. Some embodiments can detect terminal ends when a vehicle sensor detects less than a threshold speed or amount of movement over a specified time period.
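The terminal-end test described above (low speed sustained over a time window) might be sketched as follows; the thresholds, track format, and function name are assumptions for illustration:

```python
def terminal_ends(track, speed_floor=0.5, dwell_s=120):
    """Return indices where the vehicle first drops below speed_floor (m/s)
    and stays there for at least dwell_s seconds -- treated as trip
    terminal ends. track is a list of (unix_time, speed_mps) samples."""
    ends, start = [], None
    for i, (t, v) in enumerate(track):
        if v < speed_floor:
            if start is None:
                start = i
            if t - track[start][0] >= dwell_s:
                ends.append(start)
                start = None  # report each dwell at most once
        else:
            start = None
    return ends

# 100 s of driving at 10 m/s, then a 200 s stop starting at index 10.
track = ([(t, 10.0) for t in range(0, 100, 10)]
         + [(100 + t, 0.0) for t in range(0, 200, 10)])
print(terminal_ends(track))  # [10]
```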

In certain embodiments, data from a set of sensors attached to one vehicle may be used to collect information about another vehicle, and that information may be stored in the second vehicle's profile. For example, the vehicle sensors 104 can capture images, videos, and audio recordings of a second vehicle 192. The vehicle agent 108 may analyze one or more of these recordings, create a second vehicle profile for the second vehicle 192, and store the analysis results in that profile. The second vehicle profile can include one or more features such as vehicle shape, vehicle color, or license plate. The vehicle agent 108 can store the second vehicle profile in a non-transitory, computer-readable medium of the onboard computing device 106, or transmit it to a base station 120. Vehicle profiles generated by multiple vehicles may be combined into a single profile for the same detected vehicle, even if the detected vehicle transmits no data or has no sensor outputs of its own.

In some embodiments, the onboard computing devices of one or more vehicles may form part of the vehicle computing layer of a multilayer vehicle-learning infrastructure. In a federated learning (FL) architecture, an agent can perform a training operation and then send the results to another layer of the infrastructure. Each of the computing devices attached to a vehicle may join an FL population and perform training operations or other machine-learning operations. An FL round may allow training weights, results, or other information to be transmitted from one or more agents to another layer without exposing the training data to other agents in the same layer. These values may be used to increase the accuracy and speed of computations by vehicle-computing-layer agents or to provide instructions for control-system adjustments. An FL method can thereby improve vehicle privacy while providing instructions for control-system adjustments and retaining high predictive power in the machine-learning models.
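An FL round of the kind described might be sketched as sample-weighted averaging of per-vehicle model weights in the style of FedAvg, with only the weights (never the raw sensor data) leaving each vehicle; the flat weight lists and sample counts below are illustrative assumptions:

```python
def fed_avg(client_updates):
    """FedAvg-style sketch: combine locally trained model weights from
    several vehicles without sharing their raw training data.
    client_updates is a list of (weights, num_samples) pairs."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    merged = [0.0] * dim
    for weights, n in client_updates:
        for j, w in enumerate(weights):
            merged[j] += w * n / total  # weight clients by sample count
    return merged

# Two vehicles report weights trained on 100 and 300 local samples.
print(fed_avg([([1.0, 2.0], 100), ([3.0, 4.0], 300)]))  # [2.5, 3.5]
```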

The vehicle 102 may include a wireless network interface 105 that allows the vehicle to send and receive data through the base station 120 via a wireless data session 109. The sent data can include raw sensor data from the vehicle sensors 104 as well as analysis results generated by the vehicle agent 108, and may be compressed, encrypted, or feature-reduced. In some instances, the receiving servers can be hosted by the vehicle's manufacturer, by suppliers of equipment used in the vehicle, or by other third parties. In some embodiments, the wireless network interface includes a radio that transmits data between the vehicle and cellular base stations; the radio may also include a satellite radio that communicates with satellite networks in low-earth or higher orbits.

A roadside sensor 134 is shown attached to a roadside device 130. Any sensor located close enough to a road to capture an auditory, visual, or other feature of the road (e.g., sensing cars on the road) may serve as a roadside sensor. Some embodiments include roadside sensors that provide roadside sensor data to further improve the predictive ability of the multilayer vehicle-learning system. Roadside sensor data may include temperature, pressure, humidity, still images, and audio streams. In some instances, data from the roadside sensors may be used to verify data collected from individual vehicles in the vehicle computing layer, such as the vehicle 102; for example, computing devices may use roadside data to validate vehicle-transmitted sensor data for model validation.

In certain embodiments, the roadside device 130 can capture images, videos, or sounds of, or associated with, vehicles or other objects. For example, the roadside sensors 134 can capture images, videos, or sounds associated with both the vehicle 102 and the anomalous object 110. A roadside agent 138 can use roadside sensor data to execute machine-learning operations on one or more processors of a roadside computing device. For example, the roadside agent 138 might classify moving objects, distinguishing among pedestrians, vehicles, and wildlife, based on a video stream. Some embodiments may also correlate geolocation data from vehicles with the known locations of roadside devices to aid in classifying events that correspond with unusual vehicle behavior. For example, the roadside agent 138 could analyze the video stream captured by the roadside sensors 134 together with vehicle geolocation data from the vehicle 102 in order to identify the anomalous object 110.

In some instances, the vehicle computing device 106 may first label the anomalous object 110, and the local computing data center 122 may then label the object 110 based on a visual recording taken by the roadside sensors 134. The local computing data center 122 may recognize that the geolocations recorded by the roadside sensors are similar to the geolocations of the vehicle 102 and can analyze the detected features, comparing them based on characterized shapes, estimated sizes, colors, and other categorical or quantifiable values. In some cases, the application can determine that the features are physically linked (e.g., parts of a larger component, different views of the same object, etc.).

The roadside device 130 can send data to the local computing layer using methods similar to those used by the vehicle 102. For example, the roadside device 130 can transmit roadside data, or results based on roadside data, to the base station 120 via a wireless data session 139. Some embodiments allow the roadside device 130 to send and receive data over a wired communications network connected to the local computing data center 122. Alternatively, or additionally, the roadside device may in certain embodiments transmit roadside data to a cloud-computing application 140, which is part of the top-view computing layer.

Data from the vehicle computing layer may be transmitted via the base station 120 to a local application executing at the local computing data center 122 of the local computing layer. In some cases, the local application may include instructions to run a local computing agent 128 at the local computing data center 122, and the local computing agent 128 may analyze the data. Some embodiments transmit at least a portion of the sensor data via a standardized interface, such as Sensoris, alone or in combination with other interface standards. Results from the vehicle computing layer can be sent to the local computing layer using various wireless protocols and devices, as described further below.

In some embodiments, the data center 122 may acquire a first visual recording of the second vehicle 192 captured by the vehicle sensors 104 and perform a machine-learning operation on that first visual recording to identify or record features of the second vehicle 192; optical character recognition (OCR) or other computer-vision operations may be used in some embodiments to identify features such as license plates, vehicle shapes, and colors. These features can then be mapped to a third vehicle profile. Some embodiments may acquire a second visual recording of the second vehicle 192, such as one captured by the roadside sensors 134, and use another machine-learning operation to extract features and map them to a fourth vehicle profile. A local application running at the data center 122 might recognize similarities in the geolocations of the vehicle at the times of capture and compare the third vehicle profile to the fourth vehicle profile using their mapped features. Upon finding similar or identical mapped attributes, the local application might determine that the third and fourth vehicle profiles correspond to the same second vehicle 192 and merge the profiles. In some embodiments, a vehicle profile corresponding to the second vehicle may be marked as stolen or dangerous, and a warning may be presented in the interior of the first vehicle 102 to alert occupants to the risk associated with the second vehicle 192.
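The matching-and-merging step might be sketched as follows; the profile fields, the plate-plus-proximity matching rule, and the thresholds are hypothetical simplifications, not the disclosure's method:

```python
def merge_profiles(p3, p4):
    """Merge two vehicle profiles (dicts) when their mapped features match
    (license plate here) and their capture geolocations are close.
    Returns the merged profile, or None if they don't match."""
    same_plate = p3.get("plate") is not None and p3.get("plate") == p4.get("plate")
    close = (abs(p3["lat"] - p4["lat"]) < 1e-3
             and abs(p3["lon"] - p4["lon"]) < 1e-3)
    if not (same_plate and close):
        return None
    merged = {**p4, **{k: v for k, v in p3.items() if v is not None}}
    merged["flags"] = sorted(set(p3.get("flags", [])) | set(p4.get("flags", [])))
    return merged

# Third profile from the vehicle sensors, fourth from the roadside sensors.
third = {"plate": "ABC123", "color": "red", "lat": 37.1001, "lon": -121.9000,
         "flags": ["speeding"]}
fourth = {"plate": "ABC123", "shape": "sedan", "lat": 37.1003, "lon": -121.8996,
          "flags": ["stolen"]}
print(merge_profiles(third, fourth))
```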

Some embodiments can process large amounts of data, which may entail high processing speed, memory, and processing-complexity requirements. In some cases, more than 100; 1,000; 10 million; or 100 million vehicles may report data to a data center in a local computing layer or to top-view applications in a top-view computing layer. Some embodiments may have more than one, five, ten, or fifty sensors per vehicle, and more than five, ten, or twenty different types of sensors on each vehicle. In some cases, multiple instances of the same sensor type can be used to monitor the vehicle and its environment. Some embodiments may have heterogeneous vehicles within a vehicle computing layer: the layer could contain more than 10, 20, or 50 types of vehicles, and may include vehicles from more than two, five, or ten model years. Alternatively, the vehicles could include more than 5%, 10%, or 20% of a public vehicle fleet such as that of the US. In some cases, vehicles may travel over areas greater than 100 km or 10,000 km across, such as an entire country, while reporting data to the multilayer vehicle learning system. Some embodiments use concurrent processing frameworks such as Apache Spark or Apache Storm to process large amounts of data. Some embodiments can also use data processing platforms such as Google BigQuery, DataTorrent RTS, or Apache Hydra.

“Some embodiments may use deep learning systems, such as deep neural networks, deep belief networks, convolutional neural networks (CNNs), mixed neural networks, and other deep learning systems. Deep learning systems can be used for various purposes, including adjusting vehicles in the vehicle computing layer or transmitting results to other computing devices at the local computing layer or the top-view computing layer. These results may include a quantitative result, a category label, a cluster, or any other output from a neural network’s output layer. The results may also include parameters and weights used in, or generated during, the neural network’s training operations.

While deep learning systems can be used on vehicle computing devices within the vehicle computing layer, the computing layers described here may reduce the need to perform computationally expensive operations on devices at that layer. The local computing data center 122 may have more computing resources than the onboard computing device 106, which may allow the local computing layer to produce faster and more precise computations. The results of the local computing layer might include control-system adjustment values that can be pushed back to the vehicles. These control-system adjustment values can be used to adjust a vehicle’s physical response to user control and automated control. For example, a control-system adjustment value can modify the vehicle’s response to an operator-controlled action, such as turning the steering wheel, activating the headlights or a turn signal, activating the cruise control system, using the antilock braking system (ABS), or stepping on the accelerator. Such values include vehicle braking responsiveness, vehicle acceleration responsiveness, and steering wheel responsiveness when operating under cruise control or operator-controlled driving. This may prove useful in reducing accidents or other harmful incidents.

“The local computing agent 128 may update the vehicle profile for the vehicle 102 using computations made by the local computing agent 128 that take into account data from other vehicles. For example, the local computing agent 128 might update a vehicle profile to add a tag indicating that the vehicle 102 drives an average of 10% faster along the road 160 than the median speed of all other vehicles along the road 160. A similar update may be made to an operator profile. Some embodiments can access the attributes of locations where a vehicle travels and use them to update its vehicle profile. For example, a vehicle profile might indicate greater accumulated degradation when the vehicle 102 travels through an area with rough roads, based on sensor data from multiple vehicles. As another example, the vehicle profile of the vehicle 102 might be updated to increase its wear value after a determination that the vehicle 102 has traveled to a location with substances known to corrode vehicle components (e.g., salt, water-salt mixtures, acids, etc.).”
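The speed-tag update above can be illustrated with a short sketch. The function name, profile layout, and 10% margin are assumptions for this example, not part of the disclosure.

```python
# Hedged sketch of the profile update described above: tag a vehicle whose
# average speed on a road exceeds the fleet median by a margin.
from statistics import median

def update_speed_tag(profile, vehicle_speeds, fleet_speeds, road_id, margin=0.10):
    """Add a tag when the vehicle's mean speed beats the fleet median by `margin`."""
    fleet_median = median(fleet_speeds)
    vehicle_mean = sum(vehicle_speeds) / len(vehicle_speeds)
    ratio = vehicle_mean / fleet_median - 1.0
    if ratio >= margin:
        profile.setdefault("tags", []).append(
            {"road": road_id, "faster_than_median_by": round(ratio, 2)})
    return profile
```

The same pattern would apply to operator profiles, with speeds keyed by operator rather than by vehicle.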

“In some cases, vehicle profiles can be accessed to determine insurance risk or to schedule preventative maintenance. Some embodiments allow self-driving algorithms to be modified to reflect vehicle profiles. For example, a self-driving algorithm may be programmed to reduce aggressive acceleration, cornering, or deceleration in response to wear on the vehicle or to higher road risk, such as at intersections, corners, exits, or on-ramps. The multilayer vehicle learning platform may also perform machine-learning operations to assess the riskiness and other performance characteristics of autonomous driving systems.

“Results of the local computing layer computed with the local computing data center 122 may be sent to a top-view cloud computing application 140. The top-view computing layer may have one or more processors that execute a top-view computing agent 148. In some cases, the top-view computing agent 148 could act as an overriding agent that communicates directly with local computing agents and vehicle agents without intermediary communication between layers. The top-view computing agent 148 may perform the tasks and algorithms described above, and can also perform region-wide analysis using data from the local computing layers. For example, the top-view computing agent 148 can predict outcome probabilities for whole regions of vehicle operators or entire vehicle populations, and can determine risk values for specific regions, vehicle types, and operator profiles. These risk values can be correlated with specific incidents or vehicle designs in order to identify opportunities for infrastructure optimization and vehicle design optimization.

“Some embodiments can be set up to create a location profile using data from individual vehicles from either the local computing layer or the vehicle computing layer. One or more top-view computing agents may execute machine-learning operations based on data from multiple vehicles and data centers. For example, the top-view computing agent 148 might classify road risks based on a collection of vehicle profiles and event logs. The top-view computing agent 148 can also determine that a road is in a school area and is congested during school hours, using a trained neural network that takes vehicle speed histories and image data as inputs. The top-view computing agent 148 can then raise a road risk value for roads in areas that are congested during school hours.

“Some embodiments may link a behavior, an operator profile, or a vehicle profile to a risk value. For example, the top-view computing agent 148 might train or use machine-learning systems to determine the relationship between accident risk and vehicle speed. Some embodiments can also determine a risk value associated with a vehicle’s starting point, destination, operator, vehicle type, or specific vehicle. Some embodiments may assign a risk value to a specific vehicle driven by a particular operator based on its predicted destination. Alternatively, or in addition, certain embodiments can determine or update a specific risk value based upon two-part relationships between vehicle operators and vehicles, or based upon three-part relationships among a vehicle operator, a vehicle, and a route, where these relationships are determined from a portion of a road network graph.

“In some embodiments, one or more processors of the top-view computing layer may track, merge, or associate a plurality of vehicle profiles and operator profiles. In some embodiments, the number of passengers carried by the vehicles may be greater than the number of vehicles, and vehicle operators and passengers other than the vehicle operator may occupy different vehicles at different times. An agent executing on the top-view computing layer or the local computing layer may calculate a probability associated with each vehicle and user pair.

“FIG. 2 illustrates a computing environment 200 that may include various learning infrastructures in accordance with the present techniques. The computing environment 200 may implement some or all of the techniques described above. The vehicles 211-213 each have an onboard computer 215-217, a set of sensors 221-223, and a wireless network interface controller 225-227. Each of the onboard computers 215-217 can process data collected by the sensors 221-223 and transmit it wirelessly to a vehicle data analytics system 250. Third-party vehicle data providers 231-233 can also provide additional data corresponding to one or more of the vehicles 211-213. A user may have access to network resources 245-247 while occupying one or more of the vehicles 211-213, and data derived from these network resources 245-247 can also be used as inputs to machine-learning operations of the vehicle data analytics system 250.

“Some embodiments could include methods to link operator records with user profiles or other information from an online source. In some embodiments, vehicle occupants or operators may carry various networked computing devices near or inside their vehicle. FIG. 2 shows that a networked computing device could be any of the mobile computing devices 241-243, each of which may correspond to an operator or passenger in one of the vehicles 211-213. A mobile computing device may include a smartphone, tablet, laptop computer, and the like. Another example of a networked computing device is a wearable device such as a smartwatch, a head-mounted display, or a fitness tracker. These computing devices can include processors, memory, and an operating system. Some embodiments may also use data from these networked computing devices to aid in machine-learning operations.

“In some instances, the computing environment 200 may include the vehicle data analytics system 250, which can receive data from any of these components. The vehicle data analytics system 250 can be executed on one server of a local computing layer or on multiple servers of a distributed computing network as part of the top-view computing layer. The vehicle data analytics system 250 may be used to store and determine attributes of vehicle operators, passengers, and places visited. FIG. 2 shows that the vehicle data analytics system 250 may include a user profile repository 251, a vehicle profile repository 252, and a geographic information system (GIS) repository 253. The user profile repository 251 can store attributes about users, such as vehicle passengers or vehicle operators; for example, it could store operator profiles that include average driving speeds. The vehicle profile repository 252 can store attributes about vehicles; for example, it might store one or more vehicle profiles that include a list of near-collisions and collisions for each vehicle. The GIS repository 253 may store one or more road network graphs, and can also include attributes of roads, specific vehicle paths, and other relevant information. For example, the GIS repository 253 might store a road network graph and traffic information corresponding to specific road segments of the graph.

“FIG. 3 illustrates a process 300 that may use components of FIG. 1 or FIG. 2 to compute vehicle results in accordance with some embodiments. The process 300 shows operations that use a multilayer vehicle-learning infrastructure to adjust vehicle operations and affect vehicle performance. In some embodiments, the multilayer vehicle learning infrastructure might execute one or more routines within the computing environments 100 and 200. Some embodiments allow the operations of the process 300 to be accomplished by running program code stored on one or more instances of a machine-readable, non-transitory medium. In some cases, this may include different instructions being stored on different physical embodiments of the medium and executing the different subsets of instructions using different processors, as described herein.”

“In some embodiments, the process 300 may include receiving sensor data from vehicle sensors, as indicated by block 302. This may include data from the vehicle sensors 104, which gather information about the vehicle 102 as well as the environment around the vehicle 102. These sensors can collect various types of sensor information, including motion-based sensor data and control-system sensor data. One or more sensors attached to a vehicle may capture images, videos, LIDAR measurements, audio measurements, or airbag deployment events. For example, a vehicle sensor might obtain vehicle geolocations and control-system data, such as steering and ABS positions, along with vehicle proximity data in the form of LIDAR data, every 50 milliseconds during a trip.

“Some embodiments can also refine sensor data to implement an active learning approach. For example, after detecting a potentially dangerous behavior such as a swerve event, a vehicle might display a query to the operator, such as a question about whether a collision was avoided or whether LIDAR systems provided appropriate warnings. The query response can be sent to a local application executing at the local computing layer and integrated into an active learning process for machine-learning applications.

“Some embodiments may determine derived sensor data from unprocessed sensor data using one or more processors in the vehicle computing layer, as indicated by block 304. As the vehicle 102 of FIG. 1 moves along a route, digital or numerical data may be collected from vehicle sensors and analyzed to determine the derived sensor data. As one example, a LIDAR-adapted application executing on one or more processors may receive a numerical array representing LIDAR readings and create an event record indicating that a human-shaped object was within a threshold distance of the vehicle. In some embodiments, any combination of sensor data and derived sensor data may be used to create additional derived sensor data.

As indicated by block 308, some embodiments may use one or more privacy functions to protect sensitive data. Sensitive data could include information about a vehicle operator that may be used for identification purposes, as well as proprietary or confidential vehicle data, sensor data, or other data about a vehicle operator. Some embodiments apply privacy operations that may include encrypting, modifying, or removing certain types of data from sensor data or other data about vehicles, vehicle operators, passengers, and objects in the vehicle. One or more privacy functions may be called in some embodiments to increase the resistance of vehicles to exploitation by malicious entities.

Some privacy functions can delete identifiable data after it has been used. For example, privacy functions can erase data such as audio and video recordings of vehicle occupants, images of vehicle passengers, biometric data, names, identification numbers, gender labels, or age identifiers. The privacy function can be called to modify or delete a video recording, for example, if it shows the operator facing the windshield while driving.

“Some embodiments may inject noise into sensor data (or data derived or otherwise inferred from it) by using a noise injection mask or by performing a differential privacy operation. Privacy-driven noise terms can alter sensor data such as acceleration logs, destination coordinates, or vehicle operator appearance. An application may determine the value of a privacy-driven noise term by sampling from a probability distribution, such that the same probability distribution or its defining characteristics can be used in computations at a layer other than the vehicle computing layer. Examples of probability distributions include the Gaussian distribution, the Laplace distribution, the uniform distribution, and others. Some embodiments use noise injection to alter the sensor data, or to transform sensor data results from an original set of values into a masked set of values, which can reduce the likelihood of tracking specific data back to a specific vehicle or operator. Each vehicle that sends data may apply a different amount of noise than other vehicles. A federated learning (FL) method may also be used with noise injection operations, as described further below.
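The Laplace-based variant of the noise injection described above can be sketched directly. This is a minimal illustration of the standard Laplace mechanism, not the disclosure's implementation; the `sensitivity` and `epsilon` parameters are the usual differential-privacy knobs and are assumptions here.

```python
# Minimal sketch of the noise-injection idea: perturb scalar sensor readings
# with Laplace noise whose scale is derived from shared parameters, so a
# downstream layer can reason about the same distribution.
import math
import random

def laplace_noise(scale, rng=random):
    """Sample Laplace(0, scale) via inverse transform sampling."""
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1 - 2 * abs(u))

def mask_readings(readings, sensitivity, epsilon, rng=random):
    """Differentially private masking: noise scale = sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return [r + laplace_noise(scale, rng) for r in readings]
```

Smaller `epsilon` means more noise and stronger privacy; each vehicle could use its own `epsilon` to apply a different amount of noise than other vehicles, as the text notes.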

“Some embodiments may use machine-learning encryption methods. Example encryption methods may include partially homomorphic encryption methods based on cryptosystems such as Rivest-Shamir-Adleman (RSA), Paillier, ElGamal, and the like. Examples of encryption methods could also include fully homomorphic encryption techniques based on cryptosystems such as Gentry, NTRU, Gentry-Sahai-Waters (GSW), FHEW, or TFHE. Some embodiments may use a fully homomorphic encryption technique to increase the adaptability and robustness of machine-learning operations, as well as to increase resistance to attacks by quantum computing systems.

Some embodiments can execute a smart contract to receive a request for data about a vehicle or an operator profile and determine whether the request is authorized. Some embodiments can determine whether a request is authorized by checking whether it is associated with (e.g., contains) a cryptographically signed value that has been signed either by an individual that corresponds to the data or by a delegate. Some embodiments can determine whether the cryptographic signature is associated with the individual or the delegate using a public cryptographic key of the individual or delegate. Once the keys are confirmed to be compatible, some embodiments can send a record that allows access to the requested data. This may include a cryptographic key that may allow the data to be decrypted, or the data itself. Some embodiments may send a cryptographic key or data encrypted with the public cryptographic key of the requestor, which allows the requester to access the values in the ciphertext using their private cryptographic key. Multiparty signatures may be required in some embodiments to gain access to data. For example, some smart contracts may require both the person described in the data and the provider of the data (such as a fleet owner or car maker) to approve a request to gain access to secured data.

“Some embodiments could include performing a vehicle-layer machine-learning operation executed on the vehicle computing layer to generate vehicle computing layer results based upon the sensor data, as indicated by block 312. Some embodiments may use an agent executing on the vehicle computing layer’s processors. The agent may implement a machine-learning method to perform dimension- or feature-reducing tasks, such as by using a neural network in the form of an autoencoder, wherein an autoencoder is a neural network system comprising an encoder layer and a decoder layer, and the number of nodes in the input layer of an autoencoder is equal to the number of nodes in the output layer. One or more processors can train the agent’s autoencoder on unprocessed LIDAR data, and the trained autoencoder can convert an unprocessed LIDAR dataset into a form that is more tractable and requires less data. Some embodiments may use these reduced parameters to create an efficient ontology from the sensor data. For example, some embodiments may recognize that an autoencoder reduces data from 20 sensors such that 95% of the values come from five sensors, and may provide instructions to transmit only data from those five sensors. Additional sensors can be attached to vehicles to provide additional sensor data, and the agent may then use the additional sensor data to generate results or perform additional training operations to retrain machine-learning parameters and weights.

“In some embodiments, the vehicle-layer machine-learning operation may include a training operation using the sensor data, wherein the training operation can include a supervised learning operation based on an objective function. The supervised learning operation may treat the sensor data as inputs in order to minimize an objective function, where the objective function is based on differences between one or more outputs of a machine-learning program and known values in a training dataset. One example is to train a vehicle-layer machine-learning system that uses vehicle proximity data to predict user reaction times, which may be done using gradient descent to optimize the objective function. In this example, the objective function may be the difference between measured reaction times and the predicted reaction times generated by the machine-learning program. Supervised learning may also be used to predict other values, such as braking duration and maximum steering-wheel angle during turns made by vehicle operators.

“Alternatively, or additionally, the vehicle-layer machine-learning operation may include using a trained or unsupervised machine-learning system to determine an output. For example, some embodiments use a trained neural network to predict the arrival time and duration of a braking event based on the detection of an object. Alternatively, or in addition, an unsupervised machine-learning system may be used. For example, a deep belief neural network, which may include unsupervised networks such as autoencoders and restricted Boltzmann machines, may be used to analyze sensor data such as vehicle proximity and braking information. The outputs can then be divided into data clusters, and further operations can classify these data clusters as expected braking events or unexpected braking events.

“Whether a vehicle-layer machine-learning operation is classified as a training operation or an operation to use a trained system may depend on whether a training mode has been activated. Some embodiments allow multiple machine-learning operations to occur simultaneously, where at least one of the machine-learning operations is a training operation and at least one other is an operation to use the trained system.

“In some embodiments, the vehicle-layer machine-learning operation may determine a sensor inconsistency based on a machine-learning-driven analysis of sensor data. For example, a neural network can be trained to run self-diagnostic tests using idle vehicle behavior and driving behavior in order to detect defective sensors that self-report as not defective. A driver might swerve to avoid an object that the LIDAR sensor failed to detect. A vehicle computing device might first detect a swerve event based on the steering wheel’s angular velocity, and then use a machine-learning system operating on sensor data (e.g., turn signal inputs and steering wheel angular velocity) to determine whether a collision avoidance event has occurred. The application may then ask the driver to confirm that the collision avoidance event occurred. Once the collision avoidance event is confirmed, the vehicle application can perform various tasks, including instructing other machine-learning systems to retrain without using LIDAR sensor input, warning the vehicle operator that the LIDAR device may be defective, and transmitting instructions to a local computing layer or a top-view computing layer to ignore the LIDAR data.
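A rule-based reduction of the swerve-then-classify pipeline above can be sketched as follows. The thresholds and the "no turn signal implies collision avoidance" heuristic are assumptions standing in for the machine-learning system the text describes.

```python
# Hedged sketch of the swerve-detection step: flag a swerve when steering
# angular velocity spikes, then treat a swerve without an active turn signal
# as a candidate collision-avoidance event. Thresholds are illustrative.
def detect_swerve(angular_velocities, threshold_deg_s=180.0):
    """Return indices where |steering angular velocity| exceeds the threshold."""
    return [i for i, w in enumerate(angular_velocities) if abs(w) > threshold_deg_s]

def classify_collision_avoidance(swerve_indices, turn_signal_active):
    """A swerve without an active turn signal is a collision-avoidance candidate."""
    return [i for i in swerve_indices if not turn_signal_active[i]]
```

Candidates flagged this way would then be confirmed by the operator query described above before triggering retraining or LIDAR-defect warnings.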

“The results of a machine-learning operation may include both the outputs and the state values of the operation, as described further below. For example, the machine-learning results may include a predicted acceleration pattern as well as the perceptron biases or weights used in the neural network. These machine-learning data can be used for further processing or sent to a local computing layer. In some embodiments, the machine-learning operation includes training a neural network for a task in which one or more training classes are generated based on an operator’s response to an active learning query. This machine-learning operation can be done in conjunction with the autoencoder operation described above.

“In some instances, the vehicle computing device can determine a baseline threshold region for a vehicle using data from its set of sensors. The baseline threshold region may be represented by a specific set of values, weighted sums, or weighted products. For example, the baseline threshold region may include a combination of a velocity range, a temperature range, and a function dependent upon the velocity. The baseline threshold region may be defined over the measurement space of the sensors and may represent normal, non-anomalous vehicle or sensor function. In some embodiments, anomalous sensor measurements may be detected by determining that the baseline threshold region has been exceeded. Some embodiments might identify defective sensors after reviewing sensor history and using data triangulation, which allows compensation for defects or failures in one of the sensors. For example, some embodiments might determine that an engine temperature sensor has failed after its measurements exceed the baseline threshold region. In response, some embodiments may display a message to indicate faultiness or send instructions to the local computing layer, or directly to the top-view computing layer, to delete data from the temperature sensor.
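The baseline-threshold-region check above, including a bound on one sensor that depends on another reading, can be sketched in a few lines. All ranges, field names, and the velocity-dependent temperature bound are invented for the example.

```python
# Sketch of the baseline-threshold-region check: each sensor has a normal
# range, and one range depends on another reading (a velocity-dependent
# engine-temperature bound). All numbers are illustrative.
def in_baseline_region(reading):
    """Return True when the reading falls inside the baseline threshold region."""
    ok_velocity = 0.0 <= reading["velocity_kph"] <= 200.0
    # velocity-dependent bound: allow a higher engine temperature at speed
    temp_max = 95.0 + 0.1 * reading["velocity_kph"]
    ok_temp = 20.0 <= reading["engine_temp_c"] <= temp_max
    return ok_velocity and ok_temp

def flag_anomalies(readings):
    """Indices of readings outside the baseline threshold region."""
    return [i for i, r in enumerate(readings) if not in_baseline_region(r)]
```

A run of flagged indices for the same sensor, cross-checked against other sensors (the text's "data triangulation"), is what would mark that sensor as failed.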

“In some embodiments, an agent may use machine learning to analyze sensor data and produce results used to adjust a vehicle’s control system. The results can also be transmitted to a base station or to a server operating at a data center, cloud computing service, or serverless application. For example, the agent might include algorithms that detect an angular acceleration and classify the event as a swerve rather than a change of lanes. The agent may create an event record identifying the category of the angular acceleration event, and the event can then be associated with an event time and a vehicle profile, where the vehicle profile includes the event record in an event log. Other event records could include a pothole avoidance, a pothole strike, or a hard acceleration, each with an event time, and the vehicle profile also includes these in the event log. The results of agents from individual vehicles can be combined with other features of a road network graph, where a feature can be represented as one or more indicators or values that indicate a road status. Some or all of the results can be processed with an encryption method similar or identical to those described for block 308.

“In some embodiments, the process 300 may include updating a vehicle profile or an operator profile based upon the vehicle computing layer results, as indicated by block 316. Some embodiments may update the vehicle’s profile by recording discrete events in an event log or intervals in an event stream, which could include a set of timestamped events, other data, or any combination of these types of entries. For example, some embodiments might record an entry in the event log with the label “bicycle,” an angle value of 30, and a distance value of 0.15. Some embodiments can record static values in a vehicle profile, such as a vehicle identification number, engine type, vehicle type, and the like. For example, some embodiments might record in the vehicle profile of the vehicle 102 that the vehicle is an electric vehicle.

“Some embodiments may update a vehicle operator profile based on the operator’s use of different vehicles and other information. Some embodiments may call one or more privacy functions to anonymize, inject noise into, or otherwise obfuscate vehicle operator information. In certain cases, such as when privacy functions are not used to conceal the identity of a vehicle operator, some embodiments can generate an operator profile and a corresponding operator profile identifier. Some embodiments may use driving patterns to determine the operator profile, and may associate more than one operator profile with a vehicle. For example, some embodiments may create and characterize an operator profile based upon operator metrics such as average or maximum vehicle speed, average or range of vehicle acceleration, average or range of vehicle deceleration, and average or range of vehicle cornering speeds, and may create a second operator profile based on detected differences in one or more of these operator metrics.

“Some embodiments may also merge or link different operator profiles based on a mutual association with the same vehicle or vehicle profile. For example, some embodiments may merge a first operator profile with a second operator profile upon determining that the two profiles list the same vehicle in their vehicle history entries and share events recorded in their event logs. Some or all profiles can be processed with an encryption method similar or identical to that described for block 308 above.
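The merge rule just described (same vehicle in both histories, shared logged events) can be sketched directly. The profile layout here is an assumption for the example.

```python
# Illustrative merge of two operator profiles that share a vehicle in their
# vehicle histories and share recorded events. The profile structure is assumed.
def should_merge(op_a, op_b):
    """True when the profiles share at least one vehicle and one logged event."""
    shared_vehicle = bool(set(op_a["vehicle_history"]) & set(op_b["vehicle_history"]))
    shared_events = bool(set(op_a["event_log"]) & set(op_b["event_log"]))
    return shared_vehicle and shared_events

def merge_profiles(op_a, op_b):
    """Return the union profile when the merge criteria are met, else None."""
    if not should_merge(op_a, op_b):
        return None
    return {
        "ids": sorted(set(op_a["ids"]) | set(op_b["ids"])),
        "vehicle_history": sorted(set(op_a["vehicle_history"]) | set(op_b["vehicle_history"])),
        "event_log": sorted(set(op_a["event_log"]) | set(op_b["event_log"])),
    }
```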

“In some embodiments, the process 300 may include obtaining roadside sensor data using roadside sensors, as indicated by block 318. This may include using the roadside sensors 134 shown in FIG. 1 to collect roadside data. Roadside sensors can be used to identify and track vehicles, provide information about roads, and provide additional data about road features. Similar to the operations of blocks 304 and 316 for vehicle sensor data, roadside sensor data can be encrypted, reduced in dimension, and analyzed to provide additional data to the local computing layer for machine-learning operations. For example, one or more devices of the local computing layer may examine a visual record to identify and label roadside features.

Roadside sensors such as traffic cameras and microphones may report data that can be associated with a vehicle. The association may be based on a correlation of GPS data, optical character recognition (OCR) of license plates, wireless beacons with identifiers transmitted to and received by roadside sensors, or other methods. In some cases, roadside sensors can detect vehicles having a pre-existing vehicle profile, and some embodiments can determine whether a vehicle detected on the road is identical to the vehicle with the pre-existing profile based on known geolocation data, visual data, and audio data. For example, some embodiments might detect the signal from a wireless tire pressure gauge at a traffic light and use OCR to determine the license plate sequence of that vehicle. Some embodiments may identify this signal as belonging to a first vehicle or recognize a matching license plate sequence, as described further below, and may update the vehicle profile in response to the signal, which allows tracking the presence of vehicles within close proximity of roadside sensors.

“In some embodiments, the process 300 may include sending vehicle-layer results or roadside sensor results to the local computing layer, as indicated by block 320. Some embodiments may send these results wirelessly to a local data center via a wireless base station, and data can also be sent to the local computing layer via a wired connection to the local computing data center. Local-layer machine-learning operations can then be performed by one or more processors accessible to the local computing layer. These operations may use more resource-intensive machine-learning methods than those of the vehicle computing layer.

“One or more local computing layer data centers or other computing devices may ingest any type of sensor data, results computed from sensor data, or other data from a plurality of vehicles in the vehicle computing layer for machine-learning operations and other computations. The local computing layer devices may also ingest data from roadside sensors for machine-learning operations and other computations. For example, a data center at the local computing layer might receive a set of perceptron weights, machine-learning outputs, and geolocation data from a first training operation at the vehicle computing layer for a first vehicle, and a corresponding set of perceptron weights, machine-learning outputs, and geolocation data from a second training operation for a second vehicle. Some embodiments allow vehicle-layer or other results to be sent directly to other layers, with or without the assistance of a local computing layer. For example, vehicle-layer data may be sent directly to a higher layer without having to pass through the local computing layer.

Block 322 indicates that some embodiments can include determining a road network graph using GPS data or geolocations from vehicle sensors. One embodiment may use an application running on the local computing layer to determine and use a road network graph. The road network graph can include a set of road segments. The road network graph may be acquired by a data center at the local computing layer by selecting it from an external road-graph repository, such as an API for Here Maps. Some embodiments may, for example, create a boundary region by linking geolocations to locations within a range of 1 m; the boundary region can then be used to select the nodes and links within the region that form the road graph.
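A minimal sketch of deriving a road graph from vehicle geolocation traces, assuming coordinates are already projected to meters. The 1 m linking radius follows the passage above; the snapping and edge-creation logic is illustrative:

```python
import math

SNAP_RADIUS_M = 1.0  # linking radius from the disclosure

def snap(nodes, point):
    """Return an existing node within the snap radius, or add the point as a new node."""
    for node in nodes:
        if math.dist(node, point) <= SNAP_RADIUS_M:
            return node
    nodes.append(point)
    return point

def build_graph(traces):
    """Build (nodes, directed segments) from per-vehicle geolocation traces."""
    nodes, edges = [], set()
    for trace in traces:
        prev = None
        for point in trace:
            node = snap(nodes, point)
            if prev is not None and prev != node:
                edges.add((prev, node))  # a road segment between two nodes
            prev = node
    return nodes, edges

# Two vehicles driving the same street produce shared nodes and segments.
traces = [
    [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)],
    [(0.2, 0.1), (10.3, 0.0), (20.1, 0.2)],
]
nodes, edges = build_graph(traces)
```

Externally provided graphs (e.g., from a map API) could be merged with the vehicle-derived nodes in the same structure.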

“Alternatively or additionally, some embodiments may track vehicles’ geolocations and connect them to create a road network graph. One or more road network graphs can be created by combining externally provided data with vehicle-provided GPS coordinates. Some embodiments may use parts of the road graph to create sub-regions that include a number of road network nodes and segments, allowing more detailed analysis of the specific routes in the road network traveled by one or more vehicles. Some embodiments may use road network shapes, traffic volumes, segment lengths, or other data to aid the machine-learning operations. A road network graph may also be generated by any of the computing devices in the top-view computing layer or the vehicle computing layer.

“Some embodiments may use one or more machine-learning systems to classify regions of the road network graph. Some embodiments may use or train a decision tree, classification tree, classification neural network, or the like to classify sections of road in a road system during specific periods of time as high risk or low risk. Other classifiers can also be used to classify road segments in a road graph, for example segments in a school zone, segments with a high prevalence of drunk drivers, or segments associated with weather events such as rain. These classifications can be linked to or included in a road network graph in some embodiments. A high-risk region may be designated or indicated in some embodiments as a candidate location for an additional roadside device, such as a traffic-light sensor, security sensor, or other roadside sensor.
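As a stand-in for the trained classifiers described above, a hand-written two-level decision rule can illustrate classifying a road segment during a time window as high or low risk. The feature names and thresholds are illustrative assumptions, not values from the disclosure:

```python
# Illustrative decision rule over road-segment features; a trained
# decision tree or neural classifier would replace this in practice.

def classify_segment(features):
    """Classify a (segment, time window) pair as 'high' or 'low' risk."""
    # School zones during daytime hours are treated as high risk.
    if features["is_school_zone"] and 7 <= features["hour"] <= 17:
        return "high"
    # Rain combined with a history of collisions is treated as high risk.
    if features["raining"] and features["collision_rate"] > 0.5:
        return "high"
    return "low"

morning_school_zone = {
    "is_school_zone": True, "hour": 8, "raining": False, "collision_rate": 0.0,
}
dry_night_segment = {
    "is_school_zone": False, "hour": 22, "raining": False, "collision_rate": 0.1,
}
label = classify_segment(morning_school_zone)
```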

“Some embodiments may cluster classes of road segments in a road network graph, for example to account for attributes such as inclement weather, collisions, construction occurrences, and vehicle failures. Some embodiments may also search the vehicle computing layer to identify the on-incident vehicles involved in each of the clustered segments. Some embodiments may cluster classes of road segments across large geographic regions to identify specific events and attributes, which can help reduce the problem of data sparsity for low-probability events like traffic collisions. One or more of the neural networks described elsewhere may be periodically retrained based upon one or more clusters of road segments or their corresponding on-incident vehicles. The local-layer machine-learning operations described for block 324 may also include instructions for performing an additional training operation if the number of on-incident vehicles increases for clusters that contain events such as wildlife collisions, fatal accidents, heavy rain, and the like. Some embodiments may also generate a similarity score for a region based on the road graph of that region, allowing comparisons of city traffic-flow metrics like vehicle density, pedestrian density, and safety, as well as street geometry and collision events.
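The cross-region pooling of sparse events might look like the following sketch, where road segments are grouped by event class and a growing cluster triggers retraining. The event labels and the retraining threshold are illustrative:

```python
from collections import defaultdict

# Group road segments by event class so that rare events (e.g. wildlife
# collisions) are pooled across regions instead of analyzed per segment.

def cluster_by_event(segments):
    clusters = defaultdict(list)
    for seg in segments:
        for event in seg["events"]:
            clusters[event].append(seg["id"])
    return dict(clusters)

segments = [
    {"id": "A1", "events": ["wildlife_collision"]},
    {"id": "B7", "events": ["heavy_rain", "wildlife_collision"]},
    {"id": "C3", "events": ["heavy_rain"]},
]
clusters = cluster_by_event(segments)

# An additional training operation could be scheduled when a cluster's
# on-incident segment count reaches an (illustrative) threshold.
RETRAIN_THRESHOLD = 2
needs_retraining = sorted(
    event for event, segs in clusters.items() if len(segs) >= RETRAIN_THRESHOLD
)
```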

“Some embodiments could include performing one or more local-layer machine-learning operations executing on the local computing layer, based upon vehicle computing layer results and roadside sensor results, in order to determine one or more local computing layer results, as indicated at block 324. Some embodiments may include using an agent executing on the local computing layer’s processors to conduct a machine-learning training operation. Alternatively, or in addition to using an agent to train a machine-learning system, the local layer can use the agent to determine an output. These outputs may include, but are not limited to, a predicted operator or vehicle behavior, a labeled driver or vehicle activity, a risk value, and a control-system adjustment value. Local computing layer results may include one or both of the machine-learning system state and output values, which may be used to perform further operations or transmitted to other layers of the multilayer vehicle learning infrastructure.

“Some local-layer machine-learning operations might include training a neural network. Training the neural network may include training a convolutional neural network, such as a convolutional long short-term memory (LSTM) neural network. The LSTM can be used to classify or label one or more time-series data points across a specified vehicle set and then use these classifications to create a vehicle profile. A convolutional LSTM can be used to analyze vehicle video data and determine vehicle movement patterns based on vehicle driving conditions. A convolutional LSTM neural network can have a first CNN layer that contains regularized multilayer perceptrons, a second LSTM layer, and a dense layer at the output of the convolutional LSTM network. Each perceptron can be linked to a set of perceptrons in a CNN layer, and a CNN filter contains a vector of weights and a bias value for each perceptron. A CNN filter may span multiple layers. Many regularization options are available for a CNN, including introducing dropout, applying a DropConnect operation, stochastic pooling, using artificial (augmented) data, and adding a weight-decay term. Regularization may also be used to limit the magnitude of a CNN filter’s weight vector. These regularization schemes are possible for other neural networks at any level of a multilayer vehicle-learning infrastructure, as described further below.

“In some cases, an LSTM neural network may contain a number of cells, each of which may have a different set of perceptrons. An LSTM cell might have a first perceptron that acts as an input gate for receiving a value, a second perceptron that acts as a forget gate to determine how much of the value is still stored in the cell, and an output gate that determines an output value from the cell based on the value stored within it. Implementing the LSTM network, or features such as a forget gate, may assist with the interpretation of time-series data.
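The gating behavior described above can be sketched as a single scalar LSTM step. Real cells use separate learned parameters per gate and vector-valued states; the shared scalar weights here are a deliberate simplification for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w=1.0, u=1.0, b=0.0):
    """One scalar LSTM step. In practice each gate has its own
    learned weights; they are shared here only to keep the sketch short."""
    i = sigmoid(w * x + u * h_prev + b)          # input gate: admit new value
    f = sigmoid(w * x + u * h_prev + b)          # forget gate: how much is kept
    o = sigmoid(w * x + u * h_prev + b)          # output gate
    c_tilde = math.tanh(w * x + u * h_prev + b)  # candidate cell value
    c = f * c_prev + i * c_tilde                 # updated cell state
    h = o * math.tanh(c)                         # cell output
    return h, c

# Run a short time series (e.g. motion readings) through the cell.
h, c = 0.0, 0.0
for x in [0.5, -0.2, 0.8]:
    h, c = lstm_step(x, h, c)
```

The forget gate is what lets the cell retain or discard information across time steps, which is why it helps with time-series interpretation.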

“Some embodiments might include elements of attention mechanisms when implementing machine-learning methods executed on the local computing layer. Some embodiments can implement attention mechanisms using an agent that executes program code to run a transformer model for machine learning. A transformer model can include an encoder module, which may include a first multi-head self-attention layer as well as a first feed-forward layer. The multi-head self-attention layer can apply a softmax function to one or more vectors in order to generate a set of attention weights, where the vectors are based upon an input sequence and the attention weights are determined based on the individual elements of the sequence. The input sequence and attention weights can then be combined in the first feed-forward layer, allowing each event in the sequence to be weighed by its respective attention weight. A decoder section of the transformer can use the output of the feed-forward layer. The decoder may include additional multi-head self-attention layers with weights and values different from those of the first layer, and one or more feed-forward layers with weights and values different from the first feed-forward layer. The output of the decoder section of the transformer can be used for categorizing inputs or generating inferences. For example, if the input sequence is a series of time-stamped swerves, an agent may execute a neural network with an attention mechanism to determine whether the swerves in the interval are safe or risky based upon past and current vehicle operations. If a threshold number of risky swerves is exceeded, an agent can instruct the local computing layer to adjust at least one of the LIDAR warning range, steering-wheel responsiveness, or anti-lock brake system responsiveness.
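The softmax attention step above can be illustrated with single-head scaled dot-product attention over small lists. This is a sketch of one attention layer, not a full transformer:

```python
import math

def softmax(scores):
    """Numerically stable softmax producing weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Single-head scaled dot-product attention over 1-D vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)                      # one weight per sequence element
    out = [sum(w * v[i] for w, v in zip(weights, values))
           for i in range(len(values[0]))]        # weighted sum of values
    return out, weights

# Each element of the sequence (e.g. a time-stamped swerve feature vector)
# is weighed by its attention weight.
seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out, weights = attention([1.0, 0.0], seq, seq)
```

In a full transformer, several such heads run in parallel and their outputs feed the encoder's feed-forward layer.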

“Some embodiments may implement aspects of federated learning (FL) methods. A federating application executing on the top-view computing layer or local computing layer of the distributed hierarchical learning model may choose a subset of vehicles to be trained during the implementation of an FL method. The federating application can transmit a global model to one or more computing devices of each selected vehicle. Each selected vehicle may have one or more agents that perform training operations on the global model, based on the vehicle’s respective sensor data, sensor results, or other data. The vehicle-specific results may then be sent to the local computing layer (or top-view computing layer), where each vehicle-specific result may be encrypted so that it is not interpretable by other applications or agents.

“For instance, each selected vehicle may transmit its state values (e.g., gradients and perceptron weights) from a training operation to a data center at a local computing layer after encrypting the state values into a cryptographic hash. A number of applications, on either the top-view or local computing layers, may modify a global model using the results from the selected vehicles, and the modified global model can then be transmitted to the appropriate vehicles for further training. For example, a data center may be given a cryptographic hash of a weight Wa1 for perceptron A from vehicle 1 and a cryptographic hash of a weight Wa2 for perceptron A from vehicle 2, each corresponding to perceptron A of the global model that was pushed to vehicles 1 and 2. An agent operating on the data center may respond by computing a summary statistic Wa-avg using the cryptographic hashes of the weights Wa1 and Wa2. A summary statistic is a value that summarizes a data set and can include a median, mode, average, or other statistic. An application running on the top-view or local computing layers may update the global model such that perceptron A in the updated global model has the weight Wa-avg before any further iterations of the federated-learning process. An output based on Wa-avg (e.g., the updated global model) can be sent back to the applications running on the vehicle computing layer, which may then execute their respective decryption operations.
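The Wa-avg example can be sketched as a plain federated-averaging step. Encryption and hashing are omitted here; in the disclosure the reported values may be encrypted before aggregation:

```python
from statistics import mean

# Federated averaging: the data center combines per-vehicle weights for
# each perceptron into a summary statistic and writes it back into the
# global model before the next round.

def federated_average(reported):
    """reported maps perceptron name -> list of per-vehicle weights."""
    return {name: mean(weights) for name, weights in reported.items()}

wa1, wa2 = 0.30, 0.50                 # weight for perceptron A from vehicles 1 and 2
reported = {"A": [wa1, wa2]}

global_model = {"A": 0.0}
global_model.update(federated_average(reported))  # perceptron A now holds Wa-avg
```

Other summary statistics (median, mode) could replace `mean` without changing the surrounding flow.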

“In certain embodiments, operations such as sending weights or performing operations based upon the weights can be done without encryption. If weights are not encrypted, the calculated measurement(s), or outputs based upon the measurement(s), can be sent back and used by applications on the vehicle computing layer. Noise may also be introduced during the process; this may reduce the accuracy of the training operation but increase the privacy and security of the data provided to the FL training operations.

“Some embodiments may implement transfer learning between vehicle computing devices within the vehicle computing layer and between devices of other layers in the multilayer vehicle learning architecture. An agent may train one or more machine-learning systems to predict a task, and the stored weights, biases, and other data of these machine-learning systems can be used to initialize training for the next prediction task. Some embodiments allow weights, biases, and other values of a shared machine-learning model to be transferred between vehicles in the vehicle computing layer, between local computing devices, or between top-view computing applications in a top-view application layer. For example, a first vehicle could transfer the perceptron weights of a neural network to another vehicle.
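Weight transfer between vehicles might be sketched as copying stored layer parameters into a new model that is then trained for the next task. The layer names and values are illustrative:

```python
# Transfer learning sketch: vehicle 2 seeds its model with vehicle 1's
# stored weights and biases for shared layers, then trains its own head.

def transfer_weights(source_model, target_model, shared_layers):
    """Copy stored weights/biases for the layers both models share."""
    for layer in shared_layers:
        target_model[layer] = dict(source_model[layer])
    return target_model

vehicle1_model = {"conv1": {"w": 0.7, "b": 0.1}, "head": {"w": 0.2, "b": 0.0}}
vehicle2_model = {"conv1": {"w": 0.0, "b": 0.0}, "head": {"w": 0.0, "b": 0.0}}

# Vehicle 2 reuses vehicle 1's feature layer; its task head stays untouched
# and would be trained on vehicle 2's own data.
vehicle2_model = transfer_weights(vehicle1_model, vehicle2_model, ["conv1"])
```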

“Some embodiments may use transfer learning within a federated framework, which can be called federated transfer learning (FTL). A first vehicle computing device could train a learning system based on a first set of training data and then transfer a first set of encrypted results, including training gradients, to a second vehicle computing device. A second set of encrypted results, derived from a training operation using a second set of training data, may be stored on the second vehicle computing device. The first and second sets of training data might not have mutually exclusive features. The second vehicle can then transfer both the first and second sets of encrypted results to the data center at the local computing layer. The local computing layer’s data center may then update the global model using the training gradients and shared features from the first and second vehicles and send the updated global model back to the first and second vehicles.

“Some embodiments may employ multiple classifier systems, such as an ensemble-learning method. The ensemble-learning method can include combinations of different machine-learning methods, and some embodiments may transfer elements of one type of learning model to another. Some embodiments include operations to add a multi-head self-attention layer to the outputs of a recurrent neural network (RNN). In some embodiments, both the attention weights and the results can be used as inputs to additional layers in a machine-learning prediction or training operation.

Some embodiments include using one or more learning methods to predict future vehicle behavior based upon sensor data from multiple vehicles. In some embodiments, for example, a local computing layer might train and then use an ensemble machine-learning system that includes an attention model and a CNN. This ensemble system uses vehicle velocity data, vehicle geolocation data, and LIDAR-detected object data from multiple vehicles to predict whether a vehicle with a specific vehicle profile will be involved in an accident within a one-month period. Local computing layer results may include the internal state variables (e.g., biases and weights) and the outputs of the training and use operations.

“Some embodiments include training a machine-learning system or using a trained version of one or more of these machine-learning systems. Machine-learning systems can be used to predict and categorize vehicle behavior or operator behavior based on sensor data from multiple vehicles. In some embodiments, the data center at a local computing layer might train and then use a convolutional neural network with attention mechanisms to review inward-facing camera footage and determine whether a vehicle operator was looking out of the front window with both hands on the steering wheel. Local computing layer results may include the internal state variables (e.g., biases or weights) as well as the outputs of the trained convolutional neural networks.

“Some embodiments could include training and using one of the above learning methods to determine a risk associated with operating a vehicle or to adjust the vehicle to reduce that risk. The learning methods can be based on sensor data or other data available to the local computing layer, such as data stored in vehicle profiles, vehicle operator profiles, or road network graph data. One or more segments or nodes within a road network graph can be associated with a risk value representing the risk of a collision or other hazardous incident, and this risk value can be determined using a variety of methods. The relevant attributes of a place may include the average vehicle speed within that place, the average vehicle acceleration within that place, the average vehicle impulse within that place, and the average frequency with which vehicles engage brakes within that place. Some embodiments can be trained to increase the risk value in response to increased vehicle speeds and decreased average vehicle accelerations while traveling on a road segment.

“In addition, machine-learning systems may be trained so that a road segment is rated as higher risk due to the presence of cyclists, pedestrians, and other large moving objects capable of causing harm or injury in a collision with the vehicle. A vehicle’s proximity data may be used to determine the density of such objects, and an increase in density can result in an increase in the risk value. For example, some embodiments increase the risk value by 1% for each large moving entity detected by a vehicle’s LIDAR scanner within a 100-meter segment of road. A vehicle’s risk value may also increase depending on how fast the moving objects around it are traveling; for example, the risk value may rise if one or more cyclists travel faster than the vehicle on a given road segment. Similar methods can be applied to other transportation devices, such as battery-powered electric scooters and bicycles that users can rent through native applications. The providers of such services might supply geolocation data indicating the speed, density, and other attributes of their transportation devices, and some embodiments can determine risks for roads and other places based on this data.
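The 1%-per-entity rule above can be sketched as follows. The extra multiplier for a cyclist outpacing the vehicle is an illustrative assumption, standing in for whatever the trained system would learn:

```python
# Risk-value sketch: +1% per large moving entity detected by LIDAR
# within a 100 m road segment, plus an illustrative extra factor when
# a cyclist travels faster than the vehicle.

def segment_risk(base_risk, detections, vehicle_speed):
    risk = base_risk
    for d in detections:
        if d["kind"] in {"pedestrian", "cyclist"} and d["distance_m"] <= 100.0:
            risk *= 1.01                      # +1% per large moving entity
            if d["kind"] == "cyclist" and d["speed_mps"] > vehicle_speed:
                risk *= 1.05                  # hypothetical faster-cyclist factor
    return risk

detections = [
    {"kind": "pedestrian", "distance_m": 40.0, "speed_mps": 1.5},
    {"kind": "cyclist", "distance_m": 80.0, "speed_mps": 9.0},
]
risk = segment_risk(10.0, detections, vehicle_speed=7.0)
```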

“Some embodiments may compute one or more control-system adjustment values directly from known values regarding one or more vehicles, one or more vehicle operators, and one or more places through which the vehicles are traveling. Some embodiments can reduce the risk associated with driving a vehicle through certain places, represented as a road network graph, by applying one or more machine-learning operations. To train a machine-learning system, some embodiments might use an objective function based on vehicle geolocations and control-system data; the objective function may also include vehicle adjustment parameters that can be modified within a given parameter space. In some embodiments, the objective function can be modified so that the determined risk level does not depend on one or more of the road-risk elements, control-system data, or vehicle proximity data. Some embodiments may use information stored in a vehicle profile, or the corresponding operator profile, during training and then use a neural network to adjust a control-system value. For example, some embodiments may use machine-learning systems to determine an accelerator response rate, i.e., the rate at which the vehicle accelerates when an operator presses the accelerator pedal, based on a combination of inputs such as other moving objects, the force with which the operator presses the accelerator pedal, and the number of recorded accidents in the vicinity of the vehicle’s geolocation.

Summary for “Vehicle-data analytics”

“The following is an incomplete listing of aspects of the present techniques. These and other aspects are described in the disclosure.

“Some aspects include the processing and transferring of sensor data from vehicles across various computing levels in order to perform machine-learning operations that compute adjustments for vehicle control systems.”

Some aspects of the invention include obtaining a first set of geolocations and a first set of control-system data from a first vehicle, where the first vehicle includes one or more processors that execute a vehicle application as part of a vehicle computing layer, and the first set of control-system data includes data indicating the use of a control system. A top-view computing application executes on the top-view computing layer and receives second and third sets of geolocations from a region.

“Some aspects include a tangible, non-transitory, machine-readable medium storing instructions that, when executed by a data processing apparatus, cause the data processing apparatus to perform operations including the above-mentioned processes.”

“Some aspects include a system including: one or more processors; and memory storing instructions that, when executed by the processors, cause the processors to effectuate the operations described above.

“To address the problems discussed herein, the inventors had to both invent solutions and, in some cases, recognize problems overlooked or not yet foreseen by others in data analytics. The inventors wish to emphasize the difficulty of recognizing problems that are still emerging and that will become more apparent in the future should industry trends continue as they expect. It should be understood that different embodiments address different problems, and that not every embodiment addresses every problem with traditional systems or offers every benefit; the improvements described can be applied to solve different problems.

“Some embodiments may use a federated machine-learning architecture to support active learning to draw inferences regarding vehicles, drivers, and places based upon relatively high-bandwidth roadside and on-board sensor feeds. Some embodiments may arrange computing resources hierarchically, e.g., at the vehicle (or vehicle subsystem, like the braking system) level, at the neighborhood level, and at supra-regional levels. Lower-level computing resources may apply lossy compression to lower-level data using machine-learning techniques before reporting upwards as inputs to higher levels of the architecture. Several machine-learning methods can be used, such as unsupervised learning, reinforcement learning, and supervised learning. Some embodiments combine multiple models at different layers, in some cases optimizing model parameters according to an objective function only within a model’s context, and in other cases optimizing model parameters across models as well.

The resulting set of machine-learning models can be used for different purposes. Vehicle manufacturers and fleet operators might configure various adjustable attributes to make vehicles more appropriate for their users, for a particular place or time, or some combination thereof. Government agencies can take action to reconfigure roads and other infrastructure in response to machine-learning outputs, especially those from the higher levels of the hierarchy. Fleet operators, such as those running trucking, ride-share, or delivery services, may adjust the routing of their fleets to accommodate geographic variations indicated by outputs of the trained models. Some embodiments may allow geographic information systems to be enriched to encode attributes of road segments, intersections, and other points of interest inferred from the machine-learning models. Parts makers, such as tier-1 suppliers, can adjust the design or configuration of parts according to outputs of the machine-learning models. Insurance companies and roadside service providers might customize their offerings for fleet operators and consumers based on outputs of the trained models.

“FIG. 1 shows an example computing environment 100. Some embodiments may use the computing environment 100 to address some of these problems. The computing environment 100, for example, may be designed to address various problems associated with processing vehicle data, such as vehicle geolocation histories and vehicle sensor data streams, using computer systems. Multi-layer hardware and software infrastructure may be used in some embodiments to create a multilayer vehicle-learning infrastructure. This infrastructure can receive vehicle sensor data and push control-system adjustments, or other adjustment values, based on machine-learning operations that use sensor data from multiple vehicles and operator metrics stored within vehicle operator profiles. Vehicles can communicate with an application running on a local computing layer, a top-view computing layer, or a cloud-based computing system via wireless base stations or cellular towers. Wireless base stations can be wired to a data center or another computing device in a local computing layer, or wired to a cloud-based computing service or an application operating on the top-view computing layer. The base stations could act as access points for Wi-Fi or cellular networks that collect sensor data from vehicles within wireless range of the respective base stations (e.g., in wireless ad-hoc or centrally managed mesh networks). In some embodiments, the data center (or another computing device) that computes results from the sensor data of one or more vehicles can communicate additional results to a central computing system or to additional data centers as part of a cluster.

“In some cases, the multilayer vehicle learning infrastructure may be used to adjust vehicle operations using vehicle sensor data and geolocation data. The multilayer vehicle learning infrastructure may in some instances ingest streamed data and apply different machine-learning operations depending on the particular computing layer. Results or application states can be communicated to different degrees between different instances and among different computing layers. The multilayer vehicle learning infrastructure might include execution of various types of machine learning and other computations at a vehicle computing layer, a local computing layer, and a top-view computing layer, adapted to each layer’s bandwidth and resources.

A vehicle traveling on a road 102 may include a suite of vehicle sensors 104. The vehicle’s sensors 104 can feed diverse sensor data to a vehicle learning infrastructure, which may infer various attributes about the vehicle, its operators, or the places through which it travels. The amounts of data may be large, e.g., more than 50 megabytes per minute, more than 100 megabytes per mile, or more than 1 gigabyte per mile. In some cases, an assortment of sensor data can be obtained simultaneously, e.g., from more than five types of sensors, more than ten types of sensors, or more than 50 types of sensors. These attributes can be combined to create vehicle profiles, operator profiles, or place profiles, each of which can be used to calculate vehicle adjustment values. Sometimes, profiles can be embedded in machine-learning model parameters. These attributes can also be combined to create risk indicators for places, vehicle types, or operator types, and vehicle adjustment values can be pushed to vehicles when one or more risk values meet or exceed a threshold.

“The vehicle sensors 104 can collect a variety of data, including vehicle proximity data, vehicle control-system data, motion-based data, and geolocation data. LIDAR, visual simultaneous localization and mapping (SLAM), radar readings, or other depth-sensor data can be used to determine the proximity of objects within range of a vehicle. Vehicle proximity data may indicate the distance of objects and, in some cases, a classification of each object from an ontology that includes cars, trucks, motorcycles, bicycles, and pedestrians. An object-orientation ontology may be used to classify the position or orientation of one or more objects relative to the vehicle, as well as their movement and acceleration vectors.

“Control-system data can include data about components that are directly controlled or monitored by a vehicle operator. A control system can include any component that controls one or more elements of the vehicle’s operation, such as a steering wheel, brake pedal, accelerator pedal, turn-signal switch, light switches, and the like. A control system may also include virtual elements displayed on a screen or through augmented/virtual reality glasses, which an operator can interact with to control the operation of the vehicle. Control-system data may include information such as steering-wheel angular velocity, a cruise-control speed setting, an adaptive-cruise-control behavior selection, applied braking force, and whether a turn signal has been activated.

“Motion-based sensor data may include any data that represents the physical motion of a vehicle, or of a component of the vehicle, captured using a motion sensor such as an accelerometer or gyroscope. Motion-based sensor data may include vehicle velocity, vehicle acceleration, vehicle turn radius, vehicle impulse, and wheel slippage. For example, motion-based sensor data may include an acceleration profile comprising an assortment of acceleration readings taken by an accelerometer over a period of time.

“Vehicle operations data may contain data about the operational status of a vehicle or of one or more vehicle components. Vehicle operations data can be acquired using a variety of sensors, including those described above, as well as temperature sensors, chemical sensors, resistivity sensors, and others. Vehicle operations data might include values such as engine temperature, fuel-air ratio, oxygen concentration in the exhaust, rotations per minute, tire pressure, battery status for electric vehicles, and the like.

“For example, an operator may activate a turn signal after seeing an anomalous object 110, rotate a steering column, and press the brake pedal to slow the vehicle. Control-system data for this example could include the angular rotation of the steering column over time and the force in Newtons applied to the brake pedal over time during deceleration. Motion-based sensor data for this example could include the vehicle speed or vehicle turn radius during deceleration. Related vehicle operations data may include the electric power drawn by the turn signal or an engine temperature measurement.

“Vehicle sensors are capable of sensing other information, such as vehicle condition, vehicle operator status, and distances to objects in the vehicle’s surroundings. Some channels of sensor data can be represented as a point cloud made up of vectors, each with an origin that corresponds to the vehicle’s position. Some embodiments may include LIDAR, radar, ultrasonic sensors, or optical cameras as vehicle sensors. Some optical cameras can include stereoscopic arrays or other spatial arrangements of cameras. One or more optical cameras can provide sensor data to one or more processes to determine the geometry of the vehicle’s surroundings based on variation in coloration and focus, parallax, projected structured-light location, occlusion, or the like.

“Some embodiments could include satellite navigation sensors, such as global positioning system (GPS) sensors, Global Navigation Satellite System (GLONASS) sensors, or Galileo sensors, and other onboard sensors to determine one or more geolocations. Some embodiments may also include radios that can sense beacons from terrestrial transmitters, such as cell towers and WiFi access points, and location sensors may triangulate the vehicle’s position using such signals.

“Some embodiments include one or more temperature sensors (e.g., for air intake and exhaust), wind speed sensors, road roughness sensors, sensors that detect distances to other vehicles, light intensity sensors, and inertial sensors. Other examples include sensors that perform dead reckoning using measured wheel rotations and compasses, and sensors that report vehicle state through an OBD-II diagnostic system.

“The vehicle 102 may have an onboard computing device 106, which can perform computations on sensor data. The computing device may be distributed across the vehicle in some cases; for example, it could include a group of processors that operate on data from different sub-systems, such as the braking subsystem, transmission subsystem, vehicle entertainment subsystem, vehicle navigation subsystem, suspension subsystem, and others. The onboard computing device 106 may use the computation results for one or more tasks, and can store the results or transmit them to another layer. An operator profile and a vehicle profile may be any record or group of records that represent dynamic and static information about the vehicle and/or operator. A vehicle profile can include records that indicate miles traveled, road conditions encountered, driver behavior, and so on. For example, a vehicle profile may store information such as sudden swerves, objects detected within 10 cm of the vehicle, and decelerations in an events log. An operator profile can also contain information about the vehicle operator’s past events with regard to one or more vehicles. In some embodiments, one or more processors may execute as part of the vehicle computing layer to maintain these profiles. An example is an operator profile log stored as an array of strings, where each entry may contain a vehicle identifier, a timestamp, and other relevant operational events, such as “vehicle ID LVICCRMND04; 2020-05-05 12:00; drove 3.5 hours without incident.”
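As a non-limiting sketch, an operator-log entry of the kind quoted above could be split into its fields as follows. The semicolon delimiter, the field order, and the normalized timestamp format are illustrative assumptions, not part of the disclosure.

```python
def parse_log_entry(entry):
    """Split one operator-log string into (vehicle_id, timestamp, event).
    Assumes a semicolon-delimited entry, as in the example above."""
    vehicle_id, timestamp, event = [part.strip() for part in entry.split(";", 2)]
    return vehicle_id, timestamp, event

record = parse_log_entry(
    "vehicle ID LVICCRMND04; 2020-05-05 12:00; drove 3.5 hours without incident")
```

In practice such entries would more likely be structured records than free-form strings; the string form simply mirrors the example in the text.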

The sensor data may include raw sensor data, such as analog data or un-transformed digital data. The raw sensor data could include, for example, a list of distances detected by a LIDAR array or a steering wheel’s angular velocity. The sensor data may also include derived data, where derived data is calculated based on other sensor data. The derived sensor data may be stored as a numerical value or an unsorted array. Alternatively, or in addition, the derived sensor data can be converted into an ordered or more structured form, such as a timestamped GPS geolocation. A timestamped geolocation may include an array in which the first entry represents latitude, the second entry represents longitude, the third entry represents the time of recording, and the fourth entry represents a confidence radius centered around the latitude and longitude, which could be calculated based on the GPS sensor’s confidence.
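The four-entry ordered form described above can be sketched in a few lines. The helper name and units (meters for the confidence radius, Unix seconds for the timestamp) are illustrative assumptions.

```python
import time

def make_timestamped_geolocation(lat, lon, confidence_radius_m, ts=None):
    """Pack a GPS reading into the ordered four-entry form described above:
    [latitude, longitude, recording time, confidence radius]."""
    if ts is None:
        ts = time.time()  # default to the current time if none was recorded
    return [lat, lon, ts, confidence_radius_m]

reading = make_timestamped_geolocation(40.7128, -74.0060, 2.5, ts=1588680000.0)
```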

“Some embodiments may receive derived sensor data that has been determined by one or more sensor-level processing units. For example, a vehicle sensor could include an inertial measurement unit (IMU) (e.g., a three- or six-axis IMU), which may provide a numeric representation of a measured force along with a classification value indicating that the measured force meets a threshold suggestive of an imminent collision. Some embodiments can also generate derived sensor data based on previously-derived or unprocessed sensor data.

“The vehicle agent 108 can be executed by the onboard computing device 106. The vehicle agent 108 may be an application that listens for sensor data: it can act as a listener that waits for one or more events to occur and then performs a set of tasks in response. The event could include the receipt of unprocessed or processed sensor data, in response to which the agent 108 can store the data or perform computations. The agent 108 can also poll other applications, devices, or sensors to obtain sensor data. The vehicle agent 108 may perform data compression, feature reduction, noise filtering, privacy filtering, and machine-learning training operations. The agent 108 may be included in a distributed vehicular data processing suite, as described further below. A dynamic, self-updating set of learning-model algorithms may be part of the vehicular data processing suite, which may partially execute on the onboard computing device 106. As described below, portions of the vehicular data processing suite can also be executed on a top-view computing layer or local computing layer.
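The listener behavior described above can be illustrated with a minimal event-dispatch sketch. The class and method names are hypothetical; the disclosure does not specify an API for the agent.

```python
from collections import defaultdict

class VehicleAgent:
    """Minimal event-listener sketch of an onboard agent: callbacks are
    registered per event type and invoked when matching data arrives."""
    def __init__(self):
        self._handlers = defaultdict(list)
        self.log = []  # records of the tasks performed in response to events

    def on(self, event, handler):
        """Register a handler to run when `event` occurs."""
        self._handlers[event].append(handler)

    def emit(self, event, payload):
        """Deliver sensor data (the event payload) to all registered handlers."""
        for handler in self._handlers[event]:
            handler(payload)

agent = VehicleAgent()
# Hypothetical task: note the size of each received LIDAR frame.
agent.on("lidar_frame", lambda d: agent.log.append(("received", len(d))))
agent.emit("lidar_frame", [1.2, 3.4, 5.6])
```

A real agent would also poll sensors and queue work asynchronously; the sketch only shows the listen-and-respond pattern named in the text.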

“In some embodiments, an onboard computing device 106 may receive multiple channels of sensor data from a plurality of sensors, for example, more than 10, more than 20, and, in most commercially relevant cases, more than 50. Each channel may contain one or more streams of values that vary over time to encode different endogenous and exogenous aspects of the vehicle or environment. The device 106 can associate the values with each other based on when they were received, or based on timestamps. Some embodiments group sensor values into temporal buckets; for example, values may be grouped based on the most current state every 500 ms, every 50 ms, every 10 ms, or more often. The onboard computing device 106 may execute one or more machine learning models to produce one or more channels of output based on different subsets of the sensor data.
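The temporal bucketing described above (keeping the most current value per fixed-width time window) can be sketched as follows; the record layout and bucket width are illustrative assumptions.

```python
def bucket_readings(readings, bucket_ms):
    """Group (timestamp_ms, value) samples into fixed-width temporal buckets,
    keeping the most recent value in each bucket."""
    buckets = {}
    for ts, value in sorted(readings):
        buckets[ts // bucket_ms] = value  # later samples overwrite earlier ones
    return buckets

samples = [(3, "a"), (7, "b"), (12, "c"), (18, "d")]
latest = bucket_readings(samples, bucket_ms=10)  # bucket 0: "b", bucket 1: "d"
```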

“Computations and related activities may be performed by the vehicle agent 108, which can be part of the vehicle computing layer in a multilayer vehicular learning infrastructure. Although certain tasks and algorithms are described as being performed by an agent running at the vehicle computing layer of a multilayer vehicle-learning infrastructure, other embodiments can perform these and other tasks at other layers. For example, this disclosure describes an embodiment that employs an autoencoder to reduce the feature space of sensor data at the vehicle computing layer, but other embodiments could use an autoencoder to reduce data sent from the local computing layer. The vehicle agent 108 can perform one or more tasks related to processing sensor data, analyzing sensor data, and taking action in response.

“Some embodiments may use lossless data compression methods such as run-length encoding, Lempel-Ziv compression methods, and Huffman coding methods. For example, one or more processors attached to a vehicle could apply the Lempel-Ziv 77 (LZ77) algorithm for lossless compression of geolocation data by scanning the data with a sliding window. One or more attached processors may also implement lossy data compression methods such as transform coding, discrete cosine transforms, discrete wavelet transforms, or fractal compression.
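Of the lossless families named above, run-length encoding is the simplest to sketch. This is a minimal illustration of that one method, not the LZ77 sliding-window variant also mentioned in the text.

```python
def run_length_encode(data):
    """Losslessly compress a sequence into (value, count) pairs."""
    encoded = []
    for item in data:
        if encoded and encoded[-1][0] == item:
            encoded[-1] = (item, encoded[-1][1] + 1)  # extend the current run
        else:
            encoded.append((item, 1))                 # start a new run
    return encoded

def run_length_decode(encoded):
    """Invert run_length_encode, recovering the original sequence exactly."""
    return [item for item, count in encoded for _ in range(count)]

data = list("aaabbc")
encoded = run_length_encode(data)  # [("a", 3), ("b", 2), ("c", 1)]
```

Run-length encoding pays off only when sensor channels contain long constant runs (e.g., a parked vehicle's speed channel); the dictionary-based Lempel-Ziv methods handle more general redundancy.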

“Some embodiments may employ one or more agents running on the vehicle computing layer to perform a machine learning operation. The machine-learning operation in some embodiments may include a training operation that uses sensor data and a gradient descent method (e.g., batch gradient descent, stochastic gradient descent, etc.) to minimize an objective function. For example, some embodiments may employ a machine learning operation that predicts the response profile to a braking event upon detection of an object or arrival at a specific geolocation. Another example is a machine learning operation that predicts the most risky or vulnerable aspect of a vehicle’s operational performance based on data from sensors such as motion-based sensors and internal engine temperature and pressure sensors.
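The objective-minimizing training operation described above can be illustrated with batch gradient descent on the simplest possible model, a one-feature linear predictor with a mean-squared-error objective. The model, learning rate, and step count are illustrative; onboard models would have far more parameters.

```python
def gradient_descent(xs, ys, lr=0.01, steps=2000):
    """Fit y ≈ w*x + b by batch gradient descent on mean squared error."""
    w = b = 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of the MSE objective with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data generated from y = 2x + 1; the fit should recover those values.
w, b = gradient_descent([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```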

“As explained below, the results of a machine learning operation may include the output of the operation or the state values of that operation. For example, a machine-learning result could include a predicted acceleration pattern or the perceptron weights and biases used in a neural network. The machine-learning results can be used to initiate, stop, or modify vehicle operations. For example, the vehicle agent 108 might use LIDAR sensor data to detect and classify an anomalous object 110 and, based on that classification, play a visual or auditory warning message. The machine-learning results can also be sent to a local computing layer or a data center, and may be compressed, feature-reduced, or encrypted before being transmitted. Some cases require that training be performed onboard; in other cases, some embodiments can offload training, while others may obtain model parameters through transfer learning.

“Some embodiments can generate or update a vehicle profile or operator profile. These profiles may include vehicle data or operator data that is equal to or based on machine learning results. The process of updating a vehicle profile can include adding vehicle data to an existing profile, merging vehicle profiles, deleting vehicle profiles, or creating a new profile. Updating an operator profile is similar and could include creating a new profile, adding information to an existing profile, merging two profiles into a single operator profile, or deleting an operator profile.

“In certain embodiments, a vehicle profile and associated operator profile could indicate an increased risk of accident when the vehicle is traveling on a road or road type associated with dangerous conditions. A vehicle profile can be sensitive to particular vehicle operators, which may lead to different risk values depending on which operator is using the vehicle. These risk values can be normalized using the distribution of values for the population at the location. For example, some embodiments might include an operator profile reporting that a vehicle’s driver corners faster than 95% of other drivers, or brakes harder than 80% of other drivers, at the same location.

“In some embodiments, a vehicle profile or an operator profile could include a set of associated geolocations that form a travel path, and the terminal ends of this path may be associated with additional data. For example, as the vehicle 102 moves down the road 160, terminal ends along the route could be associated with additional operator data, such as an online account of the operator. Some embodiments may cluster or up-weight geolocations associated with the beginnings and ends of trips in certain computations, in order to filter out non-terminal points. Some embodiments can detect terminal ends when a vehicle sensor detects less than a threshold speed, or less than a threshold amount of movement, over a specified time period.
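The speed-threshold detection of terminal ends described above can be sketched as a scan for sustained low-speed dwells. The record layout (timestamp in seconds, speed in m/s, latitude, longitude) and both thresholds are assumptions for illustration.

```python
def detect_terminal_ends(trace, speed_thresh=1.0, dwell_s=60):
    """Return points in a (ts_s, speed_m_s, lat, lon) trace where speed
    stays below `speed_thresh` for at least `dwell_s` seconds."""
    terminals = []
    run_start = None  # (ts, lat, lon) where the current low-speed run began
    for ts, speed, lat, lon in trace:
        if speed < speed_thresh:
            if run_start is None:
                run_start = (ts, lat, lon)
            elif ts - run_start[0] >= dwell_s and run_start not in terminals:
                terminals.append(run_start)  # dwell long enough: a terminal end
        else:
            run_start = None  # vehicle moving again; reset the run
    return terminals

trace = [(0, 15.0, 40.0, -74.0), (30, 0.5, 40.1, -74.1),
         (60, 0.2, 40.1, -74.1), (95, 0.1, 40.1, -74.1),
         (120, 12.0, 40.2, -74.2)]
stops = detect_terminal_ends(trace)
```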

“In certain embodiments, data from a set of sensors attached to a first vehicle may be used to provide information about a second vehicle, which may then be stored in a vehicle profile for the second vehicle. For example, the vehicle sensors 104 can capture images, videos, and audio recordings of a second vehicle 192. The vehicle agent 108 may analyze one or more of the recordings, create a second vehicle profile for the second vehicle 192, and store the analysis results in that profile. The second vehicle profile can include one or more features such as vehicle shape, vehicle color, or license plate. The vehicle agent 108 can store the second vehicle profile in a non-transitory, computer-readable medium on the onboard computing device 106, or transmit it to a base station 120. The vehicle profiles generated by multiple vehicles may be combined into a single profile for the same detected vehicle, even if the detected vehicle does not itself transmit data or sensor outputs.

“In some instances, the onboard computing devices of one or more vehicles may form part of the vehicle computing layer in a multilayer vehicle-learning infrastructure. In a federated learning (FL) architecture, an agent can perform a training operation and then send the results to another infrastructure layer. Each of the computing devices attached to a vehicle may form part of an FL population that performs training operations or other machine-learning operations. An FL round may allow training weights, results, or other values to be transmitted from one or more agents to another layer without exposing the training data to other agents on the same layer. These values may be used to increase the accuracy and speed of vehicle computing layer agent computations or to provide instructions for control-system adjustments. An FL method can thus improve vehicle privacy while retaining high predictive power for a machine-learning model and providing instructions for control-system adjustments.
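The aggregation step of such an FL round can be sketched as a size-weighted average of per-vehicle weight vectors (a FedAvg-style update): only the weights leave each vehicle, never the training data. The flat list-of-floats weight representation is an assumption for illustration.

```python
def federated_average(client_weights, client_sizes):
    """Combine per-vehicle model weight vectors into a global vector,
    weighting each client by its local training-set size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Two vehicles report 2-parameter models; the second trained on 3x more data.
global_w = federated_average([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```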

“The vehicle 102 has a wireless network interface 105 that allows the vehicle to send and receive data via the base station 120 over a wireless data session 109. The sent data can include raw sensor data from the vehicle sensors 104 as well as analysis results generated by the vehicle agent 108, and may be compressed, encrypted, or feature-reduced. In some instances, the receiving servers can be hosted by manufacturers of the vehicle, suppliers of equipment used within the vehicle, or other third parties. In some embodiments, the wireless network interface may include a radio that transmits data between the vehicle and cellular base stations, or a satellite radio that can communicate with various satellite networks in low-earth or higher orbits.

“A roadside sensor 134 is shown attached to a roadside device 130. Any sensor located close enough to a road to capture an auditory, visual, or other feature of the road (e.g., sensing cars on the road) may be called a roadside sensor. Some embodiments include roadside sensors that provide roadside sensor data to further improve the predictive ability of multilayer vehicle learning systems. Roadside sensor data may include temperature, pressure, humidity, still images, and audio streams. In some instances, data from the roadside sensors may be used to verify data collected from individual vehicles in the vehicle computing layer, such as the vehicle 102; for example, computing devices can use these data to validate sensor data transmitted by vehicles for modeling validation.

“In certain embodiments, the roadside device 130 can capture images, videos, or sounds of or associated with vehicles or objects. For example, the roadside sensors 134 can capture images, videos, or sounds associated with both the vehicle 102 and the anomalous object 110. A roadside agent 138, executing on one or more processors of a roadside computing device, can use the roadside sensor data to perform machine-learning operations. For example, the roadside agent 138 might classify moving objects as pedestrians, vehicles, or wildlife based on a video stream. Some embodiments may also correlate geolocation data from vehicles with the known locations of roadside devices to aid in classifying events that correspond with unusual vehicle behavior. For example, the roadside agent 138 could analyze the video stream captured by the roadside sensors 134 together with vehicle geolocation data from the vehicle 102 to identify the anomalous object 110.

“In some instances, the vehicle computing device 106 may first label the anomalous object 110, and the local computing data center 122 may then label the anomalous object 110 based upon a visual recording taken by the roadside sensors 134. The local computing data center 122 may recognize that the geolocations recorded by the roadside sensors are similar to the geolocations of the vehicle 102 and can analyze the detected features. An application can compare features based on their characterized shapes, estimated sizes, colors, and other categorical or quantifiable values. In some cases, the application can determine that the features are physically linked (e.g., parts of a larger component, different views of the same object, etc.).”

“The roadside device 130 can send data to the local computing layer using methods similar to those used by the vehicle 102. The roadside device 130, for example, can transmit roadside data and/or results based on roadside data to the base station 120 via a wireless data session 139. Some embodiments allow the roadside device 130 to send and receive data from the local computing data center 122 via a wired communications network. Alternatively, or additionally, the roadside device may in certain embodiments transmit roadside data to a cloud computing application 140, where the cloud computing application 140 is part of the top-view computing layer.

“The vehicle computing layer data may be transmitted via the base station 120 to a local application executing at the local computing data center 122 of the local computing layer. In some cases, the local application may include instructions to run a local computing agent 128 on the local computing data center 122, and the data may be analyzed by the local computing agent 128. Some embodiments transmit at least a portion of the sensor data via a standardized interface such as Sensoris, alone or in combination with other interface standards. The results from the vehicle computing layer can be sent to the local computing layer using various wireless protocols and devices, as described further below.

“In some embodiments, the data center 122 may acquire a first visual recording of the second vehicle 192 from the vehicle sensors 104, and perform a machine learning operation on that first visual recording to identify or record features of the second vehicle 192. For example, some embodiments may use optical character recognition (OCR) to read a license plate and other machine-vision operations to identify vehicle shape and color, and these features can then be mapped to a third vehicle profile. Some embodiments may acquire a second visual recording of the second vehicle 192, such as one captured using the roadside sensors 134, and another machine-learning operation can capture its features and map them to a fourth vehicle profile. A local application running on the data center 122 might recognize similarities in the geolocations of the second vehicle 192 during capture and compare the third vehicle profile to the fourth vehicle profile using their mapped features. The local application might then determine that the third and fourth vehicle profiles correspond to the same second vehicle 192 based on similar or identical mapped features and merge the profiles. In some embodiments, a vehicle profile corresponding to the second vehicle may be marked as stolen or dangerous, and a warning may be presented in the interior of the first vehicle 102 to alert occupants to the risk associated with the second vehicle 192.

Some embodiments can process large amounts of data, which may impose significant processing speed, memory, and processing complexity requirements. In some cases, more than 100, 1,000, 10 million, or 100 million vehicles may report data to a data center in a local computing layer or to a top-view application in a top-view computing layer. Some embodiments may have more than one, five, ten, or fifty sensors per vehicle, and more than five, ten, or twenty different types of sensors on each vehicle. In some cases, multiple instances of the same sensor type can be used to monitor the vehicle and its environment. Some embodiments may include heterogeneous vehicles within a vehicle computing layer; for example, a vehicle computing layer could contain more than 10, 20, or 50 types of vehicles, and may include vehicles from more than two, five, or ten model years. Alternatively, the vehicles could include more than 5%, 10%, or 20% of the US public's vehicle fleet. In some cases, vehicles may travel over areas larger than 100 km or 10,000 km across, spanning one or more countries, while reporting data to multilayer vehicle learning systems. Some embodiments use concurrent processing frameworks such as Apache Spark™ or Apache Storm™ to process large amounts of data, and some embodiments can also use data warehouse platforms such as Google BigQuery™ or DataTorrent RTS™.

“Some embodiments may use deep learning systems, such as deep neural networks, deep belief networks, convolutional neural networks (CNNs), mixed neural networks, and other deep learning systems. Deep learning systems can be used for various purposes, including adjusting vehicles in the vehicle computing layer or transmitting results to other computing devices at the top-view computing layer or local computing layer. These results may include a quantitative value, a category label, a cluster, or other outputs from a neural network's output layer. The results may also include parameters used in the neural network's training operations, or weights generated during training.

Deep learning systems can be used in vehicle computing devices within the vehicle computing layer, but the computing costs may decrease the benefits of performing expensive operations on devices at that layer. The local computing data center 122 may have greater computing resources than the onboard computing device 106, which may allow the local computing layer to produce faster and more precise computations. The results of the local computing layer might include control-system adjustment values that can be pushed back to the vehicles. These control-system adjustment values can be used to adjust the vehicle's physical response to user control and automated control, modifying the vehicle's response to any operator-controlled action, such as turning the steering wheel, activating the headlights or a turn signal, activating the cruise control system, using the anti-lock braking system (ABS), or stepping on the accelerator. Example values include vehicle braking responsiveness, vehicle acceleration responsiveness, and steering wheel responsiveness under cruise control or operator-controlled driving. This may prove useful in reducing accidents or other harmful incidents.

“The local computing agent 128 may update the vehicle profile for the vehicle 102 using computations that take into account data from other vehicles. For example, the local computing agent 128 might update a vehicle profile to add a tag indicating that the vehicle 102 drives an average of 10% faster along the road 160 compared to the median speed of all other vehicles along the road 160; a similar update may be made to an operator profile. Some embodiments can access attributes of the locations through which a vehicle travels and use them to update its vehicle profile. For example, a vehicle profile might indicate greater accumulated degradation when the vehicle 102 travels through an area with rough roads, based on sensor data from multiple vehicles. As another example, the vehicle profile of the vehicle 102 might be updated to increase a wear value after it is determined that the vehicle 102 has traveled to a location with substances known to corrode vehicle components (e.g., salt, water-salt mixtures, acids, etc.).”

“In some cases, vehicle profiles can be accessed to determine insurance risk or to schedule preventative maintenance. Some embodiments allow self-driving algorithms to be modified to reflect vehicle profiles; for example, a self-driving algorithm may be programmed to reduce aggressive acceleration, cornering, or deceleration in response to wear on the vehicle or to higher road risk, such as at intersections, corners, exits, or on-ramps. The multilayer vehicle learning platform may also perform machine-learning operations to assess the riskiness and other performance characteristics of autonomous driving systems.

“Results of the local computing layer computed with the local computing data center 122 may be sent to the top-view cloud computing application 140. The top-view computing layer may have one or more processors that execute a top-view computing agent 148. In some cases, the top-view computing agent 148 could act as an overarching agent that communicates directly with local computing agents and vehicle agents without intermediary communication between layers. The top-view computing agent 148 may perform the tasks and algorithms described above, and can also perform region-wide analyses using data from multiple local computing layers. For example, the top-view computing agent 148 can predict outcome probabilities for whole regions of vehicle operators or entire vehicle populations, and can determine risk values for specific regions, vehicle types, and operator profiles. These risk values can be correlated with specific incidents or vehicle designs in order to identify opportunities for infrastructure optimization and vehicle design optimization.

“Some embodiments can be configured to create a location profile using data from individual vehicles from either the local computing layer or the vehicle computing layer. One or more top-view computing agents may execute machine-learning operations based on data from multiple vehicles and data centers. For example, the top-view computing agent 148 might classify road risks based on a collection of vehicle profiles and event logs. The top-view computing agent 148 can also determine that a road is in a school zone and is congested during school hours, using a trained neural network that takes vehicle speed histories and image data as inputs, and can raise the road risk value for such roads during school hours.
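The risk-raising step in the school-zone example above can be sketched as a conditional adjustment of a segment's base risk value. The multipliers and the 07:00-16:00 school-hours window are illustrative assumptions, not values from the disclosure.

```python
def adjust_road_risk(base_risk, is_school_zone, hour, congested):
    """Raise a road-segment risk value for school zones during school
    hours, with a further increase when the segment is congested."""
    risk = base_risk
    if is_school_zone and 7 <= hour <= 16:  # assumed school-hours window
        risk *= 1.5                         # assumed school-zone multiplier
        if congested:
            risk *= 1.2                     # assumed congestion multiplier
    return risk

risk = adjust_road_risk(1.0, is_school_zone=True, hour=8, congested=True)
```

In a deployed system the classifier outputs (school zone, congestion) would come from the trained network described in the text; here they are passed in directly.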

“Some embodiments may link a behavior, an operator profile, or a vehicle profile to a risk value. For example, a top-view computing agent 148 might train or use machine-learning systems to determine the relationship between accident risk and vehicle speed. Some embodiments can also determine a risk value associated with a vehicle's starting point, destination, operator, specific vehicle, or vehicle type. For example, some embodiments may assign a risk value to a specific vehicle driven by a particular operator based on its predicted destination. Alternatively, or in addition, certain embodiments can determine or update a risk value based upon two-part relationships between vehicle operators and vehicles, or upon three-part relationships among a vehicle operator, a vehicle, and a location, where these relationships are determined from a portion of a road network graph.

“In some embodiments, one or more processors of the top-view computing layer may track, merge, or associate a plurality of vehicle profiles and operator profiles. In some embodiments, the number of vehicle occupants may exceed the number of vehicles, and vehicle operators and passengers other than the operator may occupy different vehicles at different times. An agent executing on the top-view computing layer or local computing layer may, in some embodiments, calculate a probability associated with each vehicle and user pair.

“FIG. 2 illustrates a computing environment 200 that may include various learning infrastructures in accordance with the present techniques. The computing environment 200 may implement some or all of the techniques described above. The vehicles 211-213 each have an onboard computer 215-217, a set of sensors 221-223, and a wireless network interface controller 225-227. Each of the onboard computers 215-217 can process data collected by the sensors 221-223 and transmit it wirelessly to a vehicle data analytics system 250. Third-party vehicle data providers 231-233 can also provide additional data corresponding to one or more of the vehicles 211-213. A user may access network resources 245-247 while occupying one or more of the vehicles 211-213, and data derived from these network resources 245-247 can also be used as inputs to machine-learning operations by the vehicle data analytics system 250.

“Some embodiments could include methods to link operator records with user profiles or other information from an online source. In some embodiments, vehicle occupants or operators carry various networked computing devices near or inside their vehicle. As FIG. 2 shows, a networked computing device could be any of the mobile computing devices 241-243, each of which may correspond to an operator or passenger in one of the vehicles 211-213. A mobile computing device may include a smartphone, tablet, laptop computer, and the like. Another example of a networked computing device is a wearable device such as a smartwatch, a head-mounted display, or a fitness tracker. These computing devices can include processors, memory, and an operating system in some cases. Some embodiments may also use data from these networked computing devices to aid in machine learning operations.

“In some instances, the computing environment 200 may include the vehicle data analytics system 250, which can receive data from any of these components. The vehicle data analytics system 250 can be executed on one server of a local computing layer, or on multiple servers of a distributed computing network as part of the top-view computing layer. The vehicle data analytics system 250 may be used to store and determine attributes of vehicle operators, passengers, and places visited. As shown in FIG. 2, the vehicle data analytics system 250 may include a user profile repository 251, a vehicle profile repository 252, and a geographic information system (GIS) repository 253. The user profile repository 251 can store attributes about users, such as vehicle passengers or vehicle operators; for example, it could store operator profiles, which may include average driving speeds. The vehicle profile repository 252 can store attributes about vehicles; for example, it might store one or more vehicle profiles, which may include a list of near-collisions and collisions for each vehicle. The GIS repository 253 may store one or more road network graphs, as well as attributes of roads, specific vehicle paths, and other relevant information; for example, the GIS repository 253 might store a road graph and traffic information corresponding to specific road segments on the road graph.

“FIG. 3 shows a process 300 with operations that use a multilayer vehicle-learning infrastructure, such as that of FIG. 1 or FIG. 2, to compute vehicle results, adjust vehicle operations, and affect vehicle performance in accordance with some embodiments. The multilayer vehicle-learning infrastructure might execute one or more routines within the computing environments 100 and 200. Some embodiments may accomplish the operations of process 300 by executing program code stored on one or more instances of a machine-readable, non-transitory medium, which may in some cases include different subsets of instructions being stored on different physical embodiments of the medium and executed by different processors described herein.”

“In some embodiments, process 300 may include receiving sensor data from vehicle sensors, as indicated by block 302. This may include data from the vehicle sensors 102 that gather information about the vehicle and the environment around it. These sensors can collect various types of sensor information, including motion-based sensor data and control-system sensor data. One or more sensors attached to a vehicle may capture images, videos, LIDAR measurements, audio measurements, or airbag deployments. For example, a vehicle sensor might obtain vehicle geolocations and control-system data, such as steering and ABS positions, along with vehicle proximity data in the form of LIDAR data, every 50 milliseconds during a trip.

“Some embodiments can also refine sensor data to implement an active learning approach. After detecting a potentially dangerous behavior, some embodiments might display a query to the vehicle operator. For example, a vehicle might display a query to the operator after detecting a swerve event, such as a question about whether a collision was avoided or whether the LIDAR system provided appropriate warnings. A query response can be sent to a program running on a local computing layer and integrated into an active learning system for machine-learning applications.

“Some embodiments may determine derived sensor data from unprocessed sensor data using one or more processors in the vehicle computing layer, as indicated by block 304. As the vehicle 102 of FIG. 1 moves along a route, digital or numerical data may be collected from vehicle sensors and analyzed to determine the derived sensor data. One example is a LIDAR-adapted program executing on one or more processors that receives a numerical array representing LIDAR readings and creates an event record indicating that a human-shaped object was within the vehicle's threshold distance. In some embodiments, any combination of sensor data and derived sensor data may be used to create additional derived sensor data.
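The LIDAR example above can be sketched as follows. This is an illustrative sketch only: the threshold distance, the record fields, and the assumption that a separate classifier has already marked which beams hit a human-shaped object are all invented for the example, not taken from the disclosure.

```python
import numpy as np

THRESHOLD_M = 2.0  # hypothetical threshold distance

def derive_proximity_event(lidar_ranges, human_mask, timestamp):
    """Return an event record if a human-shaped object is within the
    vehicle's threshold distance, else None."""
    hits = lidar_ranges[human_mask]          # ranges classified as human-shaped
    if hits.size and hits.min() < THRESHOLD_M:
        return {"event": "human_proximity",
                "distance_m": float(hits.min()),
                "time": timestamp}
    return None

ranges = np.array([12.0, 1.4, 7.5, 30.0])        # raw LIDAR ranges in meters
mask = np.array([False, True, False, False])     # beam 1 flagged as human-shaped
record = derive_proximity_event(ranges, mask, timestamp=1_700_000_000)
```

Here the derived sensor data (the event record) is far smaller than the raw range array, which is the point of computing it at the vehicle layer.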

Block 308 indicates that some embodiments may use one or more privacy functions to protect sensitive data. Sensitive data could include information about a vehicle operator that may be used for identification purposes, as well as proprietary or confidential vehicle data, sensor data, or other data about a vehicle operator. Applying privacy operations may include encrypting, modifying, or removing certain types of data from sensor data or other data about vehicles, vehicle operators, passengers, and objects in the vehicle. One or more privacy functions may be called in some embodiments to increase the resistance of vehicles to being exploited by malicious entities.

Some privacy functions can delete identifiable data after it has been used. For example, privacy functions can erase data such as audio recordings of vehicle occupants, video recordings or images of vehicle passengers, and identifying data such as biometrics, names, identification numbers, gender labels, or age identifiers. For example, a privacy function can be called to modify or delete a video recording of an operator facing the windshield while driving.

“Some embodiments may inject noise into sensor data (or data derived or otherwise inferred from it) by using a noise injection mask and performing a differential privacy operation. Privacy-driven noise terms can alter sensor data such as acceleration logs, destination coordinates, and vehicle operator appearance. An application may determine the value of a privacy-driven noise term from a probability distribution, which means that the same probability distribution or its defining characteristics can be used for computations on a layer other than the vehicle computing layer. Examples of probability distributions include the Gaussian distribution, the Laplace distribution, the uniform distribution, and others. Some embodiments use noise injection to alter the sensor data, or to transform sensor data results from an original set of values into a masked set of values, which can reduce the likelihood of tracing specific data back to a specific vehicle or operator. Each vehicle that sends data may inject a different amount of noise than the other vehicles sending data. A federated learning (FL) method may also be used with noise injection operations, as described further below.
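A minimal sketch of the Laplace variant of this noise injection, in the style of the standard Laplace mechanism from differential privacy. The sensitivity and epsilon values, and the choice of destination coordinates as the masked quantity, are illustrative assumptions.

```python
import numpy as np

def laplace_mask(values, sensitivity=1.0, epsilon=0.5, rng=None):
    """Add Laplace noise with scale b = sensitivity / epsilon to each value,
    producing a masked set of values in place of the originals."""
    rng = rng or np.random.default_rng()
    b = sensitivity / epsilon
    return values + rng.laplace(loc=0.0, scale=b, size=values.shape)

coords = np.array([40.7128, -74.0060])   # original destination coordinates
masked = laplace_mask(coords, sensitivity=0.001, epsilon=0.1,
                      rng=np.random.default_rng(0))
```

Because only the distribution's defining characteristics (scale b) are needed, another computing layer can reason about the noise statistically without knowing the per-vehicle noise values.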

“Some embodiments may use machine-learning encryption methods. Example encryption methods may include partially homomorphic encryption methods based on cryptosystems such as Rivest-Shamir-Adleman (RSA), Paillier, ElGamal, and the like. Examples of encryption methods could also include fully homomorphic encryption techniques based on cryptosystems such as Gentry, NTRU, Gentry-Sahai-Waters (GSW), FHEW, or TFHE. Some embodiments may use a fully homomorphic encryption technique to increase adaptability and robustness of machine-learning operations, as well as to increase resistance to attacks by quantum computing systems.
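To illustrate what "partially homomorphic" means for the Paillier cryptosystem named above, here is a toy Paillier implementation that adds two plaintexts by multiplying their ciphertexts. The tiny primes are for demonstration only; a real system would use a vetted library and 2048-bit keys.

```python
import math
import random

# Toy Paillier key generation (demonstration-sized primes)
p, q = 47, 59
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
L = lambda x: (x - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)        # modular inverse of L(g^lam mod n^2)

def encrypt(m, rng=random.Random(0)):
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:             # r must be coprime to n
        r = rng.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return L(pow(c, lam, n2)) * mu % n

c1, c2 = encrypt(12), encrypt(30)
total = decrypt(c1 * c2 % n2)              # homomorphic addition: 12 + 30
```

This is the property that lets a local or top-view layer aggregate encrypted vehicle results (e.g., sum encrypted counts) without decrypting the individual contributions.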

Some embodiments can execute a smart contract to receive a request for data about a vehicle or operator profile. Some embodiments might then determine whether the request is authorized, such as by checking whether it is associated with (e.g., contains) a cryptographically signed value that has been signed either by the individual that corresponds to the data or by a delegate. Some embodiments can determine whether the cryptographic signature is associated with the individual or the delegate using a public cryptographic key of the individual or delegate. Once the keys are confirmed to be compatible, some embodiments can send a record that allows access to the requested data, such as a cryptographic key that may allow the data to be decrypted, or the data itself. Some embodiments may send a cryptographic key, or data encrypted with the public cryptographic key of the requester, which allows the requester to access the values in the ciphertext using their private cryptographic key. Multiparty signatures may be required in some embodiments to gain access to data. Some smart contracts, for example, may require both the person described in the data and the provider of the data (such as a fleet owner or car maker) to approve a request for access to secured data.
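A stripped-down sketch of the authorization check: the request is granted only if it carries a value signed with the data subject's (or delegate's) private key, verified against the matching public key. Textbook RSA with toy parameters stands in for a real signature scheme and for the smart-contract runtime, both of which are assumptions of this sketch.

```python
import hashlib

# Toy RSA keypair: public (n, e), private exponent d
n, e, d = 3233, 17, 2753

def _digest(msg: bytes) -> int:
    """Hash the request down to an integer modulo n."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    """Performed by the individual (or delegate) who owns the data."""
    return pow(_digest(msg), d, n)

def is_authorized(msg: bytes, signature: int) -> bool:
    """Performed by the smart contract using the public key (n, e)."""
    return pow(signature, e, n) == _digest(msg)

request = b"read:operator-profile/42"      # hypothetical request payload
sig = sign(request)
```

If `is_authorized` returns true, the contract would release a record granting access (a decryption key or the data itself), per the paragraph above.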

“Some embodiments could include performing a vehicle-layer machine-learning operation executed on the vehicle computing layer to generate vehicle computing layer results based upon the sensor data, as indicated by block 312. Some embodiments may use an agent executing on the vehicle computing layer's processors. The agent may implement a machine-learning method to perform dimension- or feature-reducing tasks, such as by using a neural network in the form of an autoencoder, where an autoencoder is a neural network comprising an encoder layer and a decoder layer, and the number of nodes in its input layer equals the number of nodes in its output layer. One or more processors can train the agent's autoencoder on unprocessed LIDAR data, and the trained autoencoder can convert the unprocessed LIDAR dataset into a form that is easier to interpret and requires less data. Some embodiments may use these reduced parameters to create an efficient ontology from sensor data. For example, some embodiments may recognize that an autoencoder reduces data from 20 sensors such that 95% of the reduced values derive from five of the sensors, and may provide instructions to transmit only the data from those five sensors. Additional sensors can be attached to vehicles to provide additional sensor data, which the agent may then use to generate results. The agent may also perform additional training operations to retrain machine-learning parameters and weights after additional sensor data has been provided.
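A minimal numpy sketch of the autoencoder shape described above: the input and output layers have the same width (20 "sensors" here), and a 5-node bottleneck carries the reduced representation. Linear layers, synthetic data, and plain gradient descent are simplifying assumptions to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                 # 100 snapshots of 20 sensors
W_enc = rng.normal(scale=0.1, size=(20, 5))    # encoder: 20 -> 5
W_dec = rng.normal(scale=0.1, size=(5, 20))    # decoder: 5 -> 20 (same as input)

def loss(X, W_enc, W_dec):
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

initial = loss(X, W_enc, W_dec)
lr = 0.05
for _ in range(1000):
    H = X @ W_enc                              # encode to 5 dimensions
    R = H @ W_dec                              # decode back to 20
    G = 2.0 * (R - X) / X.size                 # dL/dR for mean-squared error
    grad_dec = H.T @ G
    grad_enc = X.T @ (G @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
final = loss(X, W_enc, W_dec)
```

The 5-dimensional code `H` is what a vehicle-layer agent would transmit instead of the full 20-sensor snapshot.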

“In some embodiments, a vehicle-layer machine-learning operation may include a training operation using sensor data, where the training operation can include a supervised learning operation based on an objective function. The supervised learning operation may treat sensor data as input in order to minimize the objective function, which may be a difference between one or more outputs of a machine-learning program and known values in a training dataset. One example is training a vehicle-layer machine-learning system that uses vehicle proximity data to predict operator reaction times, using gradient descent to optimize the objective function. In this example, the objective function may be the difference between measured reaction times and the predicted reaction times generated by the machine-learning program. Supervised learning can also be used to make other predictions, such as braking duration and the maximum steering wheel angle applied by vehicle operators during turns.
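The reaction-time example can be sketched as gradient descent on a squared-error objective. The linear model and the synthetic proximity/reaction-time data are stand-ins invented for the sketch; the disclosure does not specify the model form.

```python
import numpy as np

rng = np.random.default_rng(1)
gap_m = rng.uniform(5, 50, size=200)                      # vehicle proximity (m)
reaction_s = 0.4 + 0.01 * gap_m + rng.normal(0, 0.02, 200)  # measured reaction times

w, b, lr = 0.0, 0.0, 1e-4
for _ in range(5000):
    pred = w * gap_m + b                 # predicted reaction times
    err = pred - reaction_s              # objective is mean(err ** 2)
    w -= lr * 2 * np.mean(err * gap_m)   # gradient descent step on w
    b -= lr * 2 * np.mean(err)           # gradient descent step on b

mse = float(np.mean((w * gap_m + b - reaction_s) ** 2))
```

The same loop structure applies to the other supervised targets mentioned (braking duration, maximum steering wheel angle), with different inputs and labels.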

“Alternatively or additionally, a vehicle-layer machine-learning operation may include using a trained machine-learning system to determine an output. Some embodiments use a trained neural network, for example, to predict the start time and duration of a braking event based on the detection of an object. Alternatively or additionally, the vehicle-layer machine-learning operation may use an unsupervised machine-learning system. For example, a deep belief neural network may be used to analyze sensor data such as vehicle proximity and braking information. The deep belief neural network may include unsupervised networks, such as autoencoders and restricted Boltzmann machines. Its outputs can then be divided into data clusters, and further operations can classify these data clusters into expected braking events and unexpected braking events.

“Whether a vehicle-layer machine-learning operation is a training operation or an operation that uses a trained system may depend on whether a training mode has been activated. Some embodiments allow multiple machine-learning operations to run simultaneously, where at least one of these machine-learning operations is a training operation and at least one other is an operation that uses the trained system.

“In some embodiments, the vehicle-layer machine-learning operation may determine a sensor inconsistency based on a machine-learning-driven analysis of sensor data. A neural network can be trained to run self-diagnostic tests using idle vehicle behavior and driving behavior in order to detect defective sensors that self-report as not defective. For example, a driver might swerve to avoid an object that the LIDAR sensor failed to detect. A vehicle computing device might first detect a swerve event based on the steering wheel angular velocity, and may then use a machine-learning system that takes sensor data (e.g., turn signal inputs and steering wheel velocity) as input to determine whether a collision avoidance event occurred. An application may then ask the driver to confirm that the collision avoidance event occurred. Once it has confirmed that the collision avoidance event took place, the vehicle application can perform various tasks, such as instructing other machine-learning systems to retrain without using LIDAR sensor input, warning the vehicle operator that the LIDAR device may be defective, and transmitting instructions to a top-view computing layer or local computing layer to ignore the LIDAR data.
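A rule-based sketch of the consistency check described above: flag the LIDAR as suspect when a swerve (high steering-wheel angular velocity) occurs with no LIDAR detection in the preceding window. The thresholds, lookback window, and record fields are illustrative assumptions; the disclosure uses a trained network for this step.

```python
SWERVE_DEG_PER_S = 180.0   # hypothetical swerve threshold
LOOKBACK_S = 2.0           # hypothetical window before the swerve

def check_lidar_consistency(steering_rate, t_now, lidar_detections):
    """steering_rate in deg/s; lidar_detections is a list of timestamps at
    which the LIDAR reported an object. Returns a diagnostic record or None."""
    if abs(steering_rate) < SWERVE_DEG_PER_S:
        return None                                   # no swerve event
    seen = any(t_now - LOOKBACK_S <= t <= t_now for t in lidar_detections)
    return {"event": "swerve",
            "time": t_now,
            "lidar_suspect": not seen}                # swerve without LIDAR warning

diag = check_lidar_consistency(250.0, t_now=100.0, lidar_detections=[42.0])
```

A `lidar_suspect` record would then trigger the follow-up actions listed above (operator query, retraining without LIDAR input, or an instruction to ignore the LIDAR data).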

“The results of a machine-learning operation may include the outputs and state values of that operation, as described further below. For example, the machine-learning results can include both a predicted acceleration pattern and the perceptron biases or weights used in the neural network. These machine-learning results can be used for further processing or sent to a local computing layer. In some embodiments, the machine-learning operation includes training a neural network for a task in which one or more training labels are generated based on an operator's response to an active learning query. This machine-learning operation can be done in conjunction with the autoencoder operation described above.

“In some instances, the vehicle computing device can determine a baseline threshold region for a vehicle using data from its set of sensors. The baseline threshold region may be represented by a specific set of values, a weighted sum, or a weighted product, and may include a combination of a velocity range, a temperature range, and a function dependent upon the velocity. The baseline threshold region may be defined in the measurement space of the sensors and may correspond to normal, non-anomalous vehicle or sensor function. In some embodiments, anomalous sensor measurements may be detected by determining whether they exceed the baseline threshold region. Some embodiments might find defective sensors after reviewing sensor history and using data triangulation, which allows compensation for defects or failures in one of the sensors. For example, some embodiments might determine that an engine temperature sensor has failed after its measurements exceed the baseline threshold region, and might display a message indicating the fault or send instructions to the local computing layer, or directly to the top-view computing layer, to discard data from the temperature sensor.
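A small sketch of such a baseline threshold region: per-sensor ranges plus one velocity-dependent bound, checked against a measurement snapshot. All ranges, sensor names, and the RPM bound are invented for the illustration.

```python
# Hypothetical baseline threshold region: per-sensor (low, high) ranges
BASELINE = {"velocity_kmh": (0.0, 200.0),
            "engine_temp_c": (70.0, 110.0)}

def outside_baseline(measurements):
    """Return names of sensors whose readings fall outside the baseline
    threshold region."""
    flagged = [name for name, (lo, hi) in BASELINE.items()
               if not lo <= measurements[name] <= hi]
    # a bound that is a function of velocity: RPM limit grows with speed
    if measurements["rpm"] > 60 * measurements["velocity_kmh"] + 3000:
        flagged.append("rpm")
    return flagged

faults = outside_baseline({"velocity_kmh": 50.0,
                           "engine_temp_c": 135.0,
                           "rpm": 2500})
```

A non-empty result would trigger the fault message or the instruction to discard that sensor's data described above.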

“In some embodiments, an agent may use machine learning to analyze sensor data and produce results to adjust a vehicle's control system. The results can also be transmitted to a base station or a server operating at a data center, cloud computing service, or serverless application. For example, the agent might include algorithms that detect an angular acceleration and classify the event as a swerve rather than a change of lanes. The agent may create an event record identifying the category of the angular acceleration event, associate the event with an event time and a vehicle profile, and include the event record in an event log of the vehicle profile. Another event record in the event log could indicate a pothole avoidance, a pothole strike, an event time, or a hard acceleration. The results of agents from individual vehicles can be combined with other features of a road network graph, where a feature can be represented as one or more indicators or values that indicate a road status. Some or all of the results can be processed with an encryption method similar to those described for block 308.

“In some embodiments, the process 300 may include updating a vehicle or operator profile based upon the vehicle computing layer results, as indicated by block 316. Some embodiments may update the vehicle's profile by recording discrete events in an event log or intervals in an event stream, which could include a set of timestamped events, other data, or any combination of these types of entries. For example, some embodiments might record an entry in the event log with a label “bicycle,” an angle value “30,” and a distance value “0.15.” Some embodiments can record static values in a profile, such as a vehicle identification number, engine type, vehicle type, and the like. Some embodiments might record, for example, that a vehicle's profile indicates an electric vehicle.

“Some embodiments may update a vehicle operator profile based on the operator's use of different vehicles and other information. Some embodiments may call one or more privacy functions to anonymize, inject noise into, or otherwise obscure vehicle operator information. In certain cases, such as when privacy functions are not used to obscure the identity of a vehicle operator, some embodiments can generate an operator profile and a corresponding operator profile identifier. Some embodiments may use driving patterns to determine the operator profile, and some embodiments may associate more than one operator profile with a vehicle. For example, some embodiments may create and characterize an operator profile based upon operator metrics, such as average or maximum vehicle speed, average or range of vehicle acceleration, average or range of vehicle deceleration, and average or range of vehicle cornering speeds, and may create a second profile based on detected differences in one or more of these operator metrics.

“Some embodiments may also merge or link different operator profiles based on a mutual association with the same vehicle or vehicle profile. For example, some embodiments may combine a first operator profile with a second operator profile upon determining that the two profiles list the same vehicle in their vehicle history entries and share events recorded in their event logs. Some or all profiles can be processed with an encryption method similar or identical to that described for block 308.

“In some embodiments, process 300 may include obtaining roadside sensor data using roadside sensors, as indicated by block 318. This may include using the roadside sensors 134 shown in FIG. 1 to collect roadside data. Roadside sensors can be used to identify and track vehicles, provide information about roads, and provide additional data about road features. Similar to the operations described for blocks 304 and 316 for vehicle sensor data, roadside sensor data can be encrypted, reduced in dimension, and analyzed to provide additional data to local computing layers for machine-learning operations. One or more local computing layers may, for example, examine a visual record to identify and label roadside features.

Roadside sensors such as traffic cameras and microphones may report roadside sensor data that can be associated with a vehicle. This association may be based on a correlation of GPS data, optical character recognition (OCR) of license plates, wireless beacons with identifiers transmitted to and received by roadside sensors, or other methods. Roadside sensors can detect vehicles with a pre-existing vehicle profile in some cases. Some embodiments can determine whether a vehicle detected on the road is identical to the vehicle with the pre-existing profile based on known geolocation data, visual data, and audio data. For example, some embodiments might detect the signal from a wireless tire pressure gauge at a traffic light and use OCR to determine the license plate sequence for that vehicle. As described further below, some embodiments may identify this signal as belonging to a first vehicle or recognize a matching license plate sequence, and may update the vehicle profile in response to the signal, which allows tracking of the presence of vehicles within close proximity of roadside sensors.

“In some embodiments, process 300 may include sending vehicle-layer results or roadside sensor results to a local computing layer, as indicated by block 320. Some embodiments may send the data wirelessly to a local data center, such as through a wireless base station, and data can also be sent to the local computing layer via a wired connection to the local computing data center. Local-layer machine-learning operations can be performed by one or more processors accessible to the local computing layer, and these operations may use more resource-intensive machine-learning methods than the vehicle computing layer.

“One or more local computing layer data centers or other computing devices may ingest any type of sensor data, results based on sensor data, or other data from a plurality of vehicles in the vehicle computing layer for machine-learning operations and other computations. The local computing layer devices may also ingest data from roadside sensors to perform machine-learning operations and other computations. For example, a data center at the local computing layer might receive a set of perceptron weights, machine-learning outputs, and geolocation data from a first training operation at the vehicle computing layer for a first vehicle, and a set of perceptron weights, machine-learning outputs, and geolocation data from a second training operation at the vehicle computing layer for a second vehicle. Some embodiments allow vehicle-layer or other results to be sent directly to other layers, with or without the assistance of a local computing layer. For example, vehicle-layer data may be sent directly to the top-view computing layer without passing through the local computing layer.

Block 322 indicates that some embodiments can include determining a road network graph using GPS data or geolocations from vehicle sensors. Some embodiments may use an application running on the local computing layer to determine and use a road network graph, where the road network graph can include a set of road segments. The road network graph may be acquired by a data center at the local computing layer by selecting it from an external road graph repository, such as an API for Here Maps. Some embodiments, for example, may create a boundary region by linking the geolocations to locations within a range of 1 m, and the boundary region can then be used to select the nodes and links within the region that form the road network graph.
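A simplified sketch of the boundary-region step: keep road-graph nodes within a 1 m range of a reported geolocation, then keep only the links between kept nodes. Planar coordinates in meters are assumed for brevity; real geolocations would need geodetic distance handling.

```python
import math

def build_subgraph(geolocations, nodes, links, radius_m=1.0):
    """nodes: {node_id: (x, y)}; links: list of (node_id, node_id) pairs.
    Returns the node ids within radius_m of any geolocation, plus the
    links whose both endpoints were kept."""
    keep = {nid for nid, (nx, ny) in nodes.items()
            if any(math.dist((nx, ny), g) <= radius_m for g in geolocations)}
    return keep, [(a, b) for a, b in links if a in keep and b in keep]

nodes = {"A": (0.0, 0.0), "B": (0.5, 0.5), "C": (50.0, 50.0)}  # toy road graph
links = [("A", "B"), ("B", "C")]
kept_nodes, kept_links = build_subgraph([(0.2, 0.1)], nodes, links)
```

The retained nodes and links form the road network graph (or sub-graph) that later blocks classify and cluster.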

“Alternatively or additionally, some embodiments may track vehicles' geolocations and connect them to create a road network graph. One or more road network graphs can be created by combining externally provided data with vehicle-provided GPS coordinates. Some embodiments may use parts of the road network graph to create sub-regions that include a number of road network nodes and segments, which allows further analysis of the specific routes in the road network traveled by one or more vehicles. Some embodiments may use road network shapes, traffic amounts, lengths, or other data to aid in machine-learning operations. A road network graph may also be generated by any of the computing devices in the top-view computing layer or vehicle computing layer.

“Some embodiments may use one or more machine-learning systems to classify regions of the road network graph. Some embodiments may use or train a decision tree, classification tree, classification neural network, and the like to classify sections of road in a road network during specific periods of time as high risk or low risk. Some embodiments can use other classifiers to classify road segments in a road network graph, such as identifying a school zone, a segment with a high prevalence of drunk drivers, or a segment associated with weather events such as rain. These classifications can be linked to or included in a road network graph in some embodiments. In some embodiments, a high-risk region may be designated or indicated as an additional location for a roadside device, such as a traffic light sensor, security sensor, or other roadside sensor.

“Some embodiments may cluster classes of road segments in a road network graph to account for different attributes such as inclement weather, collisions, construction occurrences, and car failures. Some embodiments may also search the vehicle computing layer to identify the on-incident vehicles that were involved in each of the clustered segments. Some embodiments may cluster classes of road segments across large geographical regions to identify specific events and attributes, which can help to reduce the problem of data sparsity for low-probability events like traffic collisions. One or more of the neural networks described elsewhere may be periodically retrained based upon one or more clusters of road segments or their corresponding on-incident vehicles. The local-layer machine-learning operation described for block 324 may also include instructions for performing an additional training operation if the number of on-incident vehicles increases for clusters that contain events such as wildlife collision events, fatal accidents, heavy rain, and the like. Some embodiments may also generate a similarity score for a region based on the road network graph of that region, which may allow comparisons of city traffic flow metrics such as vehicle density, pedestrian density, safety, street geometry, and collision events.

“Some embodiments could include performing one or more local-layer machine-learning operations executing on the local computing layer based upon the vehicle computing layer results and roadside sensor results in order to determine one or more local computing layer results, as indicated by block 324. Some embodiments may use an agent executing on the local computing layer's processors to conduct a machine-learning training operation. Alternatively, or in addition to using an agent to train a machine-learning system, the local layer can use the agent to determine an output. These outputs may include, but are not limited to, a predicted operator or vehicle behavior, a labeled driver or vehicle activity, a risk value, and a control-system adjustment value. Local computing layer results may include either or both of the machine-learning system state values and output values, which may be used for further operations or transmitted to other layers in the multilayer vehicle-learning infrastructure.

“Some local-layer machine-learning operations might include training a neural network, such as a convolutional long short-term memory (LSTM) neural network. The LSTM can be used to classify or label one or more time-series data points across a specified vehicle set and then use these classifications to create a vehicle profile. For example, a convolutional LSTM can be used to analyze vehicle video data and determine vehicle movement patterns based on vehicle driving conditions. A convolutional LSTM neural network can have a first convolutional neural network (CNN) layer that contains regularized multilayer perceptrons, a second LSTM layer, and a dense layer at the output of the convolutional LSTM network. Each perceptron in the CNN layer can be linked to a set of perceptrons, and a CNN filter contains a vector of weights and a bias value for each perceptron. A CNN may contain multiple layers. Many regularization options are available for a CNN, including introducing dropout, applying a DropConnect operation, using stochastic pooling, using artificial data, and adding a weight decay term. Regularization may also limit the magnitude of a CNN filter's weight vector. These regularization schemes are possible for other neural networks at any level of a multilayer vehicle-learning infrastructure, as described further below.

“In some cases, an LSTM neural network may contain a number of cells, and each cell may have a different set of perceptrons. An LSTM cell might have a first perceptron that acts as an input gate for receiving a value, a second perceptron that acts as a forget gate to determine how much of the value remains stored in the cell, and an output gate that determines an output value from the cell based on the value stored within it. Implementation of the LSTM network and features such as the forget gate may assist with the interpretation of time-series data.
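The gate structure above can be made concrete with a single LSTM cell step in numpy, following the standard LSTM formulation (the random weights and layer sizes are arbitrary for the sketch):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. x: input vector, h: previous hidden state,
    c: previous cell state. W, U, b stack the parameters of the input (i),
    forget (f), output (o), and candidate (g) gates."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # gate activations in (0, 1)
    c_new = f * c + i * np.tanh(g)   # forget gate scales the stored value
    h_new = o * np.tanh(c_new)       # output gate reads out the cell state
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), W, U, b)
```

Iterating `lstm_step` over a sensor time series is what lets the forget gate decide how much past vehicle state to retain at each step.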

“Some embodiments might include elements of attention mechanisms in implementing machine-learning methods executed on the local computing layer. Some embodiments can include attention mechanisms by using an agent to execute program code for a transformer machine-learning model. A transformer model can include an encoder module, which could include a first multi-head self-attention layer as well as a first feed forward layer. The multi-head self-attention layer can apply a softmax function to one or more vectors in order to generate a set of attention weights, where the vectors can be based upon an input sequence and the attention weights can be determined for the individual elements of the sequence. The input sequence and attention weights can then be combined in the first feed forward layer, which allows each event in the sequence to be weighed by its respective attention weight. A decoder section of the transformer can use the output of the feed forward layer. The decoder may include additional multi-head self-attention layers with weights and values different from those of the first layer, and one or more feed forward layers with weights and values different from those of the first feed forward layer. The output from the decoder section of the transformer can be used to categorize inputs or generate inferences. For example, if the input sequence is a time-based series of swerves, an agent may execute a neural network with an attention mechanism to determine whether the swerves in the interval are safe or risky based upon past and current vehicle operations. If a threshold number of risky swerves is exceeded, the agent can instruct the local computing layer to adjust at least one of the LIDAR warning range, steering wheel responsiveness, or anti-lock brake system responsiveness.

“Some embodiments may implement aspects of federated learning (FL) methods. A federating application that executes on the top-view computing layer or local computing layer of the distributed hierarchical learning model may choose a subset of vehicles to be trained during the implementation of an FL method. The federating application can transmit a global model to one or more computing devices of each selected vehicle. Each selected vehicle may have one or more agents that perform training operations on the global model based on their respective sensor data, sensor results, or other data. The vehicle-specific results may then be sent to the local computing layer (or top-view computing layer), where each vehicle-specific result may be encrypted and not interpretable by other applications or agents.

“For instance, each selected vehicle may transmit its state values (e.g., gradients and perceptron weights) from a training operation to a data center at a local computing layer after encrypting the state values into a cryptographic hash. A number of applications, on either the top-view or local computing layers, may modify a global model using the results from the selected vehicles, and the modified global model can then be transmitted to the appropriate vehicles for further training. For example, a data center may be given a cryptographic hash of the weight Wa1 for perceptron A from vehicle 1 and of the weight Wa2 for perceptron A from vehicle 2, both corresponding to perceptron A of the global model pushed to vehicles 1 and 2. An agent operating at the data center may respond by computing a summary measure Wa-avg using the cryptographic hashes of the weights Wa1 and Wa2. A summary statistic is one that summarizes a data set and can include a median, mode, average, or other statistic. An application running on the top-view computing or local computing layers may update the global model such that perceptron A in the updated global model has the weight Wa-avg before any further iterations of the federated learning process. An output based on Wa-avg (e.g., the updated global model) can be sent back to the applications running on the vehicle computing layer, and the applications may then execute their respective decryption operations.
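The averaging step can be sketched in the style of federated averaging (FedAvg). Encryption is omitted here for clarity (the text has the values arriving hashed or encrypted); the perceptron names and weights are invented for the example.

```python
import numpy as np

def federated_average(vehicle_updates):
    """vehicle_updates: list of dicts mapping perceptron name -> weight,
    one dict per selected vehicle. Returns the updated global weights,
    using the mean as the summary statistic."""
    names = vehicle_updates[0].keys()
    return {name: float(np.mean([u[name] for u in vehicle_updates]))
            for name in names}

updates = [{"A": 0.20, "B": -0.10},     # weights reported by vehicle 1
           {"A": 0.40, "B": -0.30}]     # weights reported by vehicle 2
global_model = federated_average(updates)   # perceptron A gets Wa-avg
```

The resulting `global_model` is what would be pushed back to the selected vehicles for the next federated learning round.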

“In certain embodiments, operations such as sending weights, or performing operations based upon the weights, can be done without encryption. If the weights are not encrypted, the calculated measurement(s), or outputs based upon the measurement(s), can be sent back and used by applications on the vehicle computing layer. Noise may also be introduced during the process; this may reduce the accuracy of the training operation but increase the privacy and security of the data provided to the FL training operations.
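The noise-injection idea above can be sketched as a simple Gaussian perturbation of the shared weights. This is only an illustration of the accuracy-for-privacy trade; a production system would use a formal differential-privacy mechanism with calibrated noise:

```python
import random

def add_noise(weights, scale=0.01, rng=None):
    """Perturb model weights before sharing them with the FL process.

    Larger `scale` means more privacy but a less accurate training
    signal. `rng` is seedable so the sketch is reproducible.
    """
    rng = rng or random.Random(0)
    return {p: w + rng.gauss(0.0, scale) for p, w in weights.items()}

noisy = add_noise({"A": 0.5}, scale=0.01)
```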

“Some embodiments may implement transfer learning between vehicle computing devices within the vehicle computing layer and between devices of other layers in the multilayer vehicle learning architecture. An agent may train one or more machine-learning systems on a prediction task; the stored weights, biases, and other data of these machine-learning systems can then be used for training on the next prediction task. Some embodiments allow for the transfer of weights, biases, and other values of a shared machine-learning model between vehicles in the vehicle computing layer, between local computing devices, or between top-view computing applications in a top-view application layer. For example, a first vehicle could transfer perceptron weights from a neural network to another vehicle.
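The weight transfer described above can be sketched as copying the parameters of shared layers from one vehicle's model into another's, leaving the target's task-specific layers untouched. The dict-of-dicts model representation here is a stand-in for whatever serialized format the vehicles actually exchange:

```python
def transfer_weights(source_model, target_model, shared_layers):
    """Copy weights/biases of the shared layers from a source vehicle's
    model into a target vehicle's model (transfer-learning warm start).

    Models are represented as {layer_name: {param_name: value}}.
    Layers not listed in `shared_layers` keep the target's own values.
    """
    for name in shared_layers:
        target_model[name] = dict(source_model[name])
    return target_model
```

The target vehicle would then fine-tune the copied layers on its own sensor data rather than training from scratch.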

“Some embodiments may use transfer learning within a federated framework; this can be called federated transfer learning (FTL). A first vehicle computing device could train a learning model on a first set of training data, and then transfer a first set of encrypted results with training gradients to a second vehicle computing device. A second set may be stored on the second vehicle computing device; this second set may consist of encrypted results derived from a training operation using a second data set. The first and second sets of training data might not have mutually exclusive features (i.e., they may share some features). The second vehicle can then transfer both the first and second sets of encrypted results to the data center at the local computing layer. The local computing layer’s data center may then update the global model using the training gradients and shared features from the first and second vehicles, and send the updated global model back to the first and second vehicles.

“Some embodiments may employ multiple classifier systems, such as an ensemble-learning method. The ensemble-learning method can include combinations of different machine-learning methods, and some embodiments may use elements of one type of learning model within another. For example, some embodiments include operations to add a multi-head self-attention layer over the outputs of a recurrent neural network (RNN). In some embodiments, both the attention weights and the attention results can be used as inputs to additional layers in a machine-learning prediction or training operation.
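A single-head version of the attention step described above can be sketched as follows. This assumes queries, keys, and values are simply the raw RNN output vectors (no learned projections), so it is a bare illustration of scaled dot-product attention, not a full multi-head layer:

```python
import math

def self_attention(seq):
    """Scaled dot-product self-attention over a sequence of RNN output
    vectors. Returns (outputs, attention_weights); both can feed
    additional layers, as the text notes.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    d = len(seq[0])  # vector dimension, used for score scaling
    outputs, weights = [], []
    for q in seq:
        scores = [dot(q, k) / math.sqrt(d) for k in seq]
        m = max(scores)                       # stabilized softmax
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        attn = [e / total for e in exps]
        weights.append(attn)
        # Output = attention-weighted sum of the value vectors.
        outputs.append([sum(a * v[i] for a, v in zip(attn, seq))
                        for i in range(d)])
    return outputs, weights
```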

Some embodiments include using one or more learning methods to predict future vehicle behavior based upon sensor data from multiple vehicles. For example, in some embodiments a local computing layer might train and then use an ensemble machine-learning system that includes an attention model and a CNN. This ensemble system uses vehicle velocity data, vehicle geolocation data, and LIDAR-detected object data from multiple vehicles to predict whether a vehicle with a specific vehicle profile will be involved in an accident within a one-month period. Local computing layer results may include the internal state variables (e.g., training biases and weights) and the outputs of these training and use operations.

“Some embodiments include training a machine-learning program, or using a trained version of one or more of these machine-learning programs. Machine-learning systems can be used to predict and categorize vehicle behavior, or operator behavior, based on sensor data from multiple vehicles. In some embodiments, the data center at a local computing layer might train and then use convolutional neural networks with attention mechanisms to review inward-facing camera feeds and determine whether a vehicle operator was looking out of the front window with both hands on the steering column. Local computing layer results may include the internal state variables (e.g., biases or weights), as well as outputs from the trained networks.

“Some embodiments could include training and using one of the above learning methods to determine a risk associated with operating a vehicle, or to adjust vehicle operations to reduce that risk. Learning methods can be based on sensor data or other data available to the local computing layer. These data could include data stored in vehicle profiles, vehicle operator profiles, road network graph data, or the like. One or more segments or nodes within a road network graph can be assigned a risk value representing the risk of a collision or other hazardous incident, and this risk value can be determined using a variety of methods. The relevant attributes of a place may include the average vehicle speed within that place, the average vehicle acceleration within that place, the average vehicle impulse within that place, and the average frequency with which vehicles engage brakes within that place. Some embodiments can be trained to increase the risk value of a road segment in response to increasing vehicle speeds and decreasing average vehicle accelerations while traveling on that road segment.
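A toy version of such a risk score from aggregate segment attributes might look like the following. The linear form and the coefficient values are purely illustrative (the patent text only says which direction each attribute should push the risk, so this is a hedged sketch, not the actual model):

```python
def segment_risk(avg_speed, avg_accel, brake_rate,
                 w_speed=0.5, w_accel=0.3, w_brake=0.2):
    """Illustrative risk score for a road segment.

    Consistent with the text: higher average speeds and higher braking
    frequency raise the risk, while higher average accelerations
    (smooth traffic flow) lower it. Coefficients are made up.
    """
    return w_speed * avg_speed - w_accel * avg_accel + w_brake * brake_rate
```

A trained model would learn such weightings from collision and telemetry data rather than using fixed coefficients.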

“In addition, machine-learning algorithms may be trained so that a road segment carries more risk due to the presence of cyclists, pedestrians, and other large moving entities capable of suffering or causing harm in a collision with the vehicle. A vehicle’s proximity data may be used to determine the density of such entities, and an increase in density can result in an increase in the risk. For example, some embodiments increase the risk value by 1% for each large moving entity detected by a vehicle’s LIDAR scanner within a 100-meter segment of road. A vehicle’s risk value may also increase depending on how fast moving objects around it are traveling; for instance, the risk value may rise if one or more cyclists travel faster than the vehicle on a given road segment. Similar methods can be applied to other vehicle types, such as battery-powered electric scooters and bicycles that users can rent through a native app. The providers of these services might supply geolocation data indicating the speed, intensity, and other information about their transportation devices, and based on this data, some embodiments can determine risks for roads and other places.
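The 1%-per-entity example above might be implemented as a compounding adjustment like the sketch below. Whether the increase is compounded or simply added is not specified in the text, so the multiplicative form here is an assumption:

```python
def adjusted_risk(base_risk, entities_in_segment, pct_per_entity=0.01):
    """Raise a road segment's risk value by a fixed percentage (1% in
    the text's example) for each large moving entity detected by the
    vehicle's LIDAR within the 100-meter segment.

    Compounds the increase per entity; an additive variant would use
    base_risk * (1.0 + pct_per_entity * entities_in_segment).
    """
    return base_risk * (1.0 + pct_per_entity) ** entities_in_segment
```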

“Some embodiments may compute one or more control-system adjustment values directly from the known values with regard to one or more vehicles, one or more vehicle operators, and one or more places through which the vehicles are traveling. Some embodiments can reduce the risk associated with driving a vehicle through certain places, represented as a road network graph, by applying one or more machine-learning operations. To train a machine-learning system, some embodiments might use an objective function. The objective function could be based on the vehicle geolocations and control-system data, and it may also include vehicle adjustment parameters that can be modified within a given parameter space. The objective function can be modified in some embodiments so that the determined risk level is not dependent on any of the road risk elements, control-system data, or vehicle proximity data. Some embodiments may include information stored in the vehicle profile, or the corresponding operator profile, during training, and then use a neural network to adjust control-system values. For example, some embodiments may use machine-learning systems to determine an accelerator response rate: the rate at which the vehicle accelerates when an operator presses the accelerator pedal, determined by a combination of the inputs of other moving objects, the force with which the vehicle operator presses the accelerator pedal, and the number of recorded accidents in the vicinity of the vehicle’s geolocation.
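The accelerator-response-rate example might be shaped like the sketch below, where the response to pedal force is damped by nearby moving objects and local accident history. The functional form and coefficients are invented for illustration; the patent describes only which inputs feed the adjustment:

```python
def accelerator_response_rate(pedal_force, nearby_objects,
                              recorded_accidents, base_rate=1.0):
    """Illustrative control-system adjustment value.

    The harder the pedal press, the stronger the response; the more
    moving objects nearby and the more recorded accidents around the
    vehicle's geolocation, the more the response is damped.
    Coefficients (0.1, 0.05) are made-up sketch values.
    """
    damping = 1.0 + 0.1 * nearby_objects + 0.05 * recorded_accidents
    return base_rate * pedal_force / damping
```

In the embodiments described, a trained neural network would produce such adjustment values, rather than a fixed formula.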

Click here to view the patent on Google Patents.

How to Search for Patents

A patent search is the first step to getting your patent. You can do a Google patent search or a USPTO search. Patent-pending is the term for a product that is covered by a pending patent application, and you can search the Public PAIR system to find the application. After the patent office approves your application, you will be able to do a patent number lookup to locate the issued patent; your product is then patented. You can also use the USPTO search engine (see below for details), and you can get help from a patent lawyer. Patents in the United States are granted by the United States Patent and Trademark Office (USPTO), which also reviews trademark applications.

Are you interested in similar patents? These are the steps to follow:

1. Brainstorm terms to describe your invention, based on its purpose, composition, or use.

Write down a brief but precise description of the invention. Don’t use generic terms such as “device,” “process,” or “system.” Consider synonyms for the terms you chose initially, and take note of important technical terms as well as keywords.

Use the questions below to help you identify keywords or concepts.

  • What is the purpose of the invention? Is it a utilitarian device or an ornamental design?
  • Is the invention a way to create something or perform a function? Is it a product?
  • What is the composition and function of the invention? What is the physical composition of the invention?
  • What is the purpose of the invention?
  • What are the technical terms and keywords used to describe the invention’s nature? A technical dictionary can help you locate the right terms.

2. These terms will allow you to search for relevant Cooperative Patent Classifications using the Classification Search Tool. If you are unable to find the right classification for your invention, scan through the classification’s class schemas (class schedules) and try again. If you don’t get any results from the Classification Text Search, consider substituting synonyms for the words you used to describe your invention.

3. Check the CPC Classification Definition to confirm the CPC classification you found. If the selected classification title has a blue box with a “D” at its left, the hyperlink will take you to a CPC classification definition. CPC classification definitions will help you determine the applicable classification’s scope so that you can choose the most relevant one. These definitions may also include search tips or other suggestions that could be helpful for further research.

4. Use the Patents Full-Text Database and the Image Database to retrieve patent documents that include the CPC classification. By focusing on the abstracts and representative drawings, you can narrow your search down to the most relevant patent publications.

5. Review this selection of patent publications carefully for any similarities to your invention, paying close attention to the claims and the specification. Refer to the references cited by the applicant and the patent examiner to find additional relevant patents.

6. Retrieve published patent applications that match the CPC classification you chose in Step 3. Use the same search strategy from Step 4 to narrow your results to the most relevant patent applications by reviewing the abstracts and representative drawings on each page. Next, examine the published patent applications carefully, paying special attention to the claims and drawings.

7. You can search for additional US patent publications by keyword searching in the AppFT or PatFT databases, by classification searching of non-U.S. patents as described below, and by using web search engines to find non-patent literature disclosures about inventions. Here are some examples:

  • Add keywords to your search. Keyword searches may turn up documents that were not well categorized or were missed during the classification search in Step 2. US patent examiners, for example, often supplement their classification searches with keyword searches. Consider using technical engineering terminology rather than everyday words.
  • Search for foreign patents using the CPC classification. Re-run the search using international patent office search engines such as Espacenet, the European Patent Office’s worldwide patent publication database of over 130 million patent publications. Other national databases are also available.
  • Search non-patent literature. Inventions can be made public in many non-patent publications. It is recommended that you search journals, books, websites, technical catalogs, conference proceedings, and other print and electronic publications.

To review your search, you can hire a registered patent attorney to assist. A preliminary search will help you better prepare to discuss your invention and related inventions with a patent attorney, so that the attorney does not spend too much of your time or money on patenting basics.

Download patent guide file – Click here