Artificial Intelligence – Roberto Sicconi, Malgorzata Stys, Tim Chinenov, Telelingo dba Dreyev

Abstract for “Methods and systems for using artificial intelligence to monitor, correct, or evaluate user attentiveness”

“In one aspect, a system for using artificial intelligence to monitor, correct, and evaluate user attentiveness includes a forward-facing camera. The forward-facing camera is configured to capture a video feed of a field of view on a digital display screen. The system includes a user alert mechanism configured to issue a directional alert to a user. A processing unit in communication with the forward-facing camera and at least the user alert mechanism includes a screen-location-to-spatial-location map and a motion detection analyzer. The motion detection analyzer is designed to detect a rapid parameter shift on the digital display screen, determine a screen location of the rapid parameter shift, retrieve a spatial location from the screen-location-to-spatial-location map as a function of the screen location, and generate the directional alert using the spatial location.

Background for “Methods and systems for using artificial intelligence to monitor, correct, or evaluate user attentiveness”

After a decade of steady but slow declines, car accidents are rising in the US. Safer cars and better driver-assistance equipment can help reduce accidents, but the cost of distracted driving outweighs these benefits. State bans on cell phone use in cars do not appear to work, and restrictions on calls and mobile app use are easily circumvented. Current solutions assess vehicle dynamics and monitor driving behavior, relating driving risk directly to speed, braking, and cornering. However, they do not take into account weather conditions, traffic conditions, the attention a driver is paying to the road, or the driver's ability to manage the vehicle in unplanned situations. Driving-assist solutions also tend to ignore fatigue, stress, well-being, and the ability to react in an emergency.

“A system that uses artificial intelligence to monitor, correct, and evaluate user attentiveness is described in one aspect. The system includes a forward-facing camera that can capture a video feed of a field of view on a digital display screen. A user alert mechanism is included in the system that can issue a directional alert to a user. The system also includes a processing unit in communication with the forward-facing camera and the at least one user alert mechanism. A screen-location-to-spatial-location map is part of the processing unit. The system also includes a motion detection analyzer integrated into the processing unit, which can detect a rapid parameter shift on the digital display screen, determine its screen location, retrieve a spatial location from the map, and generate the directional alert using that spatial location.

“Another aspect is a method of using artificial intelligence to monitor, correct, and evaluate user attentiveness. The method includes capturing a video feed from a field of view on a digital display screen. A motion detection analyzer detects a rapid parameter shift on the digital display screen and determines the screen location of the rapid parameter shift. The motion detection analyzer retrieves a spatial location from a screen-location-to-spatial-location map as a function of that screen location, and generates a directional alert using the spatial location.
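
As a non-limiting illustration of this flow, the sketch below detects a sudden change in a video frame, maps its screen location to a coarse spatial direction, and emits a directional alert. The function names, thresholds, and the use of simple frame differencing are assumptions made for illustration; they are not the implementation described in this disclosure.

```python
# Illustrative sketch (not the patented implementation): detect a rapid change
# in a video frame, map its screen location to a spatial direction, and emit
# a directional alert. Names and thresholds are assumptions for illustration.
import numpy as np

def detect_rapid_shift(prev_frame: np.ndarray, frame: np.ndarray,
                       threshold: float = 40.0):
    """Return (row, col) of the largest frame-to-frame change, or None."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    if diff.max() < threshold:
        return None                      # no rapid parameter shift detected
    return np.unravel_index(np.argmax(diff), diff.shape)

def screen_to_spatial(row: int, col: int, shape: tuple) -> str:
    """Toy screen-location-to-spatial-location map: left / center / right."""
    _, width = shape
    if col < width / 3:
        return "left"
    if col > 2 * width / 3:
        return "right"
    return "center"

def directional_alert(prev_frame, frame) -> str | None:
    """Generate a directional alert based on where the rapid change occurred."""
    hit = detect_rapid_shift(prev_frame, frame)
    if hit is None:
        return None
    direction = screen_to_spatial(*hit, frame.shape)
    return f"Rapid change detected on the {direction.upper()} of the field of view"

# Usage with two synthetic grayscale frames:
prev = np.zeros((120, 320), dtype=np.uint8)
curr = prev.copy()
curr[60:80, 10:40] = 255                 # simulate sudden motion at screen left
print(directional_alert(prev, curr))
```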

“These and other aspects and features of non-limiting embodiments will become apparent to those skilled in the art upon review of the following description of specific embodiments of this invention in conjunction with the accompanying drawings.”

“Embodiments described herein include an intelligent driver attention monitoring system. These systems may imitate the behavior of a reliable passenger who can assess the risk of the driving context (in relation to current speed, acceleration and braking, cornering, weather, traffic conditions, etc.) and compare it with the driver's attention level. Such a virtual passenger may alert drivers if they look away from the road too long or too often, or if the car is zigzagging along the lane. For example, embodiments can detect motion in video feeds; this may be used to alert the driver to sudden changes in motion, in a way analogous to, or supplementing, peripheral vision for distracted or vision-impaired users.

“Over the past years, basic telematics have been introduced in order to encourage safe driving through usage-based insurance (UBI). The disclosure describes embodiments that may be an evolution of UBI telematics systems; they combine analytics of telematics data, driver behavior, and performance to calculate driving risk scores. The present invention provides personalized, real-time feedback to drivers to help prevent distractions from causing dangerous situations.

“Systems described herein can evaluate various factors, including but not limited to: (a) the attentiveness of a person while performing a task or communicating with another person or machine; (b) an estimated level of risk associated with the surrounding environment; or (c) a margin of attention between the level of attention that is available and the level required for the task or communication. In some systems described herein, such evaluation can be used to provide and/or generate useful feedback about the behavior of the person being observed. An evaluation may be used to provide suggestions to the person being observed to help them change their behavior so as to minimize or reduce risk. Artificial intelligence (AI) can be used to transform observed patterns into behavior profiles, refine them over multiple observations, and/or create group statistics for similar situations. One example application of the methods described herein is driving risk profiling and prevention of accidents caused by distracted and drowsy drivers. These methods can perform driving risk profiling or accident prevention using machine vision and/or AI to create digital assistants with copilot expertise. This disclosure may benefit all drivers, not just teenage or elderly drivers. It may also benefit fleet management companies, car insurance providers, ride-sharing and rental car companies, as well as healthcare providers, to enhance, tune, and/or personalize services.
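
As a rough illustration of item (c), the attention margin can be treated as the difference between the attention available and the attention the current context requires. The toy model below uses invented scales and weights purely to make the idea concrete; it is not the scoring described in this disclosure.

```python
# Toy attention-margin calculation (illustrative only; scales/weights assumed).
def required_attention(speed_kmh: float, weather_factor: float,
                       traffic_factor: float) -> float:
    """Estimate attention demanded by the driving context on a 0..1 scale."""
    base = min(speed_kmh / 130.0, 1.0)          # faster driving demands more attention
    return min(base * weather_factor * traffic_factor, 1.0)

def attention_margin(available: float, speed_kmh: float,
                     weather_factor: float = 1.0,
                     traffic_factor: float = 1.0) -> float:
    """Positive margin: attention to spare; negative: feedback may be warranted."""
    return available - required_attention(speed_kmh, weather_factor, traffic_factor)

# Example: a moderately attentive driver (0.6) at highway speed in rain.
print(round(attention_margin(0.6, speed_kmh=110, weather_factor=1.3), 2))
```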

“Embodiments may be used for driver attention monitoring and/or smart driver monitoring in order to address the growing problem of unsafe driving. They can cover a range of attention issues, from distracted driving to feeling drowsy on boring, long stretches of road. Mobile devices and their apps are not designed to prevent distraction; they ignore whatever stress level the driver may be experiencing and can demand full attention at a split-second's notice. Drivers relying on sophisticated driver-assistance features may also be more prone to drowsiness, which is another major cause of fatal accidents.

“Embodiments described herein may also provide an electronic driving record (EDR) implementation. Current monitoring solutions do not support secure data access rights, and they lack configuration flexibility and real-time feedback mechanisms. An EDR allows drivers to grant or restore access rights dynamically while driving or at the end of a trip, so they can specify who has access to which EDR data and when. EDR data may be used to create precise driving behavior models, record health-related measures, and log the attention levels of the driver. In contrast to current systems, these embodiments may address privacy concerns associated with UBI data collection.

“Now referring to FIG. 1, a graphical illustration 100 of driver performance versus emotional arousal is shown. The illustration shows that driving performance is at its peak under 'normal' conditions, which are represented by a 'safe zone' in the central part of the illustration. The illustration also shows that fatigue and drowsiness can lead to decreased performance and an inability to respond to difficult situations, while excessive excitement, anxiety, and fear may likewise reduce the ability to drive correctly. A 'driving risk' horizontal line, running from the 'too relaxed' label to the 'too excited' label, sometimes moves up or down very quickly, changing the attention 'margin' of the driver. Systems described in this disclosure can continuously estimate the driver's attention margin. In some embodiments, the system may provide feedback to help the driver adjust behavior and avoid dangerous situations. Graphical illustration 100 may illustrate the Yerkes-Dodson law, which psychologists use to link performance and arousal. According to this exemplary illustration, humans are expected to drive most effectively when they are in the 'safe zone,' free from distractions and excessive excitement.

Referring to FIG. 2, a process flow chart 201 illustrates a number of input and/or analysis steps 203-209 that can be performed by an AI 211, including without limitation an AI integrated into systems as discussed in further detail below, as well as at least one output step 215 that an AI 211 might perform as described below. For illustration purposes only, embodiments of AI 211 may act in a way analogous to a smart passenger or copilot that alerts a driver to hazards or other items of concern on the road ahead. AI 211 may receive outside-condition inputs 205 that indicate one or more conditions and phenomena outside a vehicle being operated by a driver; this could include weather conditions, road conditions, and the behavior of other drivers, pedestrians, bicyclists, and/or animals on the road or in any other area through which the vehicle or driver navigates. In other words, AI 211 may collect data analogous to "watching the scene" around and/or in front of the vehicle and driver. AI 211, and/or a system or device implementing AI 211, can perform one or more assessments to assess a level of risk 209 as a function of outside-condition inputs 205, for instance using processes for risk assessment as described elsewhere in this disclosure. AI 211 can also receive driver-related inputs 203 from one or more sensors, cameras, or other means that detect information about a driver; an AI 211 receiving driver-related inputs 203 could be characterized as using such inputs to monitor the driver. AI 211 can perform one or more analyses using driver-related inputs 203 in order to determine one or several facts about the driver's current or future performance; for example, AI 211 could determine the driver's attention level 207 using such inputs. AI 211 can combine the above inputs and/or analysis results with one or several elements of stored information. AI 211 can then use the inputs, analyses 203-209, and/or stored data to generate one or several outputs for the driver. For instance, AI 211 could interact with the driver 215 to inform him or her of the results of the input and/or analytical processes 203-209, or of processes using and/or comparing stored information with such inputs and analyses. AI 211 can use, or be combined with, machine-learning processes to adjust monitoring and reasoning to the driver's habits and preferences.

“Referring to FIG. 3, an AI 211 as described in FIG. 2 may be used to make one or more decisions regarding a driver 305 in a system 307, as a function of inputs and/or devices 308-313; AI 211 can be implemented on a processing unit 315. Processing unit 315 may include any computing device described in this disclosure, or any combination of such computing devices. Processing unit 315 can be connected to a network as described in this disclosure; the network could be the Internet. Processing unit 315 could include, for example, a first server or cluster of servers at a first location and a second server or cluster of servers at a second location. Processing unit 315 could include computing devices that are dedicated to specific tasks; for example, one computing device or cluster of computing devices may be used to operate queues as described below, while a separate computing device or cluster may be used to store and/or produce dynamic data, as explained in more detail below. Processing unit 315 could include one or more computing devices dedicated to data storage, security, and traffic distribution for load balancing. Processing unit 315 can distribute one or more computing tasks, as described below, across a plurality of computing devices of processing unit 315, which may work in parallel, in series, redundantly, or in any other way used to share tasks or memory among computing devices. A 'shared nothing' implementation of processing unit 315 is possible, that is, an architecture in which data is stored at the worker level; this may allow scaling of the system and/or processing unit 315. Processing unit 315 could include a mobile and/or portable computing device such as a smartphone, tablet, or laptop, and may also include a computing device mounted on or inside a vehicle.

“With continued reference to FIG. 3, processing unit 315, and any device used as processing unit 315 as described in this disclosure, may be configured to repeat a single step or sequence until a desired outcome is achieved. Iterative repetition may be used to reduce or decrement one or more variables and/or to divide a larger task into smaller, iteratively addressed tasks. Processing unit 315, and any device described as processing unit 315, may perform any step, sequence, or task described in this disclosure in parallel, including performing multiple steps simultaneously using parallel threads, processor cores, or other resources. Task division between parallel threads or processes can be done according to any protocol that allows for the division of tasks among iterations. After reading the entirety of this disclosure, persons skilled in the art will recognize the various ways in which steps, sequences, processing tasks, and/or data can be divided, shared, or otherwise dealt with using iteration and/or parallel processing.

“Still referring to FIG. 3, camera 308 can include any device capable of taking optical images of a driver using light on and off the visible spectrum. Processing unit 315 can receive and/or classify data about driver 305 via camera 308, including data describing driver 305 and data describing the orientation of the driver's eyes and/or face. To analyze the driver's attention, face and gaze direction may be recorded and/or classified according to rotation angles (yaw, pitch, roll, and lateral eye movement) 309 and region of interest (road ahead; left mirror; right mirror; rearview mirror; instrument cluster; passenger seat; center dash; phone in hand). These data can be used to model driver behavior 310 using AI and/or machine-learning techniques as described in this disclosure. Data from vehicle dynamics, including acceleration, speed, and rotations per minute (RPM), may be used to evaluate vehicle dynamics 311. These data may be collected from any number of sensors; processing unit 315 may communicate with such sensors using any wired or wireless protocol. For example, processing unit 315 may have one or more sensors embedded in it or in an associated smartphone, and the sensors may include at least one road-facing camera that can detect objects and provide distance-evaluation capabilities. Vehicle dynamics data may also be received and/or collected from vehicle buses, such as OBD II or CAN buses. Processing unit 315, or an AI implemented thereon, can receive dynamic trip information 312, including without limitation traffic, weather, and other information, via a network such as the Internet.

“With continued reference to FIG. 3, processing unit 315 can perform any classification, analysis, and/or processing step described in this disclosure using machine-learning processes. A machine-learning process automates the use of a body of data known as 'training data' and/or a 'training data set' to generate an algorithm that is performed by a computing device/module to produce outputs given data provided as inputs; this is in contrast to non-machine-learning software programs, in which the commands to be executed are determined in advance by a user and written in a programming language.

“With continued reference to FIG. 3, training data is data containing correlations that a machine-learning process can use to model relationships among two or more types of data elements. Training data can include multiple data entries, each of which may represent a set of data elements that were received, recorded, and/or generated together; data elements may be correlated by shared existence in a given data entry, by proximity within a given data entry, or the like. Multiple data entries may evince one or more trends between data elements: for example, a higher value of a first data element belonging to a particular category of data element may be more likely to correspond to a higher value of a second data element belonging to a different category, indicating a possible mathematical relationship or proportion between the two categories. Multiple categories of data elements could be related in training data according to various correlations; correlations can indicate causal and/or predictive links between categories of data elements, and these relationships may be modeled, for instance as mathematical relationships, using machine-learning processes. Training data may be formatted and/or organized by categories of data elements, for example by associating data elements with descriptors corresponding to categories of data elements. Training data can include data entered on standard forms by persons or processes, such that entry of a given data element on a given form may be mapped to a descriptor of a category. Elements in training data can be linked to descriptors of categories using tags, tokens, or other data elements; for instance, training data could be provided in fixed-length formats, formats linking data positions to categories such as comma-separated values (CSV), and/or self-describing formats such as XML, enabling devices or processes to detect categories of data.
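
As a concrete, hypothetical illustration of such descriptor-based formats, the snippet below reads CSV training data in which column headers act as category descriptors; the column names and values are invented for illustration and are not taken from the disclosure.

```python
# Hypothetical example: training data as CSV, with headers serving as
# category descriptors (column names are assumptions, not from the disclosure).
import csv
import io

raw = io.StringIO(
    "eyes_off_road_s,speed_kmh,braking_g,risk_label\n"
    "0.4,52,0.10,low\n"
    "2.1,95,0.45,high\n"
    "1.2,70,0.20,medium\n"
)

entries = list(csv.DictReader(raw))       # each row is one training-data entry
for entry in entries:
    # descriptors (keys) let downstream processes detect data categories
    print(entry["risk_label"], float(entry["eyes_off_road_s"]))
```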

“Alternatively or additionally, and still referring to FIG. 3, training data could include one or more elements that are not categorized; that is, the data may not be formatted or may not contain descriptors for some elements. Machine-learning algorithms and/or other processes can sort training data according to one or more categorizations using, for example, tokenization, detection of correlated raw data values, and/or natural language processing algorithms. For instance, in a corpus of text, phrases making up an n-gram containing a statistically significant number of compound words (e.g., nouns modified by other nouns) may be identified; such an n-gram could be classified as an element of language, such as a 'word,' to be tracked similarly to single words, generating a new category as a result of statistical analysis. In a data entry including textual data, a person's name may be identified by reference to a dictionary, list, or other compendium of terms, permitting ad-hoc categorization by machine-learning algorithms and/or automated association of data in the entry with descriptors or other categorized data. Automated categorization of data entries may enable the same training data to be usable by multiple machine-learning algorithms, as further explained below. Processing unit 315 can use training data to correlate any input data to any output data; training data used by processing unit 315 may correlate any input data described in this disclosure to any output data described in this disclosure. As a non-limiting illustrative example, a person might observe a number of drivers performing vehicular maneuvers, such as driving on a training track and/or on public streets; these observations may then be combined with elements of driver-related data and/or vehicular dynamics data by a computing device such as processing unit 315 to create one or more entries of training data for machine-learning processes. Training data could also include sensor data recorded during, before, and/or after accidents, combined with information about the circumstances of the accident (e.g., driver-related and/or vehicular dynamics data) to create one or more entries of training data to be used in machine-learning processes. Training data could further correlate data about conditions outside a vehicle, such as road conditions, pedestrian behavior, and bicyclist behavior, with information regarding risk levels; such data can be collected using systems such as the ones described herein, for example data recorded during, before, and after collisions or other accidents. This disclosure contains other examples of training data and/or correlations, and those skilled in the art will be aware of many examples of training data that can be used consistently with the present disclosure.

“Still referring to FIG. 3, processing unit 315 can be designed and configured to create a machine-learning model using techniques for development of linear regression models. Linear regression models may include ordinary least squares regression, which aims to minimize the square of the difference between predicted outcomes and actual outcomes according to an appropriate norm for measuring such differences (e.g., a vector-space distance norm); the coefficients of the resulting linear equation may be modified to improve minimization. Linear regression models may include ridge regression methods, in which the function to be minimized includes the least-squares function plus a term multiplying the square of each coefficient by a scalar amount to penalize large coefficients. Linear regression models may include the least absolute shrinkage and selection operator (LASSO), in which ridge regression is combined with multiplying the least-squares term by a factor of 1 divided by twice the number of samples. Linear regression models may include a multi-task lasso model, wherein the norm applied to the least-squares term of the lasso model is the Frobenius norm, amounting to the square root of the sum of squares of all terms. Linear regression models may include an elastic net model or a multi-task elastic net model. A polynomial regression model is a linear regression model in which a polynomial equation (e.g., a quadratic, cubic, or higher-order equation) provides the best predicted output/actual output fit; similar methods to those described above may be applied to minimize error functions, as will be apparent to persons skilled in the art upon reviewing this disclosure.
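
To see these model families side by side, the sketch below fits the variants named above on synthetic data using scikit-learn; the library choice, the feature interpretations, and the hyperparameters are assumptions made only for illustration.

```python
# Illustrative comparison of the linear-model families named above
# (synthetic data; alpha values and features are arbitrary assumptions).
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))          # e.g., speed, eyes-off-road, braking
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, 200)  # toy "risk"

models = {
    "ordinary least squares": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "lasso": Lasso(alpha=0.01),
    "elastic net": ElasticNet(alpha=0.01, l1_ratio=0.5),
    "polynomial (degree 2)": make_pipeline(PolynomialFeatures(2), LinearRegression()),
}

for name, model in models.items():
    model.fit(X, y)
    print(f"{name:24s} R^2 = {model.score(X, y):.3f}")
```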

“Still referring to FIG. 3, machine-learning models may be created using other or additional artificial intelligence methods, including without limitation an artificial neural network. The process of 'training' such a network may create connections between nodes: elements from a training data set are applied to the input nodes, and a suitable training algorithm, such as Levenberg-Marquardt, conjugate gradient, simulated annealing, or other algorithms, is used to adjust the connections and weights between nodes in adjacent layers to produce the desired output values. This process is also known as deep learning. Such a network can be trained using training data as described above.
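
A minimal sketch of such training, under the assumption of a tiny two-layer network trained with plain gradient descent (rather than Levenberg-Marquardt or conjugate gradient), is shown below; the data and architecture are invented for illustration only.

```python
# Minimal neural-network training sketch (assumed architecture; plain gradient
# descent is used here instead of Levenberg-Marquardt to keep the example short).
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(256, 2))                      # toy driver features
y = ((X[:, 0] + X[:, 1]) > 1.0).astype(float)[:, None]    # toy "inattentive" label

W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)          # input -> hidden weights
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)          # hidden -> output weights
lr = 0.5

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                              # hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))              # sigmoid output
    grad_out = (p - y) / len(X)                           # cross-entropy gradient
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(0)
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)             # backprop through tanh
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(0)

print("training accuracy:", ((p > 0.5) == (y > 0.5)).mean())
```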

“Still referring to FIG. 3, machine-learning algorithms can include supervised machine-learning algorithms. Supervised machine-learning algorithms, as used herein, are algorithms that receive a training set relating a number of inputs to a number of outputs and seek one or more mathematical relations relating inputs to outputs, where each relation is optimal according to some criterion specified by the algorithm via a scoring function. For example, a supervised learning algorithm might take sensor data, and/or data generated via analysis thereof, as inputs, and degrees of risk and/or driver inattentiveness as outputs, with a scoring function representing a desired relationship between inputs and outputs; the scoring function could, for instance, seek to maximize the probability that a given input and/or combination of inputs is associated with a given output, so as to minimize the probability that a given input is not associated with that output. A scoring function may be expressed as a risk function representing an 'expected loss' of the relation between inputs and outputs, where loss is computed as an error function representing the degree to which a prediction is incorrect when compared with a given input/output pair in the training data. After reading the entirety of this disclosure, persons skilled in the art will be able to identify various supervised machine-learning algorithms that could be used to determine the relation between inputs and outputs.
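
The following toy example illustrates a loss (error) function used as a scoring criterion, comparing the risk predictions of two hypothetical relations against labeled training outputs; the numbers are invented for illustration.

```python
# Tiny illustration of a loss/error function used as a scoring criterion:
# mean squared error between predicted and labeled risk (values assumed).
def mse(predicted, actual):
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)

labeled_risk   = [0.2, 0.8, 0.5]        # outputs from training data
model_a_preds  = [0.25, 0.70, 0.55]
model_b_preds  = [0.60, 0.40, 0.30]

# The relation with the lower expected loss scores better under this criterion.
print(mse(model_a_preds, labeled_risk), mse(model_b_preds, labeled_risk))
```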

“With continued reference to FIG. 3, system 307 and/or processing unit 315 can determine, based on inputs, a degree to which a driver is at risk and/or inattentive. Inattentiveness could be expressed as a numerical quantity, such as a score, and compared to a threshold value; an alert may be generated by processing unit 315 and/or system 307 when the score exceeds (or, alternatively or additionally, falls below) the threshold. This disclosure provides several examples of how alerts can be generated and of the forms alert output may take. System 307 and/or processing unit 315 may send an alert to an inattentive driver using sounds or voice prompts, selected and paced according to the level of urgency. As a non-limiting example, system 307 and/or processing unit 315 may determine that the driver is falling asleep; system 307 and/or processing unit 315 can then warn the driver using verbal interaction 327 and/or provide mind-energizing short dialogs 322 to stimulate attention. To determine the length and depth of the conversation with the driver, system 307 and/or processing unit 315 may track the responsiveness of the user 325; this information can be used to add to and/or update a machine-learning model and/or process to determine a new level of risk, attentiveness, or any other output. To communicate verbally with the driver, a microphone (optionally an array of microphones) 329 and a speaker (optionally a wireless speakerphone) 328 may be used. The driver may wear biosensors 314 that monitor heart rate and galvanic skin response; data may be transmitted wirelessly to a stress/fatigue tracking device 316 or algorithm within the system to provide additional driver information 317, which can then be transferred to processing unit 315. An embodiment may include biosensors that monitor heart rate, blood pressure, and galvanic skin response, as well as sensors monitoring parameters related to breathing; this can be used to increase accuracy in evaluating body fitness conditions such as fatigue, stress, drowsiness, and distraction.
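
A minimal sketch of score-versus-threshold alerting with urgency-paced outputs might look like the following; the thresholds and alert forms are invented assumptions, not values from the disclosure.

```python
# Sketch of score-vs-threshold alerting with urgency-paced prompts
# (thresholds, messages, and pacing are illustrative assumptions).
import time

THRESHOLDS = [(0.8, "spoken warning plus dialog"),   # most urgent first
              (0.6, "spoken warning"),
              (0.4, "soft chime")]

def alert_for(inattentiveness: float) -> str | None:
    """Return the alert form whose threshold the score exceeds, if any."""
    for threshold, action in THRESHOLDS:
        if inattentiveness >= threshold:
            return action
    return None

def monitor(score_stream, poll_s: float = 0.0):
    for score in score_stream:
        action = alert_for(score)
        if action:
            print(f"score={score:.2f} -> {action}")
        if poll_s:
            time.sleep(poll_s)

monitor([0.2, 0.45, 0.65, 0.85])
```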

Referring to FIG. 4, an illustration of an exemplary portion of a vehicle 402, including a camera unit 403, is shown. A driver-facing camera 4031 may be included in unit 403; the driver-facing camera 4031 can be mounted to any vehicle component and/or structural element, for example near a rearview mirror. Unit 403 can include a processor 4033, which may include any device suitable for use as processing unit 315 and/or any device that can be in communication with it. A forward-facing camera 4035 can be included in unit 403 and may be placed together with driver-facing camera 4031 or separately. In an embodiment, either driver-facing camera 4031 or forward-facing camera 4035 may be connected directly to processor 4033, or both may be connected to and/or in communication with the same processor 4033. The forward-facing camera 4035 can be attached or mounted to any vehicle structure; for example, it could be attached to the windshield next to the rearview mirror's mount. Wireless connectivity can allow data transfer between unit 403, cameras 4031, 4035, and/or processor 4033, as well as between unit 403 and a processing unit 315 such as a smartphone. To provide the best view of the driver's face while minimizing interference with the view of the road, unit 403 can be attached to the rearview mirror (attached to the windshield and/or the body of the rearview mirror). In summary, unit 403 can contain a road-facing camera 4035, a driver-facing camera 4031, and a processor 4033 to process the video streams and communicate 405, wirelessly or via USB, with a mobile app on a smartphone 409 or another processing device.

Referring now to FIG. 5, a flow-process diagram shows how an attention monitoring system 502, which could be integrated in and/or in communication with system 307 as discussed above, may perform analysis using data extracted from the driver-facing camera. These data could include, without limitation, facial contours; for instance, processor 4033, system 502, and/or processing unit 315 may identify eyes, nose, and/or mouth in order to assess yaw, pitch, and roll of the head, eye gaze direction, and/or eyelid closing patterns. A neural network may be used, as a non-limiting example, to extract such parameters and determine distraction or drowsiness. Attention monitoring system 502 could detect the face 503 and/or hands 504 of a driver. System 502 may identify facial landmarks and special regions 505, including without limitation eyes, nose, and mouth; this information can be used to estimate head pose and eye gaze direction 506 and to provide information about the hands gripping the steering wheel. System 502 could interpret situations where the driver's eyes and head are directed away from the road as a sign of distraction 507. System 502 can monitor a driver's attention level 513 against a personalized behavior model 515; personalized behavior model 515 may be generated using machine-learning and/or neural-network processes, for example using data from system 502 and/or system 307 as training data. Alternatively or additionally, system 502 can compare attention levels to permissible thresholds 519, which may correspond to duration, frequency, or other patterns. Warning alerts may be sent to the driver if the safety margin calculated from such models and/or threshold comparisons is not adequate. If a driver is not distracted but does show signs of drowsiness 509, 511, system 502 may evaluate driver attention 513 against user behavior models 515 and safety margins using the same flow as for distracted-driving monitoring. If the driver is neither distracted 507 nor drowsy 511, the system can continue to monitor the driver's face and hands 503, 504, performing the above steps iteratively.
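
A highly simplified version of this monitoring loop is sketched below; the landmark and pose extraction are abstracted into a simple observation record, and the thresholds are assumptions for illustration only.

```python
# Simplified sketch of the FIG. 5 monitoring loop. The landmark/pose extraction
# is abstracted into an Observation; thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Observation:
    yaw_deg: float          # head rotation away from the road (from landmarks/pose)
    eyes_closed_s: float    # continuous eyelid-closure duration

def is_distracted(obs: Observation, yaw_limit: float = 25.0) -> bool:
    return abs(obs.yaw_deg) > yaw_limit          # eyes/head directed away from road

def is_drowsy(obs: Observation, closure_limit: float = 1.5) -> bool:
    return obs.eyes_closed_s > closure_limit     # prolonged eyelid closure

def monitor(observations):
    """Yield an action per frame: warn on distraction/drowsiness, else keep watching."""
    for obs in observations:
        if is_distracted(obs):
            yield "distraction warning"          # would be checked against personal model
        elif is_drowsy(obs):
            yield "drowsiness warning"
        else:
            yield "keep monitoring"

print(list(monitor([Observation(5, 0.2), Observation(40, 0.1), Observation(3, 2.0)])))
```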

“Referring now to FIG. 6, an illustration of the yaw, pitch, and roll parameters used to measure spatial rotation of a driver's face is shown. Image 602 illustrates the three parameters used to classify the orientation of a driver's face. Yaw is defined as horizontal rotation (left to right) of the driver's face, around the axis about which the head turns to indicate 'no.' Pitch is defined as vertical rotation (up and down), around the axis about which the driver's neck or head turns to nod 'yes.' Roll is defined as tilting the head from side to side, toward one shoulder or the other. In one embodiment, pitch and yaw may be used principally to detect distraction.
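
Mapping yaw and pitch to gaze regions such as those listed earlier (road ahead, mirrors, instrument cluster, phone in hand) can be done with simple angular bins; the angle ranges below are invented for illustration and are not from the disclosure.

```python
# Illustrative yaw/pitch binning into gaze regions (angle ranges are assumptions).
def gaze_region(yaw_deg: float, pitch_deg: float) -> str:
    if pitch_deg < -20:
        return "phone in hand / instrument cluster"
    if yaw_deg < -30:
        return "left mirror"
    if yaw_deg > 30:
        return "right mirror or passenger seat"
    return "road ahead"

for yaw, pitch in [(0, 0), (-40, -5), (10, -30)]:
    print((yaw, pitch), "->", gaze_region(yaw, pitch))
```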

“Referring now to FIG. 7, an exemplary embodiment of a system 702 for analyzing attention margin to prevent inattentive and unsafe driving is illustrated. System 702 could include, or be included in, any system described in this disclosure. System 702 could include a camera, which may include any camera, set of cameras, or combination thereof as described in this disclosure; for example, system 702 may include a USB-connected smart camera 705 that contains visible-light red, green, and blue (RGB) sensors and/or near-infrared (NIR) sensors. System 702 can extract facial or ocular features and/or orientation using this or any other sensor described in this disclosure. System 702 could include one or several audio input devices, such as microphones, and one or several audio output devices, such as speakers; these may include any audio input or output devices described in this disclosure. Audio input and audio output devices can be combined in, or disposed separately from, other components; for example, at least some audio input devices and audio output devices could be parts of one electronic device that is part of system 702, such as a speakerphone 703, which could be any mobile device, telephonic device, or other device that can act as a speakerphone. Speakerphone 703 may be used to position a microphone and speaker at a location in the vehicle convenient for communicating with the driver, such as on a visor near the driver. System 702 may also include a computing device 707, which can be any computing device described herein, including without limitation a processing unit 315, as well as any device in communication with the audio input and output devices. Computing device 707 may include a laptop computer or any other device capable of running any analysis and/or computation described in this disclosure. Computing device 707, for example, may perform context analysis, combine such context analysis results with features extracted from the smart camera, and provide feedback and/or additional outputs to the driver, including without limitation audio feedback. System 702, speakerphone 703, and/or computing device 707 may include additional electronic devices or components providing further processes and/or capabilities; these may include, without limitation, telemetry data and map/routing information, audio/video recording capabilities, and speech recognition and synthesis for dialog interactions with the driver. After reading the entirety of this disclosure, persons skilled in the art will realize that any component or capability included in smartphone 711 can be placed in another device of system 702, such as speakerphone 703, computing device 707, camera 705, or any other special-purpose electronic device with these components and/or capabilities. Smartphone 711, and/or any other device that includes one or more of the capabilities and/or components mentioned above, can collect additional sensor information, including motion-sensing data such as 3D accelerometer readings, GPS location, and/or timestamps. Any sensor information or analytical results can be transmitted, in raw or processed form, to a cloud 709.
Cloud 709 is a remote storage and/or computing environment implemented on one or more remote computing devices; it may be operated by a third party, offered as a service, or provided in any other form or protocol, and the remote devices may be geographically dispersed and/or localized. System 702, and/or any component of system 702, including any computing device 707, smartphone 711, speakerphone 703, and/or camera 705, can be configured and programmed to perform any method, method step, or sequence of method steps described in this disclosure, in any order and with any degree of repetition. System 702 and/or its components can be set up to repeat a single step or sequence of steps until a desired outcome is achieved. System 702, or any component of system 702, can also perform any step or sequence described in this disclosure in parallel, including without limitation using multiple processor cores or parallel threads. Tasks may be divided between parallel threads or processes according to any protocol that allows tasks to be split between iterations. After reading the entirety of this disclosure, persons skilled in the art will know the various ways that steps, sequences, processing tasks, and/or data can be divided, shared, or otherwise dealt with using iteration and/or parallel processing.

Referring to FIG. 8, an exemplary embodiment of a system 803 that analyzes attention margin to prevent unsafe and inattentive driving is illustrated. System 803 can be configured to run all processing on a smartphone 811. Smartphone 811 can be configured in any way suitable for processing unit 315, as described above, and may be used to perform any method, method step, or sequence of method steps described in this disclosure, in any order and with any degree of repetition. Smartphone 811 can be programmed to repeat a single step or sequence until a desired outcome is reached; iterative and/or recursive repetitions may use outputs of previous repetitions as inputs to subsequent repetitions. Smartphone 811 can perform any step or sequence described in this disclosure in parallel, including simultaneously or substantially simultaneously performing a step two or more times using two or more processor cores, parallel threads, or other processing resources. Task division between parallel threads or processes may be done according to any protocol that allows for the division of tasks among iterations. After reading the entirety of this disclosure, persons skilled in the art will know the various ways steps, sequences, tasks, and data can be divided, shared, or otherwise dealt with using iteration and/or parallel processing.

“Still referring to FIG. 8, smartphone 811 can be connected, for example and without limitation via USB OTG, to an input device; as an example, smartphone 811 could be connected to a visible+NIR camera 807. Smartphone 811 can also connect to one or several components that provide vehicular analytics and/or data, according to any description of collecting vehicular analytics and/or data in this disclosure. For instance, smartphone 811 could connect to an optional OBD II dongle and/or cellular-connected WiFi hotspot 809; such devices may provide additional information from the vehicle bus (OBD II/CAN) and/or an alternative way to transfer data to the cloud (for instance, as shown in FIG. 7). System 803 can include audio input and/or output devices as described above, including without limitation an optional Bluetooth speakerphone 805, which may improve the quality and loudness of system-generated alerts and provide a better-positioned microphone to improve speech recognition accuracy.

“In an embodiment, and with continued reference to FIG. 8, system 803 may employ a smart camera 807 with RGB and NIR sensors and an infrared LED scanner, connected to smartphone 811, to extract facial and eye features. Audio input/output devices, such as a Bluetooth-connected speakerphone 805, can be used to position a microphone and/or speaker on the driver's visor. Smartphone computation can perform all processes described above; for instance, smartphone 811 might run context analysis and combine the results with smart camera 807 features to determine the driver's attention level and provide audio feedback or other outputs based upon such determinations. System 803 and/or smartphone 811 can also be used to collect and/or transmit telemetry data, map/routing information, cloud services such as weather and traffic, audio/video recordings, speech recognition and synthesis, and/or dialog interaction with drivers.

“FIG. 9 is a schematic illustration of how the capabilities of embodiments described herein exceed those of existing usage-based insurance solutions, providing richer and more precise services.”

Referring to FIG. 10, an exemplary architecture of a system such as system 307, system 702, and/or system 803, as previously described, is illustrated. The system may include a driver attention modeling unit 1015, which may include any hardware or software module embedded in, connected to, and/or operating on any computing device as described herein. Driver attention modeling unit 1015 can be used to analyze features 1008 that describe driver facial data, including closed eyes, yawning, and eyes directed away from the road; these features 1008 are extracted from visual data such as video feeds from a driver-facing camera 1001, which may include any camera oriented to record or capture data about the driver as described in this disclosure. Without limitation, driver attention modeling unit 1015 can also analyze features 1009, such as verbal responses, removal of hands from the steering wheel, or the like, derived from speech and gestures 1002 extracted from the driver's speech and/or audio input devices. Without limitation, driver attention modeling unit 1015 can further analyze features 1010 from biometric sensors 1003, which may include without limitation wearable biometric sensors and/or sensors embedded in the vehicle; these features 1010 could include features indicative of, or measuring, fatigue, stress, reaction to startling or frightening events, and the like.

“Still referring to FIG. 10, the system may include a driving risk model 1016; driving risk model 1016 could include any hardware and/or software module operating on, incorporated into, or connected to any computing device as described herein, including without limitation processing unit 315. The system may also include an accident detection/prevention unit 1017, which can likewise include any hardware and/or software module operating on, incorporated into, or connected to any computing device as described herein. Driving risk model 1016 and/or accident detection/prevention unit 1017 may analyze features 1011 from a road-facing camera 1004; these features can include features that describe and/or depict vehicles ahead, pedestrians crossing the road, cyclists, animals, road signs, posts, and trees. Driving risk model 1016 can use any of the algorithms described in this disclosure for detecting presence, estimated speed, and direction (toward the vehicle, ahead, or in an adjacent lane, potentially on a collision course) to issue early warnings before complete classification of the objects ahead. The embodiments disclosed herein can thereby maximize the time the driver has between an early warning and a possible collision, allowing appropriate action (brake, swerve, etc.) to prevent a crash. Driving risk model 1016 and/or accident detection/prevention unit 1017 may analyze features 1012 from a rear-facing camera 1005; these features can include, without limitation, any features described above that represent conditions outside the vehicle, such as tailgating vehicles following too closely. Driving risk model 1016 and/or accident detection/prevention unit 1017 may analyze features 1013 retrieved from telematics data 1006, including speed, acceleration, cornering, engine load, fuel consumption, and braking. Driving risk model 1016 and/or accident detection/prevention unit 1017 can analyze features 1014 from ambient data 1007, such as traffic and weather information.

“With continued reference to FIG. 10, a decision engine 1020 can evaluate attention 1015 against risk 1016 and/or historical short- and/or long-term data 1018 about the driver's performance in past similar situations, in order to determine what type of feedback to give the driver. Evaluation may include, without limitation, any machine-learning model or process as described herein, such as using training data correlating attention 1015 and risk 1016 with alert levels. If vehicle connectivity allows a connection to the cloud 1021, the system may store and/or update historical geographic data 1019. To avoid distracting the driver, when attention levels are normal 1022, feedback may be limited to unobtrusive information 1025, such as a first light color 1030 indicating normal status. A driver may receive more noticeable feedback if attention is marginal 1023; for example, acoustic feedback 1031 may be added to the lights to call the driver's attention 1026, and/or a second, different light color 1030 may be used. A further intrusive or escalated feedback signal can be generated if attention is insufficient 1029; for example, a driver alert 1027 may generate a pattern of visual and audible alarms 1032, which may escalate if the condition persists. The pattern could include a third color, different from the first and second colors, and an increase in the volume of audio output devices and in light intensity may indicate an escalating problem. A dialog interaction 1028 can be used, depending on the severity and urgency of the problem, to quickly communicate the identified countermeasure to the driver. A patterned acoustic alert may include a sequence or pattern of sounds and voices, such as audio, voice, song, or chirps; a patterned spoken warning could likewise include sequences and/or patterns of prompts. As a non-limiting example, if attention is sufficient or normal, output could include a steady green feedback light; if attention is marginal, output might include a slowly blinking yellow feedback light and acoustic alerts; where attention is inadequate, output could include a fast-blinking red feedback light and spoken instructions for correction. The system may periodically update driving statistics and calculate driver risk profiles, and may periodically upload important information to the cloud for statistical analysis by trip, by area, or by population. It should be noted that terms such as 'low,' 'marginal,' or 'insufficient' are not intended to indicate a hard three-level classification of events; there may be multiple threshold levels corresponding to multiple alert levels.
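
A compact sketch of such a multi-level feedback policy is shown below; the thresholds, colors, and escalation behavior are assumptions made for illustration, not the decision logic of the disclosure.

```python
# Sketch of the multi-level feedback policy described above (colors,
# thresholds, and escalation behavior are illustrative assumptions).
def feedback(attention: float, risk: float, seconds_in_state: float = 0.0):
    """Map an attention-vs-risk margin to a feedback action."""
    margin = attention - risk
    if margin >= 0.2:                        # normal: unobtrusive status only
        return {"light": "steady green"}
    if margin >= 0.0:                        # marginal: add acoustic feedback
        return {"light": "slow blinking yellow", "sound": "chime"}
    action = {"light": "fast blinking red", "sound": "spoken correction"}
    if seconds_in_state > 5:                 # escalate if the condition persists
        action["dialog"] = "start corrective dialog"
    return action

print(feedback(attention=0.8, risk=0.4))
print(feedback(attention=0.5, risk=0.6, seconds_in_state=8))
```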

“Still referring to FIG. 10, in the event of a crash, video and/or sound clips may be combined with time and/or location information to enable complete reconstruction of the scene before and after the crash. All recorded data, including without limitation audio and/or video clips, locations, times, and/or sensor readings, can be uploaded to cloud services or devices and/or stored in local and/or remote memory facilities. Data triggered by inattention events, such as crash data, can be recorded in a driving data record and analyzed over time to generate statistics in the form of a driver profile 1033 that can be used by fleet managers or insurance carriers. On request from the driver, or at times when it is safe, analytics may be used to present trends and driving performance reports to the driver; this can help motivate the driver to keep performing well. The data recorded above, including data triggered by inattention events, data collected before, during, and/or after crashes, and any other data used in driving data record 1033, may be used to create entries of training data. This may allow machine-learning, neural-network, and/or AI processes to create, modify, and/or optimize models and/or outputs used to determine any analytical results described in this disclosure; such models and/or AI processes may also be used to generate collision predictions based upon one or several sensor inputs and/or analytic outputs. For further processing, driving data records and reports can be uploaded to the cloud 1034.
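
One common way to implement event-triggered clip capture is a rolling buffer that is flushed into the driving data record when an inattention or crash event fires; the sketch below is a generic illustration of that idea (buffer length, event names, and record fields are assumptions), not the EDR format of this disclosure.

```python
# Generic sketch of event-triggered recording into a driving data record
# (buffer length, event names, and record fields are assumptions).
from collections import deque
from datetime import datetime, timezone

class DrivingDataRecord:
    def __init__(self, pre_event_frames: int = 150):   # e.g., ~5 s at 30 fps
        self.buffer = deque(maxlen=pre_event_frames)   # rolling pre-event buffer
        self.events = []

    def add_frame(self, frame):
        self.buffer.append(frame)

    def trigger(self, event_type: str, location=None):
        """Freeze the rolling buffer into a record entry when an event fires."""
        self.events.append({
            "type": event_type,
            "time": datetime.now(timezone.utc).isoformat(),
            "location": location,
            "clip": list(self.buffer),                  # pre-event footage
        })

edr = DrivingDataRecord(pre_event_frames=3)
for f in ["f1", "f2", "f3", "f4"]:
    edr.add_frame(f)
edr.trigger("inattention", location=(40.71, -74.00))
print(edr.events[0]["type"], edr.events[0]["clip"])     # -> inattention ['f2', 'f3', 'f4']
```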

“In operation, and still referring to FIG. 10, the system may use a visible or NIR camera pointed at the driver's face to analyze head pose, track eye movements, and/or record the driver and passenger seats in case of an accident. Audio input devices, cameras, and/or other sensors can be used to implement speech and gesture interfaces allowing the driver to request or provide information via microphone, face, or hand gestures. The driver's emotional state, mood, health, and/or attentiveness can be monitored and/or analyzed using biometric and/or vital-signs data; this data may include, without limitation, galvanic skin response (GSR) and/or heart rate variability (HRV) data, and can be provided via a wearable bracelet, sensors on the steering wheel, or wireless evaluation of heartbeat and breathing patterns. A forward-facing camera may be used for lane-line determination, vehicle distance estimation, scene analysis, and recording. The system may also use a rear camera for viewing, analyzing, and/or recording at the rear. The system may track and/or associate data obtained using accelerometers and/or GPS facilities. Other data could include vehicle data, such as VIN, odometer readings, rotations per minute (RPM), and/or engine load, obtained for example and without limitation via an OBD II connector.

“With continued reference to FIG. 10, the system may collect, and/or use in its analysis, data regarding traffic and weather conditions, day/night lighting, road conditions, and in-cabin noises or voices for any decisions, training data, and/or other processes described in this document. The system may use visual cues to detect hand gestures and/or to determine distraction, drunkenness, or drowsiness. The system may extract spoken words using speech recognition, natural language processing (NLP), or the like; speech-related feature extraction may be used to detect altered voice. Biometric feature extraction can alternatively or additionally be used to detect physiological and/or emotional states, such as fatigue, stress, or responses to fear or surprise. Any sensor outputs or analyses may be used to extract features of objects, such as vehicles, signs, poles, and pedestrians. Any sensor output or analytical output can be used to extract vehicle position and speed. Feature extraction can be used to determine the driving smoothness or aggressiveness of another vehicle and/or of the vehicle containing the system. The system may also use sensors and/or analytic process outputs to determine the ambient level of 'harshness,' i.e., the impact of the environment on driving stress.

“Still referring to FIG. 10, the system may employ machine-learning processes to create and/or use models and/or training data to generate outputs. Machine learning can, as an example, be used to continuously evaluate the driver's attention level, continuously evaluate driving risk, and detect and/or forecast vehicular collisions or crash conditions. The system may extract features from the past driving habits and/or driving skills of a driver. The system may extract features from dynamic and/or past reports of traffic jams, dangerous intersections, ice patches, accidents, or other relevant information, obtained for example via data connections to cloud servers or the like. The system may implement an intelligent decision engine that compares the driver's attention level to the level necessary to manage the current risk condition; this could be done, without limitation, using machine-learning, deep-learning, and/or neural-network models. Alternatively or additionally, decisions and/or determinations can be made based on the driver's past performance, adjusted for changes during the day, such as measures of anger, nervousness, and fatigue. The system may also obtain real-time ambient data updates via cloud services, for instance through a phone connection. The system may determine that the driver's attention level is sufficient or better for the driving task; in this case, the system might simply provide a status update to the driver, as described above. The system may determine that the driver's attention level is marginal and provide proactive information to the driver. The system may determine that the driver's attention level is insufficient for the driving task; it may then proactively inform the driver that attention is inadequate and take appropriate action, explaining the reasons and offering suggestions to correct the behavior. These steps can be executed using any suitable component or process described in this disclosure.

“Referring now to FIG. 11, an additional illustration of an exemplary embodiment of a system as discussed herein is provided; this system may include, or be incorporated into, any system described in this disclosure. An image processor 1150 may be part of the system; the image processor can include any computing device, such as any device suitable for use as processing unit 315 described above, and/or any software or hardware module connected to it. Image processor 1150 can analyze video content from a driver-facing camera 1140, which could include any device suitable for use as a driver-facing camera as described above. Alternatively or in addition, video content processed by image processor 1150 may include data from an infrared scanner 1141 or similar; this optional aid may allow for greater accuracy in face-rotation detection in dim or unlit conditions. NIR and/or RGB cameras can be placed facing the driver and/or the back of the passenger seat. A solid-state LED scanner may be used for depth sensing and/or eye-gaze tracking; it can also scan and/or record visual information about the driver, passengers, and other persons. Speech engines may be included, including systems that can recognize speech and/or synthesize speech; the system may also include a dialog management module 1152, which analyzes voice and sound and generates audio and verbal prompts via speakers 1145. Microphones 1144 and/or speakers 1145 can include any devices suitable for audio input or output; an example audio input device is a beamforming microphone array providing driver vocal input, speech-based identity verification, and ambient sound analysis, while one or more speakers may provide acoustic/voice feedback to the driver. Any light output device may be used to generate light, including a 470 nm blue LED array for retina stimulation and/or alertness restoration. The system may also include a multi-purpose button connected to the speech engine and/or dialog management module 1152; the multi-purpose button may allow the system to change interaction mode, request assistance, or enter emergency protocols, depending on the context and/or the number of times it is pressed.

“Still referring to FIG. 11, the system may contain a main processing unit 1151, which could include any computing device described in this disclosure, including any device usable as processing unit 315 as described above. Main processing unit 1151 can process outputs produced by image processor 1150, including without limitation detection and/or description of head rotation, eyelid closure, mouth opening, or the like. Main processing unit 1151 can process video from a roadway-facing camera 1149, which may include any device usable as a camera according to this disclosure, and may classify and identify objects ahead of the vehicle. Main processing unit 1151 can collect GPS information 1157 in order to geo-stamp all events and/or calculate speed. Main processing unit 1151 can collect and/or process 3D accelerometer and/or 3D gyroscope information 1158 to determine vehicle movement and the forces involved in motion, collisions, or the like. Main processing unit 1151 can interact with speech engines 1152 to control communication with the driver. Main processing unit 1151 may control the activation of a stimulant light, which could include, but is not limited to, a 470 nm blue LED 1146 for attention stimulation; a multicolor LED ring lamp may also be used to provide visual feedback to the driver. Main processing unit 1151 can collect data from biosensors 1147 in order to determine fatigue and stress levels. Main processing unit 1151 can communicate with an OBD II connected device 1148 to collect additional information about the vehicle. When connectivity allows, main processing unit 1151 can process electronic data records and synchronize them to a cloud 1156. Main processing unit 1151 can connect to a smartphone or wireless speakerphone to receive route, traffic, and weather information and to interact with the driver.

“With continued reference to FIG. 11, the system may contain a system memory 1152, which may be any type of memory storage device or component described in this disclosure; main processing unit 1151 may use system memory 1152 to store processed information. In addition to the accelerometer/gyroscope/compass information 1158 available in the main unit as described above, the system may process similar information using a mobile app installed on a phone 1155 or other mobile device. Phone 1155 or another mobile device may return information about the relative motion of the phone within the vehicle; this may be used, for example, to determine possible distraction conditions if the driver holds the phone. Any wired or wireless communication protocol can be used to communicate with any component of the system, such as Bluetooth LE, Bluetooth 3.1, or WiFi.

Referring now to FIG. 12, an exemplary embodiment of a system that may include or be included in any system discussed above is illustrated. A driver-facing camera 1201 may be used to create a video stream for a feature analysis unit 1217, which may include any computing device suitable for use as processing unit 315 and/or any hardware or software module included in, operating on, or in communication with such a computing device. Deep learning, neural networks, and/or machine learning may be used to extract information about the head, eyes, and/or eyelids. The extracted features can then be analyzed by a drowsiness analysis unit 1219 and/or a distraction analysis unit 1221, which may determine their respective severity levels; each of the drowsiness analysis unit 1219 and distraction analysis unit 1221 can be implemented using any computing device described herein. The system may include a driving risk estimation engine 1223, which can be implemented using any computing device described herein, including any device suitable for use as processing unit 315 and/or any hardware or software module incorporated into, operating on, or in communication with such a device. Driving risk estimation engine 1223 can use information about vehicle dynamics 1203, traffic/weather/road conditions 1205, GPS/route information 1207, and/or a road-facing camera 1209 to help determine risk and escalate urgency 1225 if the driver does not take action. The driver’s skills and experience may be used to calibrate risk estimation using machine learning and precomputed risk models 1227.

“With continued reference to FIG. 12, the system may include a main decision engine 1233, which may be any computing device as described in this disclosure, including any device suitable for use as processing unit 315 and/or any hardware or software module incorporated into, operating on, or in communication with such a computing device. Main decision engine 1233 can gather information about distraction levels 1221, drowsiness levels 1219, and/or risk levels 1225 as described above. This may include taking into consideration user preferences 1241, leveraging mandated behavior guidelines 1237, 1239, and relying upon decision models 1243 and machine learning, which may all be implemented according to the procedures described in this disclosure, to determine what messages are to be sent to the user. The system may include a dialog interaction engine 1235, which may be any computing device described herein, including without limitation any device suitable for use as processing unit 315 and/or any hardware or software module incorporated into, operating on, or in communication with such a computing device. Decision engine 1233 may trigger dialog interaction engine 1235 to send prompts to the driver using a sound and speech synthesizer 1231 driving speaker array 1215. Microphones 1213 may record the driver’s reactions, comments, and requests, which may be converted into actionable text via speech recognition and NLP 1229; this may help to evaluate driver responsiveness. Dialog interaction engine 1235 can use dialog history 1245 to assess and restore the driver’s attention, and may also use trip data 1249, short-term driver information 1247, and/or driver skills 1250 to determine the type, pace, and length of the dialogue. Long-term driving data 1250 and statistics about dialog interactions 1235 may be used to assess driver performance and effectiveness, including without limitation the ability to take corrective actions in an appropriate manner and responsiveness to system-generated guidance 1252, and to compile driving risk profiles 1255 and driver performance trends 1255. The dialog interaction engine may use the blue LED light 1211 to produce brief blinking patterns to assess driver alertness, for instance by asking the driver to imitate the light patterns with corresponding eyelid blinks.

Distraction detection may be performed by embedded systems such as those described above. Detected distractions may include, without limitation, glancing at a center-stack screen display, touching a center-stack screen display, reading text on a cradle-mounted or hand-held phone, touching the screen of a hand-held mobile phone, typing text on a hand-held mobile phone, eating, drinking, smoking, interacting with other passengers, singing, combing hair or shaving, and/or applying make-up.

“Embodiments of the systems described herein may be used for drowsiness detection, including detection of drowsiness stages 1, 2, and 3. Embodiments of the systems described herein can perform driver identification, including without limitation through visual face analysis, voice ID verification, and/or driving-style behavior. Embodiments of the systems described herein can detect passenger presence and behavior, including without limitation through analysis of passengers’ interactions. Embodiments of systems as described herein may perform theft detection and recording, engine-off vibration analysis, low-frequency video sampling, driver detection, forward scene analysis, forward distance/collision detection, lane departure detection, vehicle recognition and distance/delta-speed measurement, driver risk profiling, detection of tailgating, late braking, hard braking, and hard acceleration, detection of smooth cornering, smooth acceleration, gradual braking, and lane-keeping accuracy, detection of swerving, measurement of the eyes-on-road vs. mirrors vs. elsewhere ratio, and context risk evaluation based on acceleration/deceleration/cornering, speed relative to the posted limit, travel-time distance from vehicle(s) in front, traffic and accidents ahead, time/distance to an exit ramp, weather and temperature conditions, and/or road pavement conditions.

“Embodiments described herein may use machine learning to create models that can interpret a multitude of data in a vehicle and make real-time decisions. Initial thresholds may be set for acceleration (longitudinal and lateral, i.e., acceleration/braking and cornering), speed, distance (in seconds) from vehicles ahead, distance from lane markings, and times for which eyes are taken away from the road to check the rearview mirror, instrument cluster, center dashboard, and the like. The criteria for speed, skill, and the like can be determined using simple rules and common-sense values, and/or values derived from earlier iterations with other vehicles and/or drivers, as sketched below. To fine-tune initial models, data recording may be done in driving simulators; convolutional neural networks may then be used to extract visual characteristics of drivers’ behavior.
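
For illustration only, the initial rule-based stage described above can be sketched as a small set of threshold checks. The parameter names and numeric values below are assumptions chosen for demonstration, not values taken from this disclosure.

```python
# Illustrative sketch of an initial rule-based risk check; all threshold
# values and field names here are assumptions for demonstration only.
INITIAL_THRESHOLDS = {
    "longitudinal_accel_g": 0.35,   # harsh acceleration / braking
    "lateral_accel_g": 0.30,        # harsh cornering
    "headway_s": 1.5,               # minimum time gap to the vehicle ahead
    "eyes_off_road_s": 2.0,         # maximum continuous glance away from the road
}

def flag_risky_events(sample: dict) -> list[str]:
    """Return the names of thresholds exceeded by one telemetry sample."""
    flags = []
    for name, limit in INITIAL_THRESHOLDS.items():
        value = sample.get(name)
        if value is None:
            continue
        # Headway is risky when *below* the limit; the others when above it.
        exceeded = value < limit if name == "headway_s" else value > limit
        if exceeded:
            flags.append(name)
    return flags
```

Such hand-set values would then be refined, per the text above, with simulator data and models learned from recorded driving.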

“As the system is tested and data is collected on the road, statistical models describing driver behavior may be improved through machine learning. These models may correlate data from multiple sensors (motion, visual, audio, and biosensors). Experts may initially annotate the data, which could include information from observed drivers. Over time, these processes can become less supervised, and a subset of driving data can eventually be used to feed self-learning algorithms that ensure continuous improvement of the system.

“Description of EDR.”

A vehicle driver may own data, including an electronic driving record (EDR), accumulated while driving. The driver has the power to suspend or revoke certain access rights at any moment, even while driving, and can restore access rights dynamically during driving or at the end of a trip. Before information is synchronized with a network (if the system is not connected live), the driver can also suspend or revoke selected access rights. The core concept of driver data management involves the generation, maintenance, and distribution of the EDR according to data-owner preferences and insurance-company incentives. An EDR bank can provide driving data to the owners of the data, and car insurance companies may be granted access rights to the data. EDR access conditions include what type of data is provided, when the data is provided, to whom it is provided, and under what conditions. Two types of data can be collected while driving: data from the car and data external to it. Driving data can include driver ID, location, and speed, as well as acceleration, braking, cornering, and crashes. EDRs can store any or all driving data, including live and/or historical data; these data can later be processed, analyzed, and distributed. These embodiments may allow drivers to control who may access EDR information, what may be accessed, and when. EDR data can be stored in secure cloud storage, for instance compiled from data uploaded to cloud services. Data access may be managed by drivers and other authorized users, including without limitation insurance carriers and healthcare providers. EDR data may be owned by users, such as drivers or fleet owners, who may also have the power to determine who gets access to it. Changes in access rights may trigger alterations in insurance benefits. A driver can authorize data access on a continuous basis until the authorization is revoked. Sharing of individual parameters can be used to negotiate service terms (e.g., discounts on car insurance premiums), which could lead to individual financial penalties or discounts. Individual car insurance companies may require that shared parameters be grouped to meet their requirements. In order to offer competitive insurance services, car insurers might be invited to compete among themselves based on the EDR parameters a driver is willing to share; this may limit the options available to drivers depending on what they are willing to share.

“EDR functionality may be implemented in the following exemplary embodiments. EDR may be managed using a mobile app on a smartphone owned by the driver; it may also be stored on a dedicated device mounted on the windshield or dashboard, or on a standalone unit integrated with the dashboard (e.g., Android Auto, CarPlay, or an infotainment system). A driver can be alerted when certain thresholds are exceeded in order to adjust driving behavior and style. Drivers can determine EDR read access rights for other mobile apps and other parties, such as the manufacturer of the vehicle, Tier-1 suppliers, or UBI car insurers. The configuration process can be done on a computer/laptop, via the Web, or on the mobile device that contains the EDR database; this allows selection of access rights for specific parties to certain sets of parameters. A smartphone or other computing device may securely and/or encryptedly synchronize EDR content (live and historical) to a cloud service as described above. This may give the insurance company no information regarding risk, but may also expose the driver to unapproved monitoring. The collected information could be very limited if the driver does not speed or brake hard frequently; if there are no driving offences, it may even be zero.

“An overview and summary of the embodiments disclosed herein may include methods to automatically monitor driver distraction and generate context-aware safety reminders. The embodiments disclosed herein may use visual stimulation (via a HUD) to assess the driver’s attention, responsiveness, and focus. The embodiments disclosed herein can make decisions based on a variety of inputs, including user connection state, user location, user locale, associations learned through prior observation that are not explicitly specified by the user, and external factors such as weather, destination status, and transportation factors. The embodiments disclosed herein can include visual analysis of the scene ahead to verify the driver’s attention (for example, an exit-sign question). These embodiments may be used to automatically learn the user’s behavior, and may also include the ability to poll users for feedback.

“Embodiments described herein could include a program that the user might choose to add to their application portfolio. A setup dialog may allow the user to configure the device using the embedded methods, and the embodiments disclosed herein can be used to modify the device configuration, such as adding, removing, or changing settings. These embodiments may also include methods for analyzing patterns and allowing users to review them and make modifications, methods for analyzing reminders, and methods for identifying redundant reminders that can be discarded.

“Embodiments described herein could include methods to identify reminders that are not in accordance with the situational context. These embodiments may also include methods to detect reminder gaps. These embodiments may include methods for analyzing inside and outside video to reconstruct accidents or assess damage.

“Embodiments disclosed herein may include means of using a clamp-on box containing a camera, lights, LEDs, and a microphone, plus a detached forward-facing camera. Modified rearview mirrors with translucent glass may be used in some of the embodiments disclosed herein. Some embodiments disclosed herein could include the use of correlation between multiple cameras.

“Embodiments described herein could include methods to analyze and monitor driver drowsiness. Blue light may be used to reduce melatonin levels to combat drowsiness while driving. Embodiments disclosed herein may include means of using colored lights and acoustic feedback on attention level and attention triggering events (red-yellow-green), using constant/intermittent pattern, and/or using intensity adjusted to internal illumination level.”

“Embodiments described herein could include methods to monitor cabin behavior of passenger and driver in order to flag potentially dangerous behaviors. These embodiments may be used to identify dangerous behaviors and take appropriate action, including without limitation through sending alerts. These embodiments may include methods to identify dangerous objects, such as weapons, and take actions such as sending alerts without limitation.

“Embodiments described herein could include methods for detecting potentially hazardous health conditions. The embodiments disclosed herein could include wireless bracelet recording of HRV or GSR, and/or wireless (non-touch) measurement of HRV or breathing. These embodiments may be used to collect bio/vital information for use with onboard diagnostics to identify situations that need specialist attention. Some embodiments disclosed herein could provide an automated virtual personal nurse, offering assistance to a driver with a chronic condition (recommended actions, monitoring for known risk conditions). The embodiments disclosed may include audio-visual speech recognition methods to increase robustness in noisy environments.

“Embodiments described herein could include methods to improve driver risk evaluation based upon changes in motion energy while braking (the same deceleration is much more risky at higher speeds than at low speeds). Some embodiments may also include virtual coaching methods (e.g., keep a proper following distance, avoid braking late, keep to the right lane, stay in the center of the lane, optimize turn trajectories), with coaching models developed using data from professional drivers and from large numbers of drivers living in the same area. The methods disclosed herein can be used to analyze clouds of eye-gaze tracking points in order to predict driver alertness; this allows the system to distinguish between fixations caused by high levels of interest and those caused by drowsiness or lack of attention. Embodiments disclosed herein may include methods of using Vehicle-to-Vehicle (V2V)-like information exchange or social networks such as Waze to inform nearby drivers about the fitness/distraction/drowsiness of a driver, so that they can increase their safety margins (distance between cars, greater attention to unanticipated maneuvers, etc.). The embodiments disclosed herein could include methods to extend driver attention monitoring for use in trucks, motorcycle helmets, trains (for conductors), and planes (for pilots). These embodiments may include methods to extend attention evaluation to the home, the office, and schools (education). Some embodiments disclosed herein include audio-visual recognition methods to detect suspicious activities in the cabin (e.g., a screaming voice in association with body movements across the seats).

“Embodiments described herein could include methods for usage-based insurance privacy and security, including methods of collecting driver data to automatically monitor driving context. Monitoring of context includes detection of driver behavior and attention as well as car parameters, both internal and external. The embodiments disclosed herein could include a program to monitor driver behavior, as well as hardware and software that measure driver attention. Some embodiments disclosed herein could provide driver feedback in real time and methods to automatically learn the user’s behavior. Embodiments disclosed herein may include means to automatically poll the user to evaluate responsiveness/alertness in the presence of symptoms of drowsiness. These embodiments may also include methods for managing data access policies to set driver preferences and provide feedback. The embodiments disclosed herein could provide a means for defining data access rights with varying degrees of flexibility, and may allow for dynamic data access rights. Some of the embodiments disclosed in this document may allow for the suspension/revocation of certain access rights, even while driving; access rights can be restored dynamically by the driver while driving, or at the conclusion of a trip. The disclosed embodiments may address different aspects of EDR data and its varied nature (data from within and outside the car, driver behavior). The driver may be able to identify which EDR data is available and when. Some embodiments disclosed herein could include the use of sensors to collect and analyze driver behavior data, and may also include the creation of driver behavior models and attention models. Embodiments disclosed herein can include the ability to process EDRs dynamically and grant access rights to EDR data on the fly.

“Embodiments described herein could include methods for delivering the product of the above embodiments. Some embodiments disclosed herein could include methods to collect and store driver data, and may also allow analysis of driver data in real time. The disclosed embodiments may allow insurance companies to bid for the driver’s business based on the driver’s privacy settings, enabling insurers to compete for the driver’s business. Given certain private data, drivers may be able to choose the best-suited insurer or combine insurance companies on the spot. These embodiments may allow policy premium pricing based on hourly coverage and driving habits (e.g., for car rentals). The embodiments disclosed herein can perform dynamic control logic that determines multiple access patterns for the same user depending on the context. The embodiments disclosed herein could allow an insurer to bypass information protection locks set by the driver in certain circumstances, for example when a serious accident is detected and the system verifies whether the driver is able to consent to disclosure of location, so that police and ambulance services can be dispatched to the rescue. In the event that the driver is unconscious, allowing the insurer to bypass the lock mechanism could save the driver’s life. The driver may also be able to temporarily change privacy settings quickly in emergency situations using the embodiments disclosed herein.

“Embodiments described herein may include methods for providing Qi wireless charging devices attached to a car windshield: a transmitter wire array embedded in the windshield at the top or bottom to allow multiple devices to be charged or to enable multiple positions. A receiver coil can be attached to the device (e.g., a smartphone) via a docking support with a suction cup.

“Embodiments disclosed herein may include methods to anonymize video recordings in the car while preserving extracted attention/distraction/drowsiness features. The driver’s face may be analyzed for head-pose pitch/yaw/roll, eye-gaze tracking left/right/up/down, independent eyelid closing frequency/duration/pattern, mouth shape/opening/closing, and lip shape/opening/closing. The collected features can be used to control rendering of a synthesized face, in sync with, or at a short delay behind, the original expressions and movements. The resulting hybrid video may combine real and synthetic content, with the synthesized face driven by the driver’s actual mouth and lip shape/opening/closing.

“Embodiments may be used to evaluate user attention when a user listens to or watches an advertisement message. Visual and audio analysis of a user’s reaction to a message may be used to rate pleasure/satisfaction/interest, or the lack thereof, and to confirm whether the message has been received and understood; this may be particularly useful in a contained environment like a car cabin (especially in a self-driving or autonomous vehicle), but may be extended for use at home or work, where one or more digital assistants have the ability to observe a user’s response to ad messages. It can also be used on mobile phones with a front-facing camera, subject to certain limitations.

“Embodiments described herein may include methods to evaluate user responsiveness to guidance/coaching to determine whether a communication strategy works. A short-term evaluation (a few dialog turns) may demonstrate whether the user’s attention deficit can be corrected and focus regained. Evaluation may also be used to assess user coachability and behavior improvement over the long term (days to weeks); over time, users may require fewer and less frequent prompts as they continue to use the system.

The embodiments described herein are intended for use in a car, but can also be used on mobile devices, at home, and at work.

“Embodiments described herein may include methods to automatically notify drivers of changes in driving regulations (e.g., speed limits, no turn on red, limited or prohibited use of cellphones or of certain functions) when crossing state borders. Communication can be verbal or written, and users may request clarifications. Driver monitoring systems may share changes in traffic regulations to encourage compliance with these rules.

“Embodiments discussed herein can enable safer driving by giving real-time feedback to the driver about potentially dangerous conditions, to avoid accidents due to inattention or impaired health. These embodiments may benefit from the holistic nature of real-time data analysis (the driver’s face, eyes, and health condition, plus outside contextual data). To model the dynamic context in multiple dimensions and provide accurate feedback on recommended actions in real time, it may be necessary to collect extensive data and to develop sophisticated algorithms that create personalized models to benefit the driver and keep him or her safe. The holistic data analysis may include biosensors; the multi-dimensional context of the driver’s stress and fatigue can be determined using visual inputs and telemetry data. These embodiments may be used to evaluate the driver’s attention and driving skills, and can also help to identify unusual driving behavior and adapt to it.

“Embodiments, systems, and methods described in this disclosure can save lives by monitoring and modeling drivers’ attention to avoid distraction and drowsiness. By creating driver profiles, embodiments can help insurance companies distribute insurance premiums more fairly. Embodiments can be used by rental and ride-sharing companies to monitor and model driver and passenger behavior. Fleet management companies may be able to manage their truck fleets efficiently by modeling and monitoring driver behavior and receiving real-time feedback. Embodiments can help parents of teenage drivers keep them safe and lower their car insurance by monitoring their driving habits and applying virtual coaching. Embodiments can help protect driver and passenger data and allow each user to grant permissions for its use, controlling generation, maintenance, and distribution of the EDR according to data-owner preferences and incentives. Embodiments can be used to monitor drivers’ health and detect emergencies in advance, and can facilitate the transition to self-driving cars.

“In short, embodiments have the potential to save lives. Embodiments can make driving safer and insurance rates less expensive. Embodiments can improve driving style and performance scoring, provide safety and protection for drivers, and help with brand risk management, risk mitigation, and safety. Embodiments can keep novice and teenage drivers alert and, with a virtual coach, can improve driving skills. With opt-in technology, embodiments can keep driver and passenger data safe, with maintenance and distribution of the EDR done according to the preferences and incentives set with the insurer. Embodiments can monitor drivers’ health to detect emergency situations, particularly for those with chronic conditions or the elderly. Embodiments can make it easier for drivers to hand over their vehicles in a safe and secure manner. Embodiments can provide an accurate and effective evaluation of driving risk by evaluating the driver’s performance against traffic and road conditions. Embodiments can give advance warnings to drivers, provide feedback to car insurance companies, and assess a driver’s response. Embodiments can reduce the chance of accidents and may encourage good behavior from drivers and passengers.

Referring now to FIG. 13, an exemplary embodiment of a system 1300 that uses artificial intelligence to monitor, evaluate, and correct user attention is illustrated. System 1300 can be added to or included in any system described in FIGS. 1-12. One or more elements of system 1300 can be deployed in a vehicle according to the above description; alternatively or additionally, system 1300 could be implemented in a self-contained unit that can be carried by a user while walking, operating a vehicle, or doing other activities. An embodiment of the system can augment or simulate the initial attention-direction processes used by humans, who detect apparent motion using peripheral vision and then direct focal gaze toward that motion; the brain has specific cells that focus on images captured by our eyes, even unintentionally, in order to determine which areas require more attention. System 1300 can detect apparent motion in a field of vision, such as one captured by a camera, and alert the user to the apparent movement. System 1300 can emulate peripheral-vision detection by notifying the user immediately of any apparent motion detected; this alert may be generated before further steps such as object identification, classification, or collision detection are performed.

“With continued reference to FIG. 13, system 1300 includes a forward-facing camera 1304. A forward-facing camera 1304, as used in this disclosure, is a camera oriented away from a user, such that imagery captured by forward-facing camera 1304 is indicative of conditions in front of the user; it may also be held by or mounted in front of a person who is standing, bicycling, or walking forward. Forward-facing camera 1304 may include any device suitable for use as a camera or camera unit as described herein, including a front camera 4035 and/or rear camera 4031 as described above in reference to FIG. 4, a near-IR camera as described above in reference to FIG. 5, a camera 705 described in FIG. 7, a camera 807 described in FIG. 8, a road-facing camera 1004 or rear-facing camera as described above in regard to FIG. 10, a forward-facing camera 1149 as described in FIG. 11, and/or a camera facing the road 1209 as described in FIG. 12. Forward-facing camera 1304 could include a camera integrated into a mobile phone or smartphone, as described above.

“Still referring to FIG. 13, forward-facing camera 1304 is configured to capture a video feed, which may include any sequence of video data as described in reference to FIGS. 1-12. A video feed can include a sequence of samples and/or frames of light patterns captured by forward-facing camera 1304’s light-sensing or light-detection mechanisms; these frames and/or samples may then be displayed in sequential order, creating a simulation of continuous and/or sequential motion. Forward-facing camera 1304 is designed to capture the video feed from a field of vision, which is the area within which forward-facing camera 1304 captures visual information; for instance, forward-facing camera 1304 can focus light from the field of vision onto its sensing elements using lenses and other optical elements. Forward-facing camera 1304 can capture the video feed on a digital screen 1308. A digital screen 1308, as used in this disclosure, is a data structure representing a two-dimensional spatial array of pixels, where each pixel is a unit of optical data sensed by the camera’s optical sensors. Each pixel can be linked to a set of two-dimensional coordinates (e.g., Cartesian coordinates).

Referring again to FIG. 13, system 1300 contains at least one user alert mechanism 1312. A user alert mechanism 1312 is a device or component that can generate a user-detectable signal (e.g., a visual, audible, and/or tactile signal); any mechanism that signals to and/or engages the attention of a user may be considered an alert mechanism. Non-limiting examples include a speaker 328, any device that produces sounds and/or vocal prompts 321, phone 409 speakers, displays, lights and/or vibrators, any device or component capable of generating light colors 1030, 1031, 1032, and/or 1033, an LED ring, speaker array 1215, and/or blue light and/or color LEDs 1211, 1246. An alert mechanism can include headphones and/or headsets connected to a mobile phone or other computing device. At least a user alert mechanism 1312 can be configured to send a directional alert to a user; a directional alert is any signal that indicates to a user in which direction they should focus, or toward which point they should turn their gaze.

“Still referring to FIG. 13, system 1300 contains a processing unit 1316, which can include any computing device described in this disclosure, such as processing unit 315 or the like. Processing unit 1316 is in communication with forward-facing camera 1304 and at least a user alert mechanism 1312, and can communicate with forward-facing camera 1304 and/or user alert mechanism 1312 using any wired or wireless connection and/or communication protocol as described in reference to FIGS. 1-12. Processing unit 1316 can be configured and/or designed to perform any method, method step, or sequence of method steps in any embodiment of this disclosure, in any order and with any degree of repetition. Processing unit 1316 can be configured to repeat a single step or sequence until a desired outcome is reached, and outputs of iterative repetitions may be used as inputs to subsequent repetitions. Processing unit 1316 can perform any step or sequence described in this disclosure in parallel, including simultaneously or substantially simultaneously performing a step two or more times using two or more processor cores, parallel threads, or other processing units; tasks may be divided between parallel threads or processes according to any protocol that allows division of tasks among iterations. After reading the entirety of this disclosure, persons skilled in the art will recognize the various ways in which steps, sequences, processing tasks, and/or data can be divided, shared, or otherwise dealt with using iteration and/or parallel processing.

“With continued reference to FIG. 13, system 1300 also includes a screen location to spatial location map 1320 operated on processing unit 1316. A screen location to spatial location map 1320, as described in this disclosure, is a data structure linking locations on digital screen 1308 to locations within the field of vision; it allows the system to retrieve a spatial location directly from a screen location without the need for computer-vision or classification tasks such as object identification, edge detection, and the like. As used in this disclosure, a spatial location may refer to a place in three-dimensional space, including without limitation a location defined in a Cartesian coordinate system, a three-dimensional polar coordinate set, and/or vectors in a three-dimensional vector space, as well as a location in a projection from three-dimensional space onto two dimensions, such as a two-dimensional Cartesian and/or vector-based coordinate system, and/or a vector direction. Any structure useful in retrieving one datum given another may serve as the data structure, including databases, key-value stores and tables, vector and/or array structures, hash tables, and so on; as an example, cell identifiers or pixel IDs can be used to retrieve spatial locations.
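
As a minimal illustrative sketch, such a map can be a plain lookup table keyed by a screen cell identifier. The cell names and bearing values below are assumptions chosen only to show the direct screen-to-spatial lookup with no intervening vision step.

```python
# Minimal sketch of a screen-location-to-spatial-location map: a lookup
# table keyed by screen cell identifier. Cell names and bearings are
# illustrative assumptions, not values defined by this disclosure.
SCREEN_TO_SPATIAL = {
    "left":   {"bearing_deg": -30.0, "description": "adjacent lane, left"},
    "center": {"bearing_deg":   0.0, "description": "own lane, ahead"},
    "right":  {"bearing_deg":  30.0, "description": "adjacent lane, right"},
}

def spatial_location_for(cell_id: str) -> dict:
    """Retrieve a spatial location directly from a screen cell identifier,
    with no object detection or classification step in between."""
    return SCREEN_TO_SPATIAL[cell_id]
```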

“Still referring to FIG. 13, processing unit 1316 and/or forward-facing camera 1304 may divide digital screen 1308 into multiple sections or cells for purposes of mapping screen locations to spatial locations using screen location to spatial location map 1320, and/or for generation of alerts as discussed in more detail below. Processing unit 1316 and/or forward-facing camera 1304 can divide digital screen 1308 into multiple cells or bins, which may represent regions bordering each other and may be rectangular, hexagonal, or any other tessellation; each cell and/or bin may correspond to an identifier from a plurality of cell identifiers. The location of a modified pixel may be indicated, for example, by its coordinates; alternatively or additionally, a cell or bin identifier may be used to indicate the cell and/or bin in which a particular pixel or plurality of pixels is located.
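
For illustration, a rectangular tessellation and the pixel-to-cell lookup it implies can be sketched as follows; the 3x2 grid is an assumption, and any other tessellation could be substituted.

```python
# Sketch of dividing a digital screen into a rectangular grid of cells and
# resolving a pixel coordinate to a cell identifier. The 3x2 grid is an
# illustrative assumption.
def cell_identifier(x: int, y: int, width: int, height: int,
                    cols: int = 3, rows: int = 2) -> int:
    """Return a cell index in [0, cols*rows) for pixel (x, y)."""
    col = min(x * cols // width, cols - 1)
    row = min(y * rows // height, rows - 1)
    return row * cols + col
```

The returned index could then serve as the key into a screen-location-to-spatial-location map such as the one sketched above.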

“Alternatively or additionally, and continuing to refer to FIG. 13, division of digital screen 1308 may include identification of one or more regions of interest within the digital screen and division of the digital screen into such regions. As a non-limiting example, a central section of digital screen 1308 may be identified as a region of high importance. As a further non-limiting example, one or more regions of secondary importance may be identified, which may correspond, for instance, to lanes next to the vehicle; regions of secondary importance might have a higher threshold to trigger an alert, a lower or different degree of urgency or escalation, and/or different triggering or escalation criteria than regions of high importance. One or more regions of low importance can also be identified, including without limitation regions at the right and left edges of the digital screen; these regions may correspond to objects along the roadside, such as trees, buildings, and pedestrians on sidewalks. Regions of low importance might have a still higher threshold to trigger an alert, a lower or different degree of urgency or escalation, and/or different triggering or escalation criteria, and alerts might not be generated for such regions prior to object classification. After reviewing this disclosure in its entirety, persons skilled in the art will recognize that there may be multiple levels of importance for different regions or sections of digital screen 1308, that regions of any particular level of importance may consist of a single region or multiple regions, and that any location in digital screen 1308 may correspond to a certain degree of importance for detection and alerting.
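
A hedged sketch of per-region alert criteria follows; the region names, fractions, and the choice to gate low-importance regions on classification are assumptions used only to illustrate the escalation idea described above.

```python
# Illustrative per-region alert criteria; thresholds and gating rules are
# assumptions chosen to show differing urgency by region importance.
REGION_ALERT_RULES = {
    "high":      {"changed_pixel_fraction": 0.02, "requires_classification": False},
    "secondary": {"changed_pixel_fraction": 0.05, "requires_classification": False},
    "low":       {"changed_pixel_fraction": 0.10, "requires_classification": True},
}

def should_alert(region_importance: str, changed_fraction: float,
                 object_classified: bool) -> bool:
    """Decide whether a detected change in a region warrants an alert."""
    rule = REGION_ALERT_RULES[region_importance]
    if rule["requires_classification"] and not object_classified:
        return False
    return changed_fraction >= rule["changed_pixel_fraction"]
```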

“With continued reference to FIG. 13, forward-facing camera 1304 and/or processing unit 1316 may alternatively or additionally be configured to detect one or more features within the field of vision and to divide digital screen 1308 according to the identified features. A feature could include lanes, divisions between lanes, the sides of the road or the vehicle’s right-of-way, or any other type of feature.

“Still referring to FIG. 13, processing unit 1316 and/or forward-facing camera 1304 can be calibrated to a vehicle position to determine the relative positioning and orientation of forward-facing camera 1304 with respect to the road. Calibration can be done in conjunction with, or during, any of the method steps and/or processes described in this disclosure. Calibration can be performed with respect to a vanishing point, as described in more detail below; alternatively or additionally, calibration can be performed using features in the field of vision other than a vanishing point.

“With continued reference to FIG. 13, processing unit 1316 and/or forward-facing camera 1304 can attempt to determine the vanishing point of a road to be used as a reference. A vanishing point (VP) is a place in a camera image where all parallel lines appear to converge. The VP can be used to perform perspective transformations of the road, and may also be used to define a horizon, allowing image processing to eliminate the sky. One or more types of calculation can be used to compute the VP. Edge detection is one approach: edge-based methods try to exploit high-contrast edges in an image and are predicated on the presence of predictable straight elements such as telephone lines or lane markings. Another approach uses texture gradients and sliding windows across an image. Yet another approach uses Haar features in an image to locate the road, which is very similar to popular face detection methods. Texture-based methods can produce better results than simple edge-based methods in some instances, but they may be more computationally cumbersome.
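
For the edge-based family of approaches mentioned above, a rough sketch using Canny edges plus a probabilistic Hough transform and pairwise line intersections is shown below. The threshold, line-length, and gap parameters are illustrative assumptions, and this is only one of several possible VP estimators.

```python
import cv2
import numpy as np

def estimate_vanishing_point(gray: np.ndarray):
    """Rough vanishing-point estimate from intersections of detected line
    segments (edge-based approach). Parameter values are assumptions."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    if lines is None:
        return None
    segs = [l[0] for l in lines][:50]   # cap pair count for speed
    points = []
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            x1, y1, x2, y2 = segs[i]
            x3, y3, x4, y4 = segs[j]
            d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
            if abs(d) < 1e-6:
                continue  # near-parallel segments have no useful intersection
            px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
            py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
            points.append((px, py))
    if not points:
        return None
    # Median intersection is a robust estimate of where parallel lines converge.
    return tuple(np.median(np.array(points), axis=0))
```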

“Still referring to FIG. 13, a middle ground between texture-based and edge-based methods may be achieved by using a region-based algorithm with road region models. The method may use a triangular or trapezoidal model representing the expected shape and location of the road. The trapezoidal or triangular model can be used to calculate the average RGB pixel value of the road; from this RGB value, a customized saliency map is created for an image, such as one captured from the video feed, in which pixels matching the road color map to 0 and the remainder of the image is scaled from 0 to 255 according to Euclidean distance from the average road color. This creates a grayscale saliency image in which, after normalization, the road appears black. The grayscale saliency image may be binarized in a subsequent step: processing unit 1316 and/or forward-facing camera 1304 may apply an Otsu threshold to the saliency image as well as k-means clustering. The Otsu threshold can produce a binary image, while the k-means algorithm with k=4 might produce a segmented picture of four regions, which can be binarized by setting the road region to 0 and the remainder of the image to 1. A logical “and” operation can then be used to produce an image for the subsequent step. Otsu thresholding may be a quick but liberal method for determining the road area, while the k-means approach may be more conservative and slower; the k-means algorithm is the most computationally intensive part of the segmentation. The number of iterations the algorithm runs can be altered to adjust the speed of the k-means computation: experimentally, five iterations yielded good results, and two iterations are possible if speed or computational efficiency is desired, with quality maintained while speed and efficiency are increased. Processing unit 1316 and/or forward-facing camera 1304 might also omit the k-means algorithm; the Otsu method alone can work, but may cause overshooting.
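
The region-based segmentation just described can be sketched roughly as follows. The trapezoid proportions, the iteration count, and the choice of the darkest k-means cluster as "road" are illustrative assumptions, not parameters specified by this disclosure.

```python
import cv2
import numpy as np

def road_saliency_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Sketch of region-based road segmentation: color-distance saliency from
    an assumed road trapezoid, then Otsu and k-means masks combined with a
    logical 'and'."""
    h, w = frame_bgr.shape[:2]

    # Assumed trapezoidal road model in the lower-middle of the frame.
    trap = np.array([[(int(0.35 * w), int(0.95 * h)), (int(0.65 * w), int(0.95 * h)),
                      (int(0.55 * w), int(0.70 * h)), (int(0.45 * w), int(0.70 * h))]],
                    dtype=np.int32)
    model = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(model, trap, 255)
    mean_road_color = cv2.mean(frame_bgr, mask=model)[:3]

    # Grayscale saliency: Euclidean distance from the average road color, scaled 0-255.
    dist = np.linalg.norm(frame_bgr.astype(np.float32)
                          - np.array(mean_road_color, dtype=np.float32), axis=2)
    saliency = cv2.normalize(dist, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Otsu threshold: fast, liberal estimate of the road area (road -> 255).
    _, otsu_mask = cv2.threshold(saliency, 0, 255,
                                 cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # k-means with k=4 and few iterations: slower, more conservative estimate.
    samples = saliency.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS, 5, 1.0)
    _, labels, centers = cv2.kmeans(samples, 4, None, criteria, 1,
                                    cv2.KMEANS_RANDOM_CENTERS)
    road_cluster = int(np.argmin(centers))          # darkest cluster ~ road
    kmeans_mask = (labels.reshape(h, w) == road_cluster).astype(np.uint8) * 255

    # Logical "and" of both estimates, as described above.
    return cv2.bitwise_and(otsu_mask, kmeans_mask)
```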

Referring now to FIG. 14, an exemplary sequence of images generated during road segmentation is illustrated. A normalized saliency image (b) can be created from an original image (a). A binarized image (c) may be created by applying Otsu’s threshold algorithm, and a modified binarized image (d) may be created by applying k-means segmentation to the binarized image (c). The image may be further modified (e) using the inverted Otsu or k-means result, and modified again (f) through morphological operations. Significant contours may then be extracted (g), and the results of (a)-(g) may be overlaid on the original image (h).

“In an embodiment, and still referring to FIG. 14, a method of VP detection could include Hough transforms along the edges of a road segment, as well as calculating texture gradients. FIG. 15 illustrates an alternative approach, in which a road segmentation algorithm creates a triangular shape converging toward the vanishing point. The x-axis coordinate of the VP can be defined as the column position of the image containing the most road-representing pixels, and the y-axis coordinate of the VP can be the position of the first road-representing pixel found starting at the top quarter of the image. Since the VP is unlikely to change significantly over the course of a trip, a calibration phase may be used to determine the average position of the vanishing point. An embodiment may use this sense of the camera’s position relative to the road to aid in the detection of lanes and objects of concern.
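
A minimal sketch of that coordinate heuristic, operating on a binary road mask such as the one produced above, might look like the following; starting the vertical scan at the top quarter of the image is taken directly from the description above.

```python
import numpy as np

def vanishing_point_from_road_mask(road_mask: np.ndarray) -> tuple[int, int]:
    """VP heuristic: x is the column with the most road pixels; y is the first
    road pixel found scanning down from the top quarter of the image."""
    h, w = road_mask.shape[:2]
    column_counts = (road_mask > 0).sum(axis=0)
    vp_x = int(np.argmax(column_counts))

    vp_y = h - 1
    for row in range(h // 4, h):          # start at the top quarter
        if road_mask[row, vp_x] > 0:
            vp_y = row
            break
    return vp_x, vp_y
```

Averaging this estimate over a calibration window of frames would give the stable VP position mentioned above.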

Referring again to FIG. 13, forward-facing camera 1304 and/or processing unit 1316 can identify one or more regions of interest, as discussed above, to isolate content immediately ahead of the car from peripheral information in adjacent lanes; a smaller region of interest can allow for faster collision-detection processing. Forward-facing camera 1304 and/or processing unit 1316 can identify regions of interest using one or more objects in the video feed. For example, forward-facing camera 1304 may determine the location of lane markings along the road; this may involve a multi-step process, such as estimating the area of the road, estimating the vanishing point of the camera, and/or performing perspective transformations.

“Still referring to FIG. 13, lane detection can be done using any suitable method, including identifying lanes on a road using one or more preprocessing steps and then performing a perspective transformation to create an aerial view; further detection can then be made on the perspective-shifted image in an embodiment. Color segmentation may be used to identify yellow and white lanes, and a Canny threshold may be used to create a detailed edge map. The Canny edge map might capture more detail than is necessary; to remove clutter, potholes, and other details, anything above the VP may be deleted, and any information in the road area segmented as described above can also be deleted. Because the color of well-painted lanes can differ sufficiently from that of the road, it may be possible to preserve them. In an embodiment, a Canny threshold may capture only the edges of lanes; an optional color segmentation scheme can be used to improve robustness, in which the image is converted into an HSV color space and pixels in a yellow or white range are extracted. These color segments can then be combined with the edge image, such as by using a logical “or” operator.
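
The preprocessing just described can be sketched as follows. The HSV ranges and Canny thresholds are illustrative assumptions; the masking above the VP and inside the segmented road area follows the description above.

```python
import cv2
import numpy as np

def lane_edge_map(frame_bgr: np.ndarray, vp_y: int, road_mask: np.ndarray) -> np.ndarray:
    """Canny edge map with the sky (above the VP) and the segmented road
    interior removed, combined with white/yellow color segments via a
    logical 'or'. HSV ranges and Canny thresholds are assumptions."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 60, 180)
    edges[:vp_y, :] = 0                   # drop clutter above the vanishing point
    edges[road_mask > 0] = 0              # drop the interior of the segmented road

    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    white = cv2.inRange(hsv, (0, 0, 190), (179, 40, 255))
    yellow = cv2.inRange(hsv, (15, 80, 120), (35, 255, 255))

    color_segments = cv2.bitwise_or(white, yellow)
    return cv2.bitwise_or(edges, color_segments)
```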

“With continued reference to FIG. 13, a perspective transformation may be performed relative to the VP, creating an aerial view of the image. In an embodiment, the aerial view may be processed using conventional image processing methods. A projection histogram onto the x-axis may be made from the number of pixels representing each lane in the transformed view, and local maxima of this histogram may be identified. To remove local maxima that are too close together, a hard-coded constant may be used; this may help to eliminate multiple detections of the same lane. The position of each local maximum can be saved and added to a rolling average of the lane’s position. The area surrounding each local maximum of the perspective image can be extracted, and all pixels belonging to the lane may be captured and stored in a restricted queue data structure, which pops an element from the queue’s front once the queue exceeds a predefined limit. In one embodiment, the limited queue serves as a way to preserve information across multiple frames.
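
A rough sketch of the histogram-and-bounded-queue bookkeeping follows; the queue length, peak-separation constant, and the half-width of the strip sampled around each peak are assumptions.

```python
from collections import deque
import numpy as np

# Bounded queue preserving lane pixels across frames; the maximum length
# is an illustrative assumption. Old entries are dropped automatically.
lane_pixel_history = deque(maxlen=500)

def update_lane_positions(aerial_lane_mask: np.ndarray,
                          min_peak_separation: int = 40,
                          strip_half_width: int = 10) -> list[int]:
    """Project lane pixels onto the x axis, keep local maxima that are not
    too close together, and store pixels near each peak in the bounded queue."""
    histogram = (aerial_lane_mask > 0).sum(axis=0)

    peaks = []
    for x in range(1, len(histogram) - 1):
        is_local_max = histogram[x] >= histogram[x - 1] and histogram[x] >= histogram[x + 1]
        if is_local_max and histogram[x] > 0:
            if not peaks or x - peaks[-1] >= min_peak_separation:
                peaks.append(x)

    for peak in peaks:
        x0 = max(0, peak - strip_half_width)
        ys, xs = np.nonzero(aerial_lane_mask[:, x0:peak + strip_half_width])
        for y, x in zip(ys, xs + x0):
            lane_pixel_history.append((x, y))
    return peaks
```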

“Still referring to FIG. 13, the accumulated lane pixels may be used as data for a least-mean-squares algorithm. To approximate lanes, the least-mean-squares algorithm can be used with first-degree polynomials and/or a line-of-best-fit methodology; alternatively or additionally, second-degree estimations can be used. Second-degree estimations may capture curvature within lanes that first-degree fits cannot, but may take slightly longer and/or be more computationally costly to compute. In testing, the lane data had a greater influence on first-degree polynomial results than noise did.
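
As an illustrative sketch, the least-squares fit over the queued lane pixels can be expressed with an ordinary polynomial fit; fitting x as a function of y is an assumption made here because lanes are close to vertical in an aerial view.

```python
import numpy as np

def fit_lane(points: list[tuple[int, int]], degree: int = 1) -> np.poly1d:
    """Least-squares lane approximation from accumulated lane pixels.
    degree=1 gives a line of best fit; degree=2 also captures curvature
    at somewhat higher computational cost."""
    xs = np.array([p[0] for p in points], dtype=np.float64)
    ys = np.array([p[1] for p in points], dtype=np.float64)
    coeffs = np.polyfit(ys, xs, degree)   # x as a function of y for near-vertical lanes
    return np.poly1d(coeffs)
```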

Referring now to FIG. 16, an exemplary process for finding lanes is illustrated. A modified image (a) may be used to create a Canny edge image (b); portions of this image, such as everything above the VP or the like, may be removed to create a masked Canny edge image (c). A perspective image showing Canny lanes (d) and a perspective view of the color-threshold image (e) can also be created; combining the results of (d) and (e) may produce a combined image of color-threshold and edge lanes (f). A second-order best-fit-line process can then be applied to the lanes, as shown in (g), and (h) illustrates the transformed computed lanes derived from the best-fit lines in (g). An embodiment of the lane detection method described above may be capable of finding multiple lanes; an embodiment may limit detection to the two lanes closest to the VP and on opposite sides of it, restricting lane detection to those lanes most relevant for detection of potential hazards and generation of directional alerts.

“Referring again to FIG. 13, lane identification can be used to determine one or more regions of importance. An area of digital screen 1308 covering the lane occupied by the vehicle carrying forward-facing camera 1304 may be identified as a region of high importance, in which alerts are triggered more quickly and/or with greater urgency; such an area may include a substantially centrally located trapezoid on the digital screen. As a further non-limiting example, one or more regions of secondary importance may be identified; regions of secondary importance might have a higher threshold to trigger an alert, a lower or different degree of urgency or escalation, and/or different triggering or escalation criteria. As another non-limiting example, one or more regions of low importance can be identified, including without limitation regions to the right and left edges of the digital screen; these regions may correspond to objects such as pedestrians on sidewalks or trees along the roadside. Regions of low importance might have a still higher threshold to trigger an alert, a lower or different level of urgency or escalation, and/or different triggering or escalation criteria.

“Referring again to FIG. 13, system 1300 contains a motion detection analyzer 1324 operating on processing unit 1316; motion detection analyzer 1324 may include any hardware and/or software module. Motion detection analyzer 1324 is configured to detect a rapid parameter change on digital screen 1308, determine a screen location on digital screen 1308 of the rapid parameter change, retrieve from screen location to spatial location map 1320 a spatial location based upon the screen location, and generate the directional alert using the spatial location.

“With continued reference to FIG. 13, system 1300 can include any other element described in this disclosure, whether included in any system or used in any manner described herein. System 1300 could include one or more biosensors 1328, which may include any of the biosensors described in FIGS. 1-12, including without limitation GSR and HRV sensors. System 1300 could include at least an audio input device 1332, which may include any audio input device discussed above with reference to FIGS. 1-12, including microphones. The system may include at least a user-facing camera 1336, which could include any camera described above with reference to FIGS. 1-12; user-facing camera 1336 could include a camera mounted on a mobile device such as a smartphone and/or cellphone, including a “selfie cam.”

“Referring now to FIG. 17, an exemplary method of using artificial intelligence to monitor, correct, and evaluate user attentiveness is illustrated. At step 1705, a motion detection analyzer 1324 operating on a processing unit 1316 captures, using forward-facing camera 1304, a video feed of the field of vision on a digital screen 1308; this may be implemented as described in FIGS. 1-13.

“At step 1710, and continuing to refer to FIG. 17, motion detection analyzer 1324 detects a rapid parameter change on the digital screen 1308. A rapid parameter change refers to a change in one or more pixels that exceeds a threshold number of pixels experiencing the change per frame rate and/or unit of time. As a non-limiting example, detecting a rapid parameter change may include comparing a first frame and a second frame of the video feed to determine whether a threshold number of pixels has changed with respect to at least one parameter between the first and second frames; the first and second frames can be consecutive and/or separated by one or several intermediate frames. The frequency at which motion detection analyzer 1324 samples frames to determine the likely degree of motion change may be chosen to capture any changes to which a user might need to respond; for example, a sample rate could be used that collects frames often enough to detect motions of pedestrians, cyclists, vehicles, and animals. A machine-learning process may determine the frame rate: for example, if object analysis or classification has been used to identify objects in similar video feeds, then motion and rates of pixel parameter change in those video feeds can be correlated, and this training data may be used to identify rates of pixel parameter change consistent with movement of classified objects; these rates can be used to determine a frame rate for motion detection analyzer 1324. The rate of change consistent with object motion can also be used to determine a threshold level of pixel changes, for example a threshold number of pixels with changed parameters, which may be used to detect rapid changes as described above. In an embodiment, detection of rapid changes may be analogous to human perception of movement or light change in peripheral vision, which is enough for the human eye to scan in the direction of the perceived change. Threshold levels, which may be derived using machine-learning, deep-learning, and/or neural-network processes as described above, can prevent small fluctuations of light or color from triggering alerts, as discussed in more detail below, while fluctuations consistent with possible movement of objects of concern can be detected and used to generate directional alerts.
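
The frame-comparison idea above can be sketched as simple frame differencing against a changed-pixel threshold. The per-pixel delta and the changed-fraction threshold below are assumptions; the text above contemplates learning such values from data.

```python
import cv2
import numpy as np

def rapid_parameter_change(prev_frame: np.ndarray, curr_frame: np.ndarray,
                           per_pixel_delta: int = 25,
                           changed_fraction_threshold: float = 0.02) -> bool:
    """Compare two frames (not necessarily consecutive) and report whether the
    number of pixels whose intensity changed by more than per_pixel_delta
    exceeds a threshold fraction of the screen. Both thresholds are
    illustrative assumptions."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    delta = cv2.absdiff(prev_gray, curr_gray)
    changed = np.count_nonzero(delta > per_pixel_delta)
    return changed / delta.size >= changed_fraction_threshold
```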

“Still referring to FIG. 17, the parameter changing in a rapid parameter change may include any parameter that a pixel might possess; the parameters to track, and the changes to those parameters that are significant, may be identified using machine-learning processes as described above to detect correlations between object motion and parameter changes. As a non-limiting example, a parameter can include at least one color value and/or an intensity value. The at least one parameter can also include multiple parameters, including without limitation a linear or other combination of several parameters derived using machine learning, deep learning, and/or neural network processes.

“With continued reference to FIG. 17, parameters may be detected and/or compared to detect rapid parameter changes; these could include parameters describing multiple pixels, such as parameters of geometric features on digital screen 1308. Processing unit 1316 might use feature detection to detect rapid parameter changes. For a static camera, features may move along epipolar lines, which intersect at the epipolar center. Processing unit 1316 can determine whether a shape having a feature set is moving in excess of a threshold amount and/or in a direction corresponding to intersection of an object with the vehicle or the vehicle’s path; the term “collision detection” may refer to detection of a change on digital screen 1308 corresponding to an object intersecting with the vehicle or the vehicle’s path, that is, a two-dimensional change on digital screen 1308 that satisfies conditions for generation of a directional alert. A motion vector may be used to track translational motion of a feature-identified shape; it may contain an n-tuple of values and may be stored in any data structure or data representation, allowing tracking of motion across digital screen 1308. A resizing vector may likewise be stored in any data structure and/or representation, allowing tracking of the change in size of a shape in digital screen 1308. Frame-to-frame comparisons and/or comparisons to thresholds may be used for either.
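
For illustration, one simple way to derive a translational motion vector and a scalar resizing factor from matched keypoints of a shape across two frames is sketched below; using the centroid displacement and the change in spread about the centroid is an assumption, not the only possible representation.

```python
import numpy as np

def motion_and_resizing(prev_pts: np.ndarray, curr_pts: np.ndarray):
    """Given matched keypoint coordinates (N x 2 arrays) for a shape in two
    frames, return a translational motion vector (centroid displacement) and
    a scalar resizing factor (growth of spread about the centroid). Either
    quantity can be compared frame to frame against thresholds."""
    prev_centroid = prev_pts.mean(axis=0)
    curr_centroid = curr_pts.mean(axis=0)
    motion_vector = curr_centroid - prev_centroid

    prev_spread = np.linalg.norm(prev_pts - prev_centroid, axis=1).mean()
    curr_spread = np.linalg.norm(curr_pts - curr_centroid, axis=1).mean()
    resizing = curr_spread / prev_spread if prev_spread > 0 else 1.0
    return motion_vector, resizing
```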

“In an embodiment, and with continued reference to FIG. 17, a motion vector and/or resizing vector may be used to calculate the time to collision (TTC) for an object based upon a parameter change. A number of parameter changes can be used to calculate numbers indicating degrees of change; processing unit 1316 might then weight features with scores corresponding to the calculated numbers. A region of interest, or a region containing matching features, can be broken down into multiple squares, and the median score of the features in each square may then be used to determine the score for that area. FIG. 6 shows a flow diagram illustrating an exemplary embodiment of a process flow for matching features and detecting parameter changes.

“Still referring to FIG. 17, any feature extractor and matcher can be used; for instance, a Binary Robust Invariant Scalable Keypoints (BRISK) detector and matcher can be used. This algorithm may require inspection of a predetermined number of feature sets, ns. An embodiment may give each feature a weighted score based on its motion along digital screen 1308, including its position relative to the top of the screen and the magnitude of its motion. One or more of the geometric models illustrated in FIGS. 19A-B may be used to determine features’ weighted scores; both models can be expressed using the following equation:

“W_i = (‖f_{i,t} − f_{i,t−1}‖ / D_m) · (sin(θ) + L(f_{i,t})) · 1/(2n)

where f_{i,t} and f_{i,t−1} are the coordinates of a matched feature in subsequent frames, D_m is the maximum distance by which a feature can be separated in the image, θ is the directional angle of the vector between f_{i,t−1} and f_{i,t}, n is the number of feature sets, and L( ) normalizes the score depending on how close the feature is to the bottom of the screen. There are many ways to implement L( ); one way is to use an exponential function of the feature’s y-axis coordinate, while a linear function may be used in a mobile app for faster computation. The two models differ in how θ is calculated. FIG. 19A calculates θ relative to the horizontal: the more vertically arranged the vector between matched features, the closer sin(θ) is to one. The concept is that features moving vertically down the screen may be given more weight; this model may also prioritize features moving toward the bottom of the screen, which is indicative of a location close to the driver’s seat. Looking only for vertically moving objects, however, may make it harder to catch adjacent objects that are frequently nearby and often innocuous, which motivates the model of FIG. 19B, in which items moving directly into the car’s lane are given heavier weights. In FIG. 19B, θ may be calculated by finding the vector between f_{i,t−1} and C_b, the center bottom point of the screen, and then the vector perpendicular to f_{i,t−1}C_b; θ is measured relative to that perpendicular. In the first model, all angles above the horizontal may be excluded; in the latter model, angles extending beyond the perpendicular vector may be excluded.
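A sketch of the FIG. 19A variant of this weighting, under the reconstruction of the equation given above, might be implemented as follows; the linear form of L( ) and the exemption of upward motion are assumptions drawn from the surrounding description.

```python
import math

def feature_weight(f_prev, f_curr, d_max, n_s, screen_height):
    """Hedged sketch of the FIG. 19A weighting:
    W_i = (||f_t - f_{t-1}|| / D_m) * (sin(theta) + L(f_t)) * 1/(2*n_s),
    with theta measured against the horizontal and L() a linear function of the
    feature's y coordinate (image y grows downward, so larger y = nearer bottom)."""
    dx = f_curr[0] - f_prev[0]
    dy = f_curr[1] - f_prev[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return 0.0
    if dy < 0:
        return 0.0                             # exempt vectors pointing above the horizontal
    theta = math.atan2(abs(dy), abs(dx))       # angle relative to the horizontal
    l_term = f_curr[1] / screen_height         # linear L(): 0 at top, 1 at bottom
    return (dist / d_max) * (math.sin(theta) + l_term) / (2 * n_s)
```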

Referring now to FIG. 20, a two-dimensional Gaussian kernel may be placed at f_{i,t} for each pair of matched features. The kernel may initially be composed of values between 0 and 255. The kernel may be divided by the number of feature sets used and multiplied by the weight computed using FIGS. 19A-B above; this may be expressed, for each feature i, as a contribution of (W_i / n_s) · G(x − f_{i,t}), where G denotes the two-dimensional Gaussian kernel.
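Under the reading above, the weighted Gaussian accumulation might be sketched as follows; the kernel width sigma is an illustrative assumption, and `matches` is assumed to be in the form returned by the BRISK sketch earlier.

```python
import numpy as np

def accumulate_heatmap(shape, matches, weights, sigma=15.0):
    """Place a 2-D Gaussian kernel (peak 255) at each matched feature's current
    position f_{i,t}, scale it by W_i divided by the number of feature sets, and
    sum the contributions into a single heat map of screen activity."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    heatmap = np.zeros((h, w), dtype=np.float64)
    n_s = max(len(matches), 1)
    for (_, (x, y)), w_i in zip(matches, weights):
        g = 255.0 * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
        heatmap += g * w_i / n_s
    return heatmap
```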

Summary for “Methods, systems, and methods for using artificial intelligence in order to monitor, correct, or evaluate user attentiveness”


“Over the past several years, basic telematics have been introduced to encourage safe driving through Usage Based Insurance (UBI). The embodiments described in this disclosure may be viewed as an evolution of UBI telematics systems: they combine analytics of telematics data, driver behavior, and performance to calculate driving risk scores, and they provide personalized, real-time feedback to drivers to help prevent distractions from causing dangerous situations.

“Systems described herein can evaluate various factors, including but not limited to: (a) the attentiveness of a person while performing a task or communicating with another person or machine; (b) an estimated level of risk associated with the surrounding environment; and (c) a margin of attention between the level that is available and the level required for the task or communication. In some systems described herein, such evaluation can be used to generate useful feedback about the behavior of the person being observed; an evaluation may be used to provide suggestions to the observed person to help them change their behavior to minimize or reduce risk. Artificial intelligence (AI) can be used to transform observed patterns into behavior profiles, refine them over multiple observations, and/or create group statistics for similar situations. One example application of the methods described herein is driving risk profiling and the prevention of accidents due to distracted and drowsy driving; these methods may perform driving risk profiling and accident prevention using machine vision and/or AI to create digital assistants with copilot expertise. This disclosure may benefit all drivers, not just teenage or elderly drivers, and may also benefit fleet management companies, car insurance providers, ride-sharing and rental car companies, and healthcare providers seeking to enhance, tune, and/or personalize services.

“Embodiments may be used for driver attention monitoring and/or smart driver monitoring in order to address the growing problem of unsafe driving. They can cover a range of attention issues, from distracted driving to drowsiness on boring, long stretches of road. Mobile devices and their apps are not designed to prevent distraction: they ignore whatever stress level a driver may be experiencing and demand full attention at split-second notice. Even drivers with sophisticated driver assistance may succumb to drowsiness, another major cause of fatal accidents.

“Embodiments described herein may also provide an electronic driving record (EDR) implementation. Current monitoring solutions do not support secure data-access rights, and they lack configuration flexibility and real-time feedback mechanisms. EDR allows drivers to adjust access rights dynamically during driving or at the end of a trip, so that they can specify who has access to which EDR data and when; EDR data may be used to create precise driving behavior models, record health-related measures, and log the driver’s attention levels. In contrast to current systems, these embodiments may address privacy concerns associated with UBI data collection.
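As a purely hypothetical sketch of what an EDR entry with driver-controlled access rights could look like, consider the following; all field and role names are invented for illustration and are not taken from the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class EDREntry:
    trip_id: str
    timestamp: float
    attention_level: float            # logged driver attention margin
    health_metrics: Dict[str, float]  # e.g. heart-rate variability, GSR
    events: List[str]                 # inattention or near-miss events
    # Access rights the driver can grant or revoke during or after the trip.
    access_rights: Dict[str, List[str]] = field(default_factory=lambda: {
        "driver": ["*"],
        "insurer": ["trip_id", "events"],
        "fleet_manager": ["trip_id", "attention_level", "events"],
    })

    def view_for(self, role: str) -> dict:
        """Return only the fields the given role is allowed to see."""
        allowed = self.access_rights.get(role, [])
        data = self.__dict__.copy()
        data.pop("access_rights")
        if "*" in allowed:
            return data
        return {k: v for k, v in data.items() if k in allowed}
```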

“Referring now to FIG. 1, a graphical illustration 100 of the relationship between driver performance and emotional arousal is shown. The illustration shows that driving performance peaks under ‘normal’ conditions, represented by a ‘safe zone’ in the central part of the illustration. It also shows that fatigue and drowsiness can lead to decreased performance and an inability to respond to difficult situations, while excessive excitement, anxiety, and fear may likewise reduce the ability to drive correctly. A ‘driving risk’ level, represented as a horizontal line running from the ‘too relaxed’ label to the ‘too excited’ label, sometimes moves up or down very quickly, changing the driver’s attention ‘margin.’ Systems in this disclosure can continuously estimate the driver’s attention margin; in some embodiments, the system may provide feedback to help the driver adjust behavior and avoid dangerous situations. Graphical illustration 100 may illustrate the Yerkes-Dodson law, which psychologists use to link performance and arousal. According to this exemplary illustration, humans are expected to drive most effectively when they are in the ‘safe zone,’ free from distractions and excessive excitement.

Referring now to FIG. 2, a process flow chart 201 illustrates a number of input and/or analysis steps 203-209 that may be performed by an AI 211, including without limitation an AI integrated into systems as discussed in further detail below, and at least one output step 215 that AI 211 may perform as described below. For illustrative purposes, embodiments of AI 211 may act in a way analogous to a smart passenger or copilot that alerts a driver to hazards or other items of concern on the road ahead. AI 211 may receive outside-condition inputs 205 indicating one or more conditions and phenomena outside the vehicle being operated by the driver, including without limitation weather conditions, road conditions, and behavior of other drivers, pedestrians, bicyclists, and/or animals on the road or in any other area through which the vehicle navigates; in collecting such inputs, AI 211 may act in a manner analogous to ‘watching the scene’ around and/or in front of the vehicle and driver. AI 211, and/or a system or device implementing AI 211, may perform one or more assessments to determine a level of risk 209 as a function of outside-condition inputs 205, for instance using processes for risk assessment described elsewhere in this disclosure. AI 211 may also receive driver-related inputs 203 using one or more sensors, cameras, or other means of detecting information about the driver; an AI 211 receiving driver-related inputs 203 may be characterized as using such inputs to monitor the driver. AI 211 may perform one or more analyses using driver-related inputs 203 to determine one or more facts about the driver’s current or likely future performance; for example, AI 211 may determine the driver’s attention level 209 using such inputs. AI 211 may combine the above inputs and/or analysis results with one or more elements of stored information, and may use inputs, analyses 203-209, and/or stored data to generate one or more outputs for the driver. For instance, AI 211 may interact with the driver 215 to inform him or her of results of input and/or analytical processes 203-209, or of processes using and/or comparing stored information with such inputs and analyses. AI 211 may use, or be combined with, machine-learning processes to adapt monitoring and reasoning to the driver’s habits and preferences.

“Referring now to FIG. 3, an AI 211 as described in FIG. 2 may be used to make one or more decisions regarding a driver 305 in a system 307 as a function of inputs and/or devices 308-313; AI 211 may be implemented on a processing unit 315. Processing unit 315 may include any computing device described in this disclosure, or any combination of such computing devices. Processing unit 315 may be connected to a network as described in this disclosure; the network may be the Internet. Processing unit 315 may include, for example, a first server or cluster of servers at a first location and a second server or cluster of servers at a second location. Processing unit 315 may include computing devices devoted to specific tasks; for example, one computing device or cluster of computing devices may operate queues as described below, while a separate computing device or cluster may store and/or produce dynamic data, as explained in more detail below. Processing unit 315 may include one or more computing devices dedicated to data storage, security, and traffic distribution for load balancing. Processing unit 315 may distribute one or more computing tasks, as described below, across a plurality of computing devices of processing unit 315, which may operate in parallel, in series, redundantly, or in any other manner used to share tasks or memory among computing devices. A ‘shared nothing’ implementation of processing unit 315 is possible, that is, an architecture in which data is stored at the worker; this may allow for scaling of the system and/or of processing unit 315. Processing unit 315 may include a mobile and/or portable computing device such as a smartphone, tablet, or laptop, and/or a computing device mounted on or inside a vehicle.

“With continued reference to FIG. 3, processing unit 315, and any device described in this disclosure as usable as processing unit 315, may be configured to repeat a single step or sequence of steps until a desired outcome is achieved; iterative repetitions may reduce or decrement one or more variables and/or divide a larger task into smaller, iteratively addressed tasks. Processing unit 315, or any such device, may perform any step, sequence of steps, or task described in this disclosure in parallel, including performing multiple steps simultaneously using parallel threads, processor cores, or other resources. Division of tasks between parallel threads and/or processes may be performed according to any protocol suitable for dividing tasks among iterations. Upon reading the entirety of this disclosure, persons skilled in the art will be aware of the various ways in which steps, sequences of steps, processing tasks, and/or data may be subdivided, shared, or otherwise dealt with using iteration and/or parallel processing.

“Still referring to FIG. 3, system 307 may include a camera 308 directed at the driver. Camera 308 may include any device capable of capturing optical images of the driver using light on and/or off the visible spectrum. Processing unit 315 may receive and/or classify data about driver 305 via camera 308, including data describing driver 305 and the orientation of the driver’s eyes and/or face. To analyze the driver’s attention, face and gaze direction may be recorded and/or classified according to rotation angles (yaw, pitch, roll, and lateral eye movement) 309 and mapped to regions such as the road ahead, left mirror, right or rearview mirror, instrument cluster, passenger seat, center dash, or a phone in hand; a sketch of such a mapping is shown below. These data can be used to model driver behavior 310 using AI and/or machine-learning techniques described in this disclosure. Vehicle dynamics data, including acceleration, speed, and rotations per minute (RPM), may be used in a vehicle evaluation 311. These data may be collected from any number of sensors; processing unit 315 may communicate with the sensors using any wired or wireless protocol, and one or more sensors may be embedded in processing unit 315 or in an associated smartphone. The sensors may include at least one road-facing camera capable of detecting objects and providing distance evaluation. Vehicle dynamics data may also be received and/or collected from a vehicle bus, such as an OBD II or CAN bus. Processing unit 315, or an AI implemented thereon, may further receive dynamic trip information 312, including without limitation traffic and weather information, via a network such as the Internet.
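A hedged sketch of mapping yaw and pitch angles to the gaze regions listed above follows; the angle thresholds and sign conventions are illustrative assumptions that would need per-installation calibration.

```python
# Hypothetical gaze-zone classification from head orientation.
# Convention assumed here: negative yaw = head turned left, negative pitch = head tilted down.
def classify_gaze_zone(yaw_deg: float, pitch_deg: float) -> str:
    if abs(yaw_deg) < 10 and abs(pitch_deg) < 10:
        return "road ahead"
    if yaw_deg < -25:
        return "left mirror"
    if yaw_deg > 35:
        return "passenger seat"
    if 15 < yaw_deg <= 35 and pitch_deg > -10:
        return "rearview mirror"
    if pitch_deg < -20:
        return "phone in hand"
    if -20 <= pitch_deg < -10:
        return "instrument cluster or center dash"
    return "other"
```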

“With continued reference to FIG. 3, processing unit 315 may perform any of the classification, analysis, and/or processing methods described herein using machine learning. A machine-learning process automates the use of a body of data known as ‘training data’ and/or a ‘training data set’ to generate an algorithm, performed by a computing device or module, that produces outputs given input data. This contrasts with non-machine-learning programs, in which the commands to be executed are determined in advance by a user and written in a programming language.

“Continuing to refer to FIG. 3, training data is data containing correlations that a machine-learning process may use to model relationships between two or more categories of data elements. Training data may include multiple data entries, each of which may represent a set of data elements that were received, recorded, and/or generated together; data elements may be correlated by shared existence in a given data entry, by proximity in a given data entry, or the like. Multiple data entries in training data may evince one or more trends in correlations between categories of data elements: for instance, a higher value of a first data element belonging to a first category may tend to correspond to a higher value of a second data element belonging to a second category, indicating a possible proportional or other mathematical relationship between the categories. Multiple categories of data elements may be related in training data according to various correlations, which may indicate causative and/or predictive links between categories; such relationships may be modeled as mathematical relationships by machine-learning processes. Training data may be formatted and/or organized by category, for example by associating data elements with descriptors corresponding to categories of data elements. Training data may include data entered on standardized forms by persons or processes, such that entry of a given data element into a given field in a form may map that element to a descriptor of a category. Training data may be linked to descriptors of categories using tags, tokens, or other data elements; for instance, training data may be provided in fixed-length formats, formats linking data positions to categories such as comma-separated values (CSV), and/or self-describing formats such as XML, enabling devices or processes to detect categories of data.

“Alternatively or additionally, and still referring to FIG. 3, training data may include one or more elements that are not categorized; that is, the data may not be formatted or may not contain descriptors for some elements. Machine-learning algorithms and/or other processes may sort training data according to one or more categorizations using, for instance, natural language processing algorithms, tokenization, detection of correlated values in raw data, and the like. As a non-limiting example, in a corpus of text, phrases making up an n-gram may be identified where a statistically significant number of n-grams contain compound words, such as nouns modified by other nouns; such an n-gram may be categorized as an element of language, such as a ‘word,’ and tracked similarly to single words, generating a new category as a result of statistical analysis. Similarly, in a data entry including some textual data, a person’s name may be identified by reference to a list, dictionary, or other compendium of terms, permitting ad-hoc categorization by machine-learning algorithms and/or automated association of data in the entry with descriptors or the like. Automated categorization of data entries may enable the same training data to be usable by multiple machine-learning algorithms, as further described below. Training data used by processing unit 315 may correlate any input data described in this disclosure to any output data described in this disclosure. As a non-limiting illustration, a person may observe a number of drivers performing vehicular maneuvers, such as driving on a training track and/or on public streets, and the resulting observations may be combined with elements of driver-related data and/or vehicular dynamics data by a computing device such as processing unit 315 to create one or more data entries of training data for machine-learning processes. Training data may also include sensor data recorded before, during, and/or after accidents, combined with information about the circumstances of the accidents, such as driver-related and/or vehicular dynamics data, to create one or more training data entries for use in machine-learning processes. Training data may further correlate data about conditions outside a vehicle, such as road conditions and pedestrian or bicyclist behavior, with information regarding risk levels; such data may be collected using systems such as those described herein, for example data recorded before, during, and after collisions or other accidents. Additional examples of training data and/or correlations are provided throughout this disclosure; those skilled in the art will be aware of many examples of training data that may be used consistently with the present disclosure.

“Still referring to FIG. 3, processing unit 315 may be designed and configured to create a machine-learning model using techniques for development of linear regression models. Linear regression models may include ordinary least squares regression, which aims to minimize the square of the difference between predicted outcomes and actual outcomes according to an appropriate norm for measuring such a difference (e.g., a vector-space distance norm); coefficients of the resulting linear equation may be modified to improve minimization. Linear regression models may include ridge regression methods, in which the function to be minimized includes the least-squares function plus a term multiplying the square of each coefficient by a scalar amount, in order to penalize large coefficients. Linear regression models may include the least absolute shrinkage and selection operator (LASSO), in which the least-squares term is multiplied by a factor of 1 divided by twice the number of samples and combined with a penalty on the absolute values of the coefficients. Linear regression models may include a multi-task lasso model, in which the norm applied to the least-squares term of the lasso model is the Frobenius norm, that is, the square root of the sum of squares of all terms. Linear regression models may include an elastic net model or a multi-task elastic net model. Linear regression models may include a polynomial regression model, in which a polynomial equation (e.g., quadratic, cubic, or higher-order) is sought that provides the best predicted output/actual output fit; similar methods to those described above may be applied to minimize such error functions.
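Assuming a scikit-learn implementation, the families of linear models listed above could be instantiated as in the following sketch; the feature names and synthetic data are placeholders, not data from the disclosure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

# Placeholder training data: rows of [speed, braking, cornering, gaze_off_road_s]
# correlated with an observed risk score.
X = np.random.rand(200, 4)
y = X @ np.array([0.2, 0.5, 0.4, 0.9]) + 0.05 * np.random.randn(200)

models = {
    "ols": LinearRegression(),
    "ridge": Ridge(alpha=1.0),                 # penalizes large coefficients
    "lasso": Lasso(alpha=0.01),                # shrinks some coefficients to zero
    "elastic_net": ElasticNet(alpha=0.01, l1_ratio=0.5),
    "poly2": make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1.0)),
}
for name, model in models.items():
    model.fit(X, y)                            # fit each candidate risk model
```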

“Still referring to FIG. 3, machine-learning models may be created using other or additional artificial intelligence methods, including without limitation node-based networks. In such models, a process of ‘training’ may create connections between nodes: elements from a training data set are applied to the input nodes, and a suitable training algorithm, such as Levenberg-Marquardt, conjugate gradient, simulated annealing, or other algorithms, is used to adjust the connections and weights between nodes in adjacent layers to produce the desired output values. This process is also known as deep learning. Such a network may be trained using training data as described above.

“Still referring to FIG. 3, machine-learning algorithms may include supervised machine-learning algorithms. Supervised machine-learning algorithms, as used herein, are algorithms that receive a training set of inputs and outputs and seek one or more mathematical relations relating inputs to outputs, where each such relation is optimal according to some criterion specified by the algorithm via a scoring function. As a non-limiting example, a supervised learning algorithm may take sensor data and/or data generated via analysis as inputs, and degrees of risk and/or driver inattentiveness as outputs, with the scoring function representing a desired form of relationship between them; the scoring function may, for instance, seek to maximize the probability that a given input and/or combination of inputs is associated with a given output, and to minimize the probability that it is associated with a different output. The scoring function may be expressed as a risk function representing an ‘expected loss’ of the algorithm relating inputs to outputs, where loss is computed as an error function measuring the degree to which a prediction is incorrect when compared to a given input-output pair in the training data. Persons skilled in the art, upon reviewing the entirety of this disclosure, will be aware of various supervised machine-learning algorithms that may be used to determine relations between inputs and outputs.

“With continued reference to FIG. 3, system 307 and/or processing unit 315 may determine, based on the above inputs, that the driver is inattentive to a degree that presents a risk. Inattentiveness may be expressed as a numerical quantity such as a score, which may be compared to a threshold value; an alert may be generated by processing unit 315 and/or system 307 when the score exceeds (or, alternatively or additionally, falls below) the threshold level, as sketched below. This disclosure provides several examples of how alerts may be generated and of the forms alert output may take. As a non-limiting example, system 307 and/or processing unit 315 may alert an inattentive driver using sounds and/or voice prompts selected and paced according to the level of urgency. As another non-limiting example, where system 307 and/or processing unit 315 determines that the driver is sleepy, it may warn the driver using verbal interaction 327 and/or provide mind-energizing short dialogs 322 to stimulate attention. To determine the length and depth of a conversation with the driver, system 307 and/or processing unit 315 may track the responsiveness of the user 325; this information may be used to add to and/or update a machine-learning model and/or process determining a new level of risk, attentiveness, or other output. To communicate verbally with the driver, a microphone (optionally an array of microphones) 329 and a speaker (optionally a wireless speakerphone) 328 may be used. The driver may wear biosensors 314 that monitor heart rate and galvanic skin response; data may be transmitted wirelessly to a stress/fatigue tracking device 316 or algorithm within the system to provide additional driver information 317, which may then be transferred to processing unit 315. An embodiment may include biosensors monitoring heartbeat rate, blood pressure, galvanic skin response, and parameters related to breathing, which may increase accuracy in evaluating conditions such as fatigue, stress, drowsiness, and distraction.
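A minimal sketch of score-versus-threshold alert escalation, assuming a normalized inattentiveness score, is shown below; the threshold values and alert forms are illustrative assumptions.

```python
# Hedged sketch: compare an inattentiveness score (0 = fully attentive, 1 = fully
# inattentive) against escalating thresholds; thresholds and alert names are illustrative.
ALERT_LEVELS = [
    (0.8, ("spoken_dialog", "mind-energizing short dialog")),
    (0.6, ("voice_prompt", "urgent spoken warning")),
    (0.4, ("sound", "gentle chime")),
]

def choose_alert(inattention_score: float):
    """Return the most urgent alert whose threshold the score exceeds, or None."""
    for threshold, alert in ALERT_LEVELS:
        if inattention_score > threshold:
            return alert
    return None
```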

Referring now to FIG. 4, an illustration of an exemplary portion of a vehicle 402 including a camera unit 403 is shown. A driver-facing camera 4031 may be included in unit 403 and may be mounted to any vehicle component and/or structural element; for example, driver-facing camera 4031 may be mounted near a rearview mirror. Unit 403 may include a processor 4033, which may include any device suitable for use as processing unit 315 or any device that may be in communication with processing unit 315. A forward-facing camera 4035 may be included in unit 403 and may be housed together with driver-facing camera 4031 or separately. In an embodiment, driver-facing camera 4031 and forward-facing camera 4035 may each be connected directly to processor 4033, and/or both may be connected to and/or in communication with the same processor 4033. The forward-facing camera may be attached or mounted to any vehicle structure; for example, forward-facing camera 4035 may be attached to the windshield next to the rearview mirror’s mount. Wireless connectivity may allow data transfer between unit 403, cameras 4031 and 4035, and/or processor 4033, as well as between unit 403 and a processing unit 315 such as a smartphone. To provide the best view of the driver’s face while minimizing interference with the view of the road, unit 403 may be attached to the rearview mirror (attached to the windshield and/or body of the rearview mirror). Unit 403 may thus contain a road-facing camera 4035, a driver-facing camera 4031, and a processing unit 4033 to process the video streams and communicate 405 wirelessly or via USB with a mobile app on a smartphone 409 or another processing device.

Referring now to FIG. 5, a flow-process diagram shows how an attention monitoring system 502, which may be integrated in and/or in communication with system 307 as discussed above, may perform analysis using data extracted from the driver-facing camera. These data may include, without limitation, facial contours; for instance, processor 4033, system 502, and/or processing unit 315 may identify eyes, nose, and/or mouth in order to assess yaw, pitch, and roll, evaluate eye-gaze direction, and/or detect eyelid-closing patterns. A neural network may be used, as a non-limiting example, to extract such parameters and determine distraction or drowsiness. Attention monitoring system 502 may detect the face 503 and/or hands 504 of the driver. System 502 may identify facial landmarks and special regions 505, including without limitation eyes, nose, and mouth; this information can be used to estimate head posture and eye-gaze direction 506 and to provide information about hands gripping the steering wheel. System 502 may interpret situations in which the driver’s eyes and head are directed away from the road as signs of distraction 507, as in the sketch below. System 502 may monitor the driver’s attention level 513 against a personalized behavior model 515; personalized behavior model 515 may be generated using machine-learning and/or neural-net processes, for instance using user data collected by system 502 and/or system 307 as training data. Alternatively or additionally, system 502 may compare attention levels to permissible thresholds, which may correspond to duration, frequency, or other patterns 519; warning alerts may be sent to the driver if the safety margin calculated from such models and/or threshold comparisons is inadequate. If the driver is not distracted but does show signs of drowsiness 509, system 502 may begin evaluating driver attention 513 against user behavior models 515 and safety margins using the same flow as for distracted-driving monitoring. If the driver is neither distracted 507 nor drowsy 511, the system may continue to monitor the driver’s face and hands 503, 504 and perform the above steps iteratively.
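One hedged way to realize the distraction branch of this loop is sketched below, flagging gaze that stays off the road longer than a per-driver limit; the limit value and the callable names are assumptions, not elements of the disclosure.

```python
import time

OFF_ROAD_LIMIT_S = 2.0  # permissible continuous off-road glance, per behavior model

def monitor_distraction(frames, estimate_gaze_zone, raise_alert):
    """Iterate over driver-facing frames and raise an alert when gaze stays
    off the road longer than OFF_ROAD_LIMIT_S seconds."""
    off_road_since = None
    for frame in frames:
        zone = estimate_gaze_zone(frame)       # e.g. built on classify_gaze_zone(...)
        if zone == "road ahead":
            off_road_since = None              # attention restored; reset timer
            continue
        now = time.monotonic()
        if off_road_since is None:
            off_road_since = now
        elif now - off_road_since > OFF_ROAD_LIMIT_S:
            raise_alert("distraction")         # escalate per the safety margin
            off_road_since = None
```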

Referring now to FIG. 6, an illustration of the yaw, pitch, and roll parameters used to measure spatial rotation of a driver’s face is shown. Image 602 illustrates the three parameters used to classify the orientation of the driver’s face. Yaw is defined as horizontal (left-to-right) rotation of the driver’s face. Pitch is defined as vertical (up-and-down) rotation, i.e., rotation about the axis around which the driver’s neck or head might turn to nod ‘yes,’ as when leaning forward or back. Roll is defined as tilting the head from side to side. In one embodiment, pitch and yaw may be used principally to detect distraction.

“Referring now to FIG. 7, an exemplary embodiment of a system 702 for analyzing attention margin to prevent inattentive and unsafe driving is illustrated. System 702 may include, or be included in, any system described in this disclosure. System 702 may include a camera, which may include any camera or set of cameras described in this disclosure; for example, system 702 may include a USB-connected camera 705 containing visible-spectrum (red, green, blue, or RGB) and/or near-infrared (NIR) sensors. System 702 may extract facial and/or ocular features and/or orientation using this or any other sensor described in this disclosure. System 702 may include one or more audio input devices, such as microphones, and one or more audio output devices, such as speakers; each may include any audio input or output device described in this disclosure. Audio input and output devices may be housed together or disposed separately; for example, at least some audio input and output devices may be parts of a single electronic device included in system 702, such as a speakerphone 703, which may be any mobile device, telephonic device, or other device capable of acting as a speakerphone. Speakerphone 703 may be used to position a microphone and speaker at a location in the vehicle convenient for communicating with the driver, such as on a visor near the driver. System 702 may also include a computing device 707, which may be any computing device described herein, including without limitation processing unit 315, and which may be connected to the audio input and output devices. Computing device 707 may include a laptop computer or any other device capable of running the analyses and/or computations described in this disclosure; for example, computing device 707 may perform context analysis, combine such context-analysis results with features extracted by a smart camera, and provide feedback and/or other outputs to the driver, including without limitation audio feedback. System 702, speakerphone 703, and/or computing device 707 may include additional electronic components and capabilities, including without limitation telemetry data and map/routing information, audio/video recording capabilities, and speech recognition and synthesis for dialog interaction with the driver. After reading this disclosure in its entirety, persons skilled in the art will appreciate that any component or capability included in smartphone 711 may instead be placed in another device of system 702, such as speakerphone 703, computing device 707, camera 705, or a special-purpose electronic device incorporating such components and/or capabilities. Smartphone 711, and/or any other device including one or more of the capabilities and/or components mentioned above, may collect additional sensor information, including motion-sensing data from a 3D accelerometer, GPS location, and/or timestamps; any sensor information and/or analytical results may be transmitted, in raw or processed form, to a cloud 709.
Cloud 709 is a remote storage and/or computing environment implemented on one or more remote computing devices, which may be operated by third parties and/or offered as a service in any suitable form or protocol; remote devices may be geographically dispersed and/or localized. Any component of system 702, including without limitation computing device 707, smartphone 711, speakerphone 703, and/or camera 705, may be designed and/or configured to perform any method, method step, or sequence of method steps described in this disclosure, in any order and with any degree of repetition. Any such component may be configured to perform a single step or sequence repeatedly until a desired or commanded outcome is achieved; repetition of a step or sequence may be performed iteratively and/or recursively, using outputs of previous repetitions as inputs to subsequent repetitions. Any such component may also perform any step or sequence of steps described in this disclosure in parallel, including simultaneously or substantially simultaneously performing a step two or more times using two or more parallel threads, processor cores, or the like; division of tasks between parallel threads and/or processes may be performed according to any protocol suitable for dividing tasks among iterations. Upon reading the entirety of this disclosure, persons skilled in the art will be aware of the various ways in which steps, sequences of steps, processing tasks, and/or data may be subdivided, shared, or otherwise dealt with using iteration and/or parallel processing.

Referring now to FIG. 8, an exemplary embodiment of a system 803 for analyzing attention margin to prevent inattentive and unsafe driving is illustrated; in system 803, all processing may run on a smartphone 811. Smartphone 811 may be configured in any manner suitable for the configuration of processing unit 315 as described above, and may be designed and/or configured to perform any method, method step, or sequence of method steps described in this disclosure, in any order and with any degree of repetition. Smartphone 811 may be configured to perform a single step or sequence repeatedly until a desired or commanded outcome is achieved; repetition may be iterative and/or recursive, using outputs of previous repetitions as inputs to subsequent repetitions. Smartphone 811 may perform any step or sequence of steps described in this disclosure in parallel, including simultaneously or substantially simultaneously performing a step two or more times using two or more parallel threads, processor cores, or the like; division of tasks between parallel threads and/or processes may be performed according to any protocol suitable for dividing tasks among iterations. Upon reading the entirety of this disclosure, persons skilled in the art will be aware of the various ways in which steps, sequences of steps, processing tasks, and/or data may be subdivided, shared, or otherwise dealt with using iteration and/or parallel processing.

“Still referring to FIG. 8, smartphone 811 may be connected, for example and without limitation via USB OTG, to an input device; as a non-limiting example, smartphone 811 may be connected to a visible+NIR camera 807. Smartphone 811 may also connect to one or more components that provide vehicular analytics and/or data, which may be collected as described anywhere in this disclosure. For instance, smartphone 811 may connect to an optional OBD II dongle and/or cellular-connected WiFi hotspot 809; such devices may provide additional information from the vehicle bus (OBD II/CAN) and/or an alternative way to transfer data to the cloud, for instance as shown in FIG. 7. System 803 may include audio input and/or output devices as described above, including without limitation an optional Bluetooth speakerphone 805, which may improve the quality and loudness of system-generated alerts and provide a better-positioned microphone to improve speech-recognition accuracy.

“In an embodiment, and with continued reference to FIG. 8, system 803 may employ a smart camera 807 with RGB and NIR sensors and an infrared LED scanner, connected to smartphone 811, to extract facial and eye features. A Bluetooth-connected speakerphone 805, or other audio input/output devices, may be used to position a microphone and/or speaker on the driver’s visor. All processing described above may be performed on the smartphone; for instance, smartphone 811 may run context analysis, combine the results with features from smart camera 807 to determine the driver’s attention margin, and provide audio feedback or other outputs based on such determinations. System 803 and/or smartphone 811 may also collect and/or transmit telemetry data, map/routing information, cloud-service data such as weather and traffic, audio/video recordings, and speech recognition and synthesis for dialog interaction with the driver.

“FIG. 9 is a schematic illustration showing how the capabilities of embodiments described herein exceed those of existing usage-based-insurance solutions, providing richer and more precise services.”

Referring now to FIG. 10, an exemplary architecture of a system such as system 307, system 702, and/or system 803, as previously described, is illustrated. The system may include a driver attention modeling unit 1015, which may include any hardware or software module incorporated in, connected to, and/or operating on any computing device described herein. Driver attention modeling unit 1015 may analyze features 1008 describing driver facial data, including closed eyes, yawning, and eyes directed away from the road; these features 1008 may be extracted from visual data such as video feeds from a driver-facing camera 1001, which may include any camera oriented to capture and/or record data about the driver as described in this disclosure. Driver attention modeling unit 1015 may also analyze features 1009 such as verbal responses, removal of hands from the steering wheel, and the like, extracted from the driver’s speech and gestures 1002 via driver-facing camera 1001 and/or audio input devices. Driver attention modeling unit 1015 may further analyze features 1010 from biometric sensors 1003, which may include without limitation wearable biometric sensors and/or sensors embedded in the vehicle; these features 1010 may include features indicative of, or measuring, fatigue, stress, reaction to startling or frightening events, and the like.

“Still referring to FIG. 10, the system may include a driving risk model 1016, which may include any hardware or software module operating on, incorporated into, or connected to any computing device described herein, including without limitation processing unit 315. The system may also include an accident detection/prevention unit 1017, which may likewise include any such hardware or software module. Driving risk model 1016 and/or accident detection/prevention unit 1017 may analyze features 1011 from a road-facing camera 1004; these features may describe and/or depict vehicles ahead, pedestrians crossing the road, cyclists, animals, road signs, posts, trees, and the like. Driving risk model 1016 may use any algorithm described in this disclosure to detect presence, estimated speed, and direction of objects (toward the vehicle or an adjacent lane, potentially on a collision course) and to issue early warnings before complete classification of the objects ahead; embodiments disclosed herein can thereby maximize the time the driver has between an early warning and a possible collision, allowing appropriate action (braking, swerving, or the like) to prevent a crash. Driving risk model 1016 and/or accident detection/prevention unit 1017 may analyze features from a rear-facing camera 1005; these features may include, without limitation, any features described above that represent conditions outside the vehicle and/or driver, such as tailgating vehicles following too closely. Driving risk model 1016 and/or accident detection/prevention unit 1017 may analyze features 1013 retrieved from telematics data 1006, including speed, acceleration, braking, cornering, engine load, and fuel consumption, and may analyze features 1014 from ambient data 1007 such as traffic and weather information.

“With continued reference to FIG. 10, a decision engine 1020 may evaluate attention 1015 against risk 1016 and/or historical short- and/or long-term data 1018 about the driver’s performance in past similar situations in order to determine what type of feedback to give the driver. Evaluation may include, without limitation, any machine-learning model or process described herein, such as using training data to correlate attention 1015 and risk 1016 with alert levels. If vehicle connectivity allows connection to the cloud 1021, historical geographic data 1019 may be stored and/or updated. To avoid distracting the driver, when attention levels are normal 1022, feedback may be limited to transmission 1025 of status information, such as a first light color 1030 indicating normal status. A more noticeable feedback signal may be given if the driver’s attention level is marginal 1023; for example, the lights may be accompanied by acoustic feedback 1031 to call the driver’s attention 1026, and/or a second, different light color 1030, such as yellow, may be used. A further intrusive and/or escalated feedback signal may be generated if attention is insufficient 1029: for example, an alarmed-driver alert 1027 may generate a pattern of visual and audible alerts 1032, which may escalate if the condition continues; the pattern may include a third color, different from the first and second colors, and increasing audio volume and light intensity may signal the escalating problem. Depending on the severity and urgency of the problem, a dialog interaction 1028 may be used to communicate the identified countermeasure to the driver quickly. A patterned acoustic alert may include a sequence or pattern of sounds and voices, such as audio, voice, song, or chirp, and a patterned spoken warning may likewise include sequences and/or patterns. As a non-limiting example, and as sketched below, where attention is sufficient or normal, output may include a steady green feedback light; where attention is marginal, output may include a slowly blinking yellow feedback light and acoustic alerts; where attention is insufficient, output may include a fast-blinking red feedback light and spoken instructions for correction. The system may periodically update driving statistics and calculate driver risk profiles, and may periodically upload relevant information to the cloud for statistical analysis by trip, by area, and/or by population. It should be noted that the terms ‘low,’ ‘marginal,’ and ‘insufficient’ are not intended to indicate a hard three-level scheme of events and thresholds; there may be multiple threshold levels corresponding to multiple alert levels.
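A compact sketch of the normal/marginal/insufficient feedback mapping described above follows; the margin thresholds are illustrative assumptions, and as noted the actual system may use more than three levels.

```python
def feedback_for(attention_level: float, required_level: float) -> dict:
    """Map the attention margin (available minus required) to escalating feedback."""
    margin = attention_level - required_level
    if margin >= 0.15:                  # normal: unobtrusive status only
        return {"light": "steady green"}
    if margin >= 0.0:                   # marginal: add acoustic feedback
        return {"light": "slow blinking yellow", "sound": "acoustic alert"}
    # insufficient: escalate with color, intensity, and spoken correction
    return {"light": "fast blinking red", "sound": "spoken correction",
            "dialog": True}
```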

“Still referring to FIG. 10, video and/or sound clips may be combined with time and/or location information to enable complete reconstruction of the scene before and after a crash. All recorded data, including without limitation audio and/or video clips, locations, times, and/or sensor readings, may be uploaded to cloud services or devices and/or stored in local and/or remote memory facilities. Data triggered by inattention events, such as crash data, can be recorded in a driving data record and analyzed over time to generate statistics in the form of a driver profile 1033 usable by fleet managers or insurance carriers. On request from the driver, or at times when it is safe, analytics may be used to present trends and driving performance reports to the driver, which can help motivate the driver to keep performing well. The data recorded above, including data triggered by inattention events, data collected before, during, and/or after crashes, and any other data used in driving data record 1033, may be used to create entries of training data, allowing machine-learning, neural-net, and/or AI processes to create, modify, and/or optimize models and/or outputs for any analytical or determined output described herein; such models and/or AI processes may also be used to generate collision predictions based on one or more sensor and/or analytic inputs. Driving data records and reports may be uploaded to the cloud 1034 for further processing.

“In operation, and still referring to FIG. 10, the system may use a visible and/or NIR camera pointed at the driver’s face to analyze head posture, track eye movements, and/or record the driver and passenger seats in case of an accident. Audio input devices, cameras, and/or other sensors may implement speech and gesture interfaces allowing the driver to request or provide information via microphone, facial gestures, or hand gestures. The driver’s emotional state, mood, health, and/or attentiveness may be monitored and/or analyzed using biometric and/or vital-signs data, which may include without limitation galvanic skin response (GSR) and/or heart rate variability (HRV) data, provided via a wearable bracelet, sensors on the steering wheel, and/or wireless evaluation of heartbeat and breathing patterns. A forward-facing camera may be used for lane-line determination, vehicle distance estimation, scene analysis, and recording; the system may also use a rear camera for viewing, analyzing, and/or recording at the rear. The system may track and/or associate data obtained using accelerometers and/or GPS facilities. Other data may include vehicle data such as VIN, odometer readings, rotations per minute (RPM), and/or engine load, obtained for example and without limitation via an OBD II connector.

“With continued reference to FIG. 10, the system may collect and/or use in its analyses data regarding traffic and weather conditions, day/night lighting, road conditions, and in-cabin noises or voices, for use in any decision, training data set, and/or other process described in this disclosure. The system may use visual cues to detect hand gestures and/or determine distraction, drunkenness, or drowsiness. The system may extract spoken words using speech recognition, natural language processing (NLP), or the like; speech-related feature extraction may be used to detect altered voice. Biometric feature extraction may alternatively or additionally be used to detect physiological and/or emotional states, such as fatigue, stress, or a response to fear or surprise. Any sensor output or analysis may be used to extract features of objects such as vehicles, signs, poles, and pedestrians, and to extract vehicle position and speed. Feature extraction may be used to determine the driving smoothness or aggressiveness of other vehicles and/or of the vehicle containing the system. The system may also use sensor and/or analytic process outputs to determine the ambient level of ‘harshness,’ i.e., the impact of the driving environment on driver stress.

“Still referring to FIG. 10, the system may employ machine-learning processes to create and/or use models and/or training data to generate outputs. Machine learning may, as non-limiting examples, be used to continuously evaluate driver attention level, continuously evaluate driving risk, and detect and/or forecast vehicular collisions or crash conditions. The system may extract features from the driver’s past driving habits and/or driving skills, and from dynamic and/or historical reports of traffic jams, dangerous intersections, ice patches, accidents, or other relevant information obtained, for example, via data connections to cloud servers or the like. The system may implement an intelligent decision engine that compares the driver’s attention level to the level necessary to manage the current risk condition; this may be done, without limitation, using machine-learning, deep-learning, and/or neural-net models. Alternatively or additionally, decisions and/or determinations may be based on the driver’s past performance and adjusted for changes on a given day, such as measures of anger, nervousness, or fatigue. The system may also receive real-time ambient data updates via cloud services, for instance over a phone connection. Where the system determines that the driver’s attention level is sufficient or better for the driving task, it may simply provide a status update to the driver as described above. Where the system determines that the driver’s attention level is marginal, it may give proactive information to the driver. Where the system determines that the driver’s attention level is insufficient for the driving task, it may proactively inform the driver that attention is inadequate, take appropriate action, explain the reasons, and offer suggestions to correct the behavior. These steps may be executed using any suitable component or process described in this disclosure.

Referring now to FIG. 11, an additional illustration of an exemplary embodiment of a system as discussed herein is provided; this system may include, or be incorporated into, any system described in this disclosure. An image processor 1150 may be part of the system; image processor 1150 may include any computing device, including any device suitable for use as processing unit 315 described above, and/or any software or hardware module connected to or operating on such a device. Image processor 1150 may analyze video content from a driver-facing camera 1140, which may include any device suitable for use as a driver-facing camera as described above. Alternatively or additionally, video content analyzed by image processor 1150 may include data from an infrared scanner 1141 or the like; this optional aid may allow greater accuracy in face-rotation detection in dim or unlit conditions. NIR and/or RGB cameras may be placed facing the driver and/or the back of the passenger seat, and a solid-state LED scanner may be used for depth sensing and/or eye-gaze tracking, as well as to scan and/or record visual information about the driver, passengers, and other persons. The system may include speech engines, including components that recognize speech and/or synthesize speech, and a dialog management module 1152 that analyzes voice and sound and generates audio and verbal prompts via speakers 1145. Microphones 1144 and/or speakers 1145 may include any device suitable for audio input or output; as an example, an audio input device may be a beamforming microphone array providing driver vocal input, speech-based identity verification, and ambient sound analysis, and one or more speakers may provide acoustic and/or voice feedback to the driver. Any light output device may be used to generate light, including a 470 nm blue LED array for retina stimulation and/or alertness restoration. The system may also include a multipurpose button connected to the speech engines and/or dialog management module 1152; the multipurpose button may allow the system to change interaction mode, request assistance, or enter emergency protocols, depending on the context and/or the number of times it is pressed.

“Still referring to FIG. 11, the system may contain a main processing unit 1151, which may include any computing device described in this disclosure, including any device suitable for use as processing unit 315 as described above. Main processing unit 1151 may process information produced by image processor 1150, including without limitation detection and/or description of head rotation, eyelid closure, mouth opening, and the like. Main processing unit 1151 may process video from a road-facing camera 1149, which may include any device usable as a camera according to this disclosure, and may detect and classify objects ahead of the vehicle. Main processing unit 1151 may collect GPS information 1157 to geo-stamp events and/or calculate speed, and may collect and/or process 3D accelerometer and/or 3D gyroscope information 1158 to determine vehicle movement and the forces involved in motion, collisions, or the like. Main processing unit 1151 may interact with speech engines 1152 to control communication with the driver. Main processing unit 1151 may control activation of a stimulant light, including without limitation a 470 nm blue LED 1146 for attention stimulation; a multicolor LED ring lamp may also be used to provide visual feedback to the driver. Main processing unit 1151 may collect data from biosensors 1147 to determine fatigue and stress levels, and may communicate with an OBD II-connected device 1148 to collect additional information about the vehicle. When connectivity allows, main processing unit 1151 may process electronic driving records and synchronize them to a cloud 1156. Main processing unit 1151 may also connect to a smartphone or wireless speakerphone to receive route, traffic, and weather information and to interact with the driver.

With continued reference to FIG. 11, system may contain a system memory 1152, which may be any type of memory storage device or component described in this disclosure; main processing unit 1151 may use system memory 1152 to store processed information. In addition to the accelerometer/gyroscope/compass information 1158 available in the main unit as described above, system may process similar information using a mobile app installed on a phone 1155 or other mobile device. Phone 1155 and other mobile devices may return information about the relative motion of the phone 1155 within the vehicle; this may be used, for example, to determine possible distraction conditions if the driver holds the phone. Any wired or wireless communication protocol can be used to communicate with any component of the system, such as Bluetooth LE, Bluetooth 3.1, WiFi, or the like.

Referring now to FIG. 12, an exemplary embodiment of a system that may include or be included in any system as discussed above is illustrated. A driver-facing camera 1201 may be used to create a video stream for a feature analysis unit 1217; the feature analysis unit may include any computing device suitable for use as processing unit 315 and/or any hardware or software module included in, operating on, or in communication with such a computing device. Deep learning, neural-net learning, and/or machine learning may be used to extract information about the head, eyes, and/or eyelids. The features can then be analyzed by a drowsiness analysis unit 1219 and/or a distraction analysis unit 1221, which may determine their respective severity levels. Each of the drowsiness and distraction analysis units 1219 and 1221 can be implemented as any computing device described herein. A driving risk estimation engine 1223 may be included in the system; it can be implemented using any computing device described herein, including any device suitable for use as processing unit 315 and/or any hardware or software module incorporated into, operating on, or in communication with such a device. Driving risk estimation engine 1223 can use information about vehicle dynamics 1203, traffic/weather/road conditions 1205, GPS/route info 1207, and/or a road-facing camera 1209 to help determine risk and escalate urgency 1225 if the driver does not take action. The driver's skills and experience may be used to calibrate risk estimation using machine learning and precomputed risk models 1227.

With continued reference to FIG. 12, system may include a main decision engine 1233, which may be any computing device described in this disclosure, including any device suitable for use as processing unit 315 and/or any hardware or software module incorporated into, operating on, or in communication with such a computing device. The main decision engine 1233 can gather information about distraction levels 1221, sleepiness levels 1219, and/or risk levels 1225 as described above. This may include taking into consideration user preferences 1241, leveraging mandated behavior guidelines 1237, 1239, and relying upon decision models 1243 and machine learning, all of which may be implemented according to the procedures described in this disclosure, to determine what messages are to be sent to the user. A dialog interaction engine 1235 may be included in the system; it may be any computing device described herein, including without limitation any device suitable for use as processing unit 315 and/or any hardware or software module incorporated into, operating on, or in communication with such a computing device. The decision engine 1233 may trigger dialog interaction engine 1235 to send prompts to a driver using a sound and speech synthesizer 1231 to drive speaker array 1215. Mics 1213 may record the driver's reactions, comments, and requests to create actionable text via speech recognition and NLP 1229; this may help to evaluate driver responsiveness. Dialog interaction engine 1235 can use dialog history 1245 to assess and restore the driver's attention. It may also use trip data 1249, short-term driver information 1247, and/or driver skills 1250 to determine the type, pace, and length of the dialogue. Long-term driving data 1250 and statistics about dialog interactions 1235 may be used to assess the effectiveness of driver performance, including without limitation the ability to take corrective actions in an appropriate manner, responsiveness to system-generated guidance 1252, and the compilation of driving risk profiles 1255 and driver performance trends 1255. The blue LED light 1211 may be used by the dialog interaction engine to produce brief blinking patterns to assess driver alertness, for example by asking the driver to imitate the light patterns with corresponding eyelid blinks.

Distraction detection may be performed by embedded systems such as the ones described above. Detected distractions may include glancing at a center-stack screen display, touching a center-stack screen display, reading text on a cradle-mounted or hand-held phone, touching the screen of a hand-held mobile phone, typing text on a hand-held mobile phone, eating, drinking, smoking, interacting with other passengers, singing, combing hair or shaving, and/or applying make-up.

Embodiments of the systems described herein may be used for drowsiness detection, including detection of drowsiness stages 1-2-3. Embodiments of the systems described herein can perform driver identification, including without limitation visual face analysis, voice ID verification, and/or driving-style behavior. Embodiments of the systems described herein can detect passenger presence and behavior, including but not limited to analysis of passengers' interactions. Embodiments of systems as described herein may perform theft detection and recording, engine-off vibration analysis, low-frequency video sampling, driver detection, forward scene analysis, forward distance/collision detection, lane departure detection, vehicle recognition and distance/delta-speed measurement, driver risk profiling, detection of tailgating, late braking, hard braking, hard acceleration, smooth cornering, smooth acceleration, gradual braking, lane-keeping accuracy, and swerving, eyes-on-road vs. mirrors vs. elsewhere ratio, context risk evaluation, acceleration/deceleration/cornering speed (relative to posted limit), travel-time distance from vehicle(s) in front, traffic and accidents ahead, time/distance to exit ramp, and/or weather, temperature, and road pavement conditions.

Embodiments described herein may use machine learning to create models that can interpret a multitude of data in a vehicle and make real-time decisions. Initial thresholds for acceleration (longitudinal and lateral, i.e., acceleration/braking and cornering), speed, distance (in seconds) from vehicles ahead, distance from lane markings, and time for which eyes are taken away from watching the road to check the rearview mirror, instrument cluster, center dashboard, and the like, as well as criteria for speed, skill, and so on, can be determined using simple rules and common-sense values, and/or values derived from earlier iterations with other vehicles and/or drivers. To fine-tune initial models, data may be recorded in driving simulators. Convolutional neural networks may then be used to extract visual characteristics of drivers' behavior.
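As a concrete illustration of such hand-set starting points, the following sketch shows rule-based initial thresholds of the kind described above; all names and numeric values are assumptions chosen for illustration, not values taken from this disclosure.

```python
# Minimal sketch of rule-based initial risk thresholds (names and values
# are illustrative assumptions, not the disclosure's actual models).

INITIAL_THRESHOLDS = {
    "longitudinal_accel_g": 0.35,   # hard acceleration / braking
    "lateral_accel_g": 0.40,        # hard cornering
    "headway_s": 2.0,               # minimum time gap to vehicle ahead
    "eyes_off_road_s": 2.0,         # maximum continuous glance away
}

def flag_risky_events(sample: dict) -> list[str]:
    """Return the names of thresholds exceeded by one telemetry sample."""
    flags = []
    if abs(sample["longitudinal_accel_g"]) > INITIAL_THRESHOLDS["longitudinal_accel_g"]:
        flags.append("hard_accel_or_brake")
    if abs(sample["lateral_accel_g"]) > INITIAL_THRESHOLDS["lateral_accel_g"]:
        flags.append("hard_cornering")
    if sample["headway_s"] < INITIAL_THRESHOLDS["headway_s"]:
        flags.append("tailgating")
    if sample["eyes_off_road_s"] > INITIAL_THRESHOLDS["eyes_off_road_s"]:
        flags.append("eyes_off_road")
    return flags
```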

As the system is tested and data are collected on the road, statistical models that describe driver behavior may be improved through machine learning. These models may correlate inputs from multiple sensors (motion, visual, audio, biosensors). Experts may initially annotate the data, which could include information from observed drivers. Over time, the process can become less supervised, and eventually a subset of driving data can be used to feed self-learning algorithms that ensure continuous improvement of the system.

“Description of EDR.”

A vehicle driver may have data, including an electronic driving record (EDR), accumulated while driving. The driver has the power to suspend or revoke certain access rights at any moment, even while driving, and can restore access rights dynamically during driving or at the end of a trip. Before information is synchronized with the network (if it is not connected live), the driver can also suspend or revoke selected access rights. The core concept of driver data management involves the generation, maintenance, and distribution of the EDR according to data owner preferences and insurance company incentives. An EDR bank can provide driving data to those who own the data, and car insurance companies may be granted access rights to the data. EDR access conditions include what type of data is provided, when the data is provided, to whom it is provided, and under what conditions. Two types of data can be collected while driving: data from the car and data external to it. Driving data can include driver ID, location, and speed, as well as acceleration, braking, cornering, and crashes. EDRs can store any or all driving data, including live and/or historical data; later, these data can be processed, analyzed, and then distributed. These embodiments may allow drivers to control who gets access to EDR information, what is accessed, and when. EDR data can be stored in secure cloud storage, for instance compiled from data uploaded to cloud services. Data access may be managed by drivers and other authorized users, including without limitation insurance carriers and healthcare providers. EDR data may be owned by users, such as drivers or fleet owners, who may also have the power to determine who gets access to it. Changes in access rights may trigger alterations in insurance benefits. A driver can authorize data access on a continuous basis up to the time the authorization is revoked. Sharing of individual parameters can be used to negotiate service terms (e.g., discounts on car insurance premiums); this could lead to individual financial penalties or discounts. Individual car insurance companies may require that shared parameters be grouped to meet their requirements. In order to offer competitive insurance services, car insurers might be invited to compete among themselves based on the EDR parameters the driver is willing to share; this may limit the available options to what drivers are willing to share.
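The access-rights logic described above could be represented, for example, by a per-parameter grant structure that the data owner can revoke at any time. The following is a minimal, hypothetical sketch; the field names, grantees, and parameter names are assumptions, not a prescribed schema.

```python
# Hypothetical sketch of per-parameter EDR access rights controlled by the
# data owner; fields and names are assumptions for illustration only.

from dataclasses import dataclass, field

@dataclass
class AccessGrant:
    grantee: str                 # e.g., an insurance carrier
    parameters: set[str]         # e.g., {"speed", "braking", "cornering"}
    live: bool = False           # live streaming vs. post-trip sync
    revoked: bool = False

@dataclass
class EDRPolicy:
    grants: list[AccessGrant] = field(default_factory=list)

    def revoke(self, grantee: str) -> None:
        # The driver may suspend/revoke rights at any time, even mid-trip.
        for g in self.grants:
            if g.grantee == grantee:
                g.revoked = True

    def allowed(self, grantee: str, parameter: str) -> bool:
        return any(
            not g.revoked and g.grantee == grantee and parameter in g.parameters
            for g in self.grants
        )
```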

The EDR may be implemented in the following exemplary embodiments. The EDR may be managed using a mobile app on a smartphone owned by a driver; it may also be stored on a dedicated device mounted on the windshield or dashboard, or on a standalone unit mounted on the dashboard (e.g., Android Auto, CarPlay, or an infotainment system). In order to adjust driving behavior and style, a driver can be alerted when certain thresholds are exceeded. Drivers can determine EDR read-access rights for other parties, such as the manufacturer of the vehicle, Tier-1 suppliers, or UBI car insurers, via other mobile apps. The configuration process can be done on a computer/laptop via the Web, or on the mobile device that contains the EDR database; this allows the selection of access rights, by specific parties, to certain sets of parameters. A smartphone or other computing device may securely and/or in encrypted form synchronize EDR content (live and historical) to a cloud service as described above. Depending on how access rights are configured, this may give the insurance company no information regarding risk, or may expose the driver to unapproved monitoring. The collected information could be very limited if the driver does not speed or brake hard frequently; if there are no driving offences, it may even be zero.

An overview and summary of the embodiments disclosed herein might include methods to automatically monitor driver distraction and generate context-aware safety reminders. The embodiments disclosed herein may use visual stimulation (via a HUD) to assess the driver's attention, responsiveness, and focus. The embodiments disclosed herein can be used to make decisions based on a variety of inputs, including user connection state, user location, user locale, and associations learned through prior observation that are not explicitly specified by the user, as well as external factors such as weather, destination status, and transportation factors. The embodiments disclosed herein can include visual analysis of the scene ahead to verify the driver's attention (e.g., an exit-sign question). These embodiments may be used to automatically learn the user's behavior. These embodiments may also include the ability to poll users for feedback.

Embodiments described herein could include a program that the user might choose to add to their application portfolio. A setup dialog may allow the user to configure the device using the embedded methods. The embodiments disclosed herein can be used to modify that configuration, such as by adding, removing, or changing entries. These embodiments may also include methods for analyzing patterns and allowing users to review them and make modifications. These embodiments may also include methods to analyze reminders, including methods to identify redundant reminders that can be discarded.

Embodiments described herein could include methods to identify reminders that are not in accordance with the situational context. These embodiments may also include methods to detect reminder gaps. These embodiments may include methods for analyzing inside and outside video to reconstruct accidents or assess damage.

Embodiments disclosed herein may include means of using a clamp-on box containing a camera, lights/LEDs, and a microphone, plus a detached camera facing forward. Modified rearview mirrors with translucent glass may be included in some embodiments disclosed herein. Some embodiments disclosed herein could include the use of correlation between multiple cameras.

Embodiments described herein could include methods to analyze and monitor driver drowsiness. Blue light may be used to reduce melatonin levels to combat drowsiness while driving. Embodiments disclosed herein may include means of using colored lights and acoustic feedback on attention level and attention-triggering events (red-yellow-green), using constant or intermittent patterns, and/or using intensity adjusted to the interior illumination level.

Embodiments described herein could include methods to monitor the cabin behavior of passengers and the driver in order to flag potentially dangerous behaviors. These embodiments may be used to identify dangerous behaviors and take appropriate action, including without limitation sending alerts. These embodiments may include methods to identify dangerous objects, such as weapons, and take actions such as, without limitation, sending alerts.

Embodiments described herein could include methods for detecting potentially hazardous health conditions. The embodiments disclosed herein could include wireless bracelet recording of HRV or GSR, and/or wireless (non-touch) measurement of HRV or breathing. These embodiments may be used to collect bio/vital information for use with onboard diagnostics to identify situations that need specialist attention. Some embodiments disclosed herein could include the provision of an automated "virtual personal nurse" providing assistance to drivers with chronic conditions (recommended actions, monitoring for known risk conditions). The embodiments disclosed may include audio-visual speech recognition methods to increase robustness in noisy environments.

Embodiments described herein could include methods to improve driver risk evaluation based upon changes in motion energy during braking (the same deceleration at higher speed is much more risky than at low speed). Some embodiments may also include virtual coaching methods (e.g., keep proper distance, avoid braking late, keep to the right lane, stay in the center of the lane, optimize turn trajectories); such coaching models may be developed using data from professional drivers and from large numbers of drivers living in the same area. The methods disclosed herein can be used to analyze clouds of eye-gaze tracking points in order to predict driver alertness; this allows the system to distinguish between fixations caused by high levels of interest and those caused by drowsiness or lack of attention. Embodiments disclosed herein may include methods of using Vehicle-to-Vehicle (V2V)-like information exchange or social networks such as Waze to inform nearby drivers about the fitness/distraction/drowsiness of a driver, so that they can increase their safety margins (distance between cars, greater attention to unanticipated maneuvers, etc.). The embodiments disclosed herein could include methods to extend driver attention monitoring for use in trucks, motorcycle helmets, trains (for conductors), and planes (for pilots). These embodiments may include methods to extend attention evaluation at home, in the office, and at schools (education). Some embodiments disclosed herein include audio-visual recognition methods to detect suspicious activities in the cabin (e.g., a screaming voice in association with body movements across the seats).
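To illustrate the braking-energy point above, the sketch below weights a braking event by the kinetic energy it dissipates, using the standard 1/2·m·(v1²−v2²) relation; the vehicle mass and the use of raw kilojoules as a risk weight are illustrative assumptions.

```python
# Illustrative sketch: weight a braking event by the kinetic energy it
# sheds rather than by deceleration alone. The physics is standard; the
# risk scaling itself is an assumption, not the disclosure's model.

def braking_energy_kj(mass_kg: float, v_start_ms: float, v_end_ms: float) -> float:
    """Kinetic energy shed during a braking event, in kilojoules."""
    return 0.5 * mass_kg * (v_start_ms**2 - v_end_ms**2) / 1000.0

# Same 0.3 g-ish deceleration over the same time, very different energy:
car = 1500.0  # kg (assumed vehicle mass)
print(braking_energy_kj(car, 30.0, 25.0))  # ~206 kJ at highway speed
print(braking_energy_kj(car, 10.0, 5.0))   # ~56 kJ in town
```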

Embodiments described herein could include methods for usage-based privacy and insurance security, including methods of collecting driver data to automatically monitor driving context. Monitoring of context may include detection of driver attention and behavior as well as car parameters, both internal and external. The embodiments disclosed herein could include a program to monitor driver behavior, and can include hardware and software that measure driver attention. Some embodiments disclosed herein could provide driver feedback in real time and methods to automatically learn the user's behavior. Embodiments disclosed herein may include means to automatically poll the user to evaluate responsiveness/alertness in the presence of symptoms of drowsiness. These embodiments may also include methods for managing data access policies to set driver preferences and provide feedback. The embodiments disclosed herein could provide a means for defining data access rights with varying degrees of flexibility, and may allow for dynamic data access rights. Some of the embodiments disclosed in this document may allow for the suspension/revocation of certain access rights, even while driving; access rights can be restored dynamically by the driver while driving, or at the conclusion of a trip. The disclosed embodiments may address different aspects of EDR data and its varied nature (data from within and outside the car, driver behavior). The driver may be able to identify which EDR data is available and when. Some embodiments disclosed herein could include the use of sensors to collect and analyze driver behavior data. These embodiments may also include the creation of driver behavior models and attention models. Embodiments disclosed herein can include the ability to process EDRs dynamically and grant access rights for EDR data on the fly.

Embodiments described herein could include methods for delivering the product of the above embodiments. Some embodiments disclosed herein could include methods to collect and store driver data; these embodiments may also allow for the analysis of driver data in real time. The disclosed embodiments may allow insurance companies to bid for the driver's business based on privacy settings, allowing insurers to compete for the driver's business. Drivers may be able, given certain private data, to choose the best-suited insurer or to combine insurance companies on the spot. These embodiments may allow policy premium pricing based on hourly coverage and driving habits (e.g., for car rentals). The embodiments disclosed herein can perform dynamic control logic that determines multiple access patterns for the same user depending on the context. The embodiments disclosed herein could allow an insurer to bypass information-protection locks that have been set by the driver in certain circumstances (e.g., when a serious accident is detected and it is verified whether the driver is able to consent to disclosure of location, so that police and ambulance can arrive at the rescue). In the event that the driver is unconscious, the insurer may bypass the lock mechanism; this could save the driver's life. The driver may be able to temporarily or quickly change privacy settings in emergency situations using the embodiments disclosed herein.

Embodiments described herein may include methods for providing Qi wireless charging devices attached to the car windshield: a transmitter wire embedded in the windshield as an array at the top or bottom, to allow multiple devices to be used or to enable multiple positions. The receiver coil can be attached to the device (e.g., a smartphone) via a docking support and suction cup.

Embodiments disclosed herein may include methods to anonymize video recordings in the car while preserving extracted attention/distraction/drowsiness features. The driver's face may be analyzed for head pose (pitch/yaw/roll), eye-gaze tracking (left/right/up/down), independent eyelid closing frequency/duration/pattern, mouth shape/opening/closing, and lip shape/opening/closing. All the collected features can be used to control the rendering of a synthesized face, in sync with, or at a short delay from, the original expressions and movements. The hybrid video may combine real and synthetic content, with the synthesized face reproducing the driver's actual mouth and lip shape/opening/closing.

Embodiments may be used to evaluate user attention when a user listens to or watches an advertisement message. Visual and audio analysis of a user's reaction to a message may be used to rate pleasure/satisfaction/interest, or the lack thereof, and to confirm that the message has been received and understood, or not; this may be particularly useful in a contained environment like a car cabin (especially in a self-driving or autonomous vehicle) but may be extended for use at home or work, where one or more digital assistants have the ability to observe a user's response to ad messages. It can also be used on mobile phones with the front-facing camera, subject to certain limitations.

Embodiments described herein may include methods to evaluate user responsiveness to guidance/coaching to determine whether a communication strategy works. A short-term evaluation (a few dialog turns) may demonstrate the ability to correct the user's attention deficit and restore focus. Evaluation may also be used to assess user coachability and behavior improvement over the long term (days to weeks), for example improvement that allows the system to intervene less frequently and with fewer prompts.

Embodiments described herein are intended primarily for use in a car, but can also be used on mobile devices, at home, and at work.

Embodiments described herein may include methods to automatically notify drivers of changes in driving regulations (e.g., speed limits, no turn on red, limited or prohibited use of cellphones or of certain phone functions) when crossing state borders. Communication can be verbal or written, and users may request clarifications. Driver monitoring systems may share changes in traffic regulations to encourage compliance with these rules.

Embodiments discussed herein can enable safer driving by giving real-time feedback to the driver about potentially dangerous conditions, to avoid accidents due to inattention or impaired health. These embodiments may be advantageously comprehensive due to the holistic nature of the real-time data analysis (driver's face, eyes, health condition, and outside contextual data). To model the dynamic context in multiple dimensions and provide accurate feedback on recommended actions in real time, it may be necessary to collect extensive data and to develop sophisticated algorithms that create personalized models which benefit the driver and keep him/her safe. The holistic data analysis may include biosensors. The multi-dimensional context of the driver's stress and fatigue can be determined using visual inputs and telemetry data. These embodiments may be used to evaluate the driver's attention and driving skills; they can also help to identify unusual driving behavior and adapt to it.

Embodiments, systems, and methods described in this disclosure can save lives by monitoring and modeling drivers' attention to avoid distraction and drowsiness. By creating driver profiles, embodiments can help insurance companies distribute insurance premiums more fairly. Embodiments can be used by rental and ride-sharing companies to monitor and model driver and passenger behavior. Fleet management companies may be able to manage their truck fleets efficiently by modeling and monitoring driver behavior and receiving real-time feedback. Embodiments can help parents of teenage drivers keep them safe and lower their car insurance by monitoring their driving habits and applying virtual coaching. Embodiments can help protect driver and passenger data and allow each user to grant permissions to use it; they control the generation, maintenance, and distribution of the EDR according to data owner preferences and incentives. Embodiments can be used to monitor drivers' health and detect emergencies in advance. Embodiments can be used to facilitate the transition to self-driving cars.

In short, embodiments have the potential to save lives. Embodiments can make driving safer and more affordable, and insurance rates less expensive. Embodiments can improve driving style and performance scoring. Embodiments can provide safety and protection for drivers. Embodiments can help with brand risk management, risk mitigation, and safety. Embodiments can keep novice and teenage drivers alert. With a Virtual Coach, embodiments can improve driving skills. With opt-in technology, embodiments can keep driver and passenger data safe; maintenance and distribution of the EDR is done according to data owner preferences and insurer incentives. Embodiments can monitor drivers' health to detect emergency situations, particularly for those with chronic conditions or the elderly. Embodiments can make it easier for drivers to hand over their vehicles in a safe and secure manner. Embodiments can provide an accurate and effective evaluation of driving risk, evaluating the driver's performance against traffic and road conditions. Embodiments can give advance warnings to drivers and provide feedback to car insurance companies, and can also assess a driver's response. Embodiments can reduce the chance of accidents and may encourage good behavior from drivers and passengers.

Referring now to FIG. 13, an exemplary system 1300 that uses artificial intelligence to monitor, evaluate, and correct user attention is illustrated. System 1300 can be added to or included in any system described in FIGS. 1-12. One or more of the elements of system 1300 can be deployed in a vehicle according to the above description. Alternatively or additionally, system 1300 could be implemented in a self-contained unit that can be carried by a user as they walk, operate a vehicle, or engage in other activities. An embodiment of the system can augment or simulate the initial attention-direction processes used by humans, who detect apparent motion using peripheral vision and then direct focal gaze towards that motion. The brain has specific cells that can focus on images captured by our eyes, even unintentionally, in order to determine which areas require more attention. System 1300 can detect apparent motion in a field of vision, such as one captured by a camera, and alert the user to the apparent motion as described in this disclosure. System 1300 could emulate peripheral-vision detection by notifying the user immediately of any apparent motion detected; this alert may be generated before the system performs further steps such as object identification, classification, collision detection, or the like.

With continued reference to FIG. 13, system 1300 also includes a forward-facing camera 1304. A forward-facing camera 1304, as used in this disclosure, is a camera oriented away from a user, such that imagery captured by forward-facing camera 1304 is indicative of conditions outside of the user's control; it may, for example, be mounted in a vehicle facing the road ahead, or held or mounted in front of a person who is standing, bicycling, or walking forward. Forward-facing camera 1304 may include any device suitable for use as a camera or camera unit as described herein, including without limitation front camera 4035 and/or rear camera 4031 described above with reference to FIG. 4, the near-IR camera described above with reference to FIG. 5, camera 705 described with reference to FIG. 7, camera 807 described with reference to FIG. 8, road-facing camera 1004 or the rear-facing camera described above with regard to FIG. 10, forward-facing camera 1149 described with reference to FIG. 11, and/or the road-facing camera 1209 described with reference to FIG. 12. Forward-facing camera 1304 could include a camera integrated into a mobile phone or smartphone, as described above.

Still referring to FIG. 13, forward-facing camera 1304 can capture a video feed, which may include any sequence of video data as described with reference to FIGS. 1-12. A video feed can include a sequence of frames and/or samples of light patterns captured by forward-facing camera 1304's light-sensing or detection mechanisms; these frames and/or samples may then be displayed in sequential order, creating a simulation of continuous and/or sequential motion. Forward-facing camera 1304 is designed to capture the video feed from a field of vision, which is the area within which forward-facing camera 1304 captures visual information; for instance, lenses and other optical elements may focus light from the field of vision onto sensing elements of forward-facing camera 1304. Forward-facing camera 1304 can capture the video feed on a digital screen 1308. A "digital screen" 1308, as used in this disclosure, is a data structure that represents a two-dimensional spatial array of pixels, each pixel being a unit of optical data sensed by the camera's optical sensors; each pixel can be linked to a set of two-dimensional coordinates (e.g., Cartesian coordinates).

Referring further to FIG. 13, system 1300 contains at least one user alert mechanism 1312. A user alert mechanism 1312 is a device or component that can generate a user-detectable signal (e.g., a visual, audible, and/or tactile signal); any mechanism that signals to and/or engages the attention of a user may be considered an alert mechanism. Examples include, without limitation, speaker 328, any devices that produce sounds and/or vocal prompts 321, the speakers, display, lights, and/or vibrators of phone 409, any device or component capable of generating light colors 1030, 1031, 1032, and/or 1033, an LED ring, speaker array 1215, and/or blue and/or color LEDs 1211 and/or 1246, as described above. An alert mechanism can include headphones and/or a headset connected to a mobile phone or other computing device. A user alert mechanism 1312 can be configured to send a directional alert to a user; a directional alert is any signal that indicates to a user in which direction they should focus, or toward which point they should turn their gaze.

Still referring to FIG. 13, system 1300 contains a processing unit 1316. Processing unit 1316 can include any computing device described in this disclosure, such as processing unit 315 or the like. Processing unit 1316 communicates with forward-facing camera 1304 and at least one user alert mechanism 1312; it can communicate with forward-facing camera 1304 and/or a user alert mechanism 1312 using any wired or wireless connection and/or communication protocol, as described with reference to FIGS. 1-12. Processing unit 1316 can be configured and/or designed to perform any method, method step, or sequence of steps according to any embodiment of this disclosure, in any order and with any degree of repetition. Processing unit 1316 can be set up to repeat a single step or sequence until a desired outcome is reached, and outputs of iterative repetitions may be used as inputs to subsequent repetitions. Processing unit 1316 can perform any step or sequence described in this disclosure in parallel, including simultaneously or substantially simultaneously performing a step two or more times using two or more processor cores, parallel threads, or other processing units; tasks may be divided between parallel threads or processes according to any protocol that allows for the division of tasks among iterations. After reading the entirety of this disclosure, persons skilled in the art will be aware of the various ways in which steps, sequences of steps, processing tasks, and/or data can be divided, shared, or otherwise dealt with using iteration and/or parallel processing.

With continued reference to FIG. 13, system 1300 also includes a screen-location-to-spatial-location map 1320 operating on processing unit 1316. A screen-location-to-spatial-location map 1320, as used in this disclosure, is a data structure that links locations on digital screen 1308 to locations within the field of vision, allowing the system to retrieve a spatial location directly from a screen location without computer vision or classification tasks such as object identification, edge detection, or the like. As used in this disclosure, a spatial location may refer to a place in three-dimensional space, including without limitation a location defined in a three-dimensional Cartesian coordinate system, a three-dimensional polar coordinate set, and/or vectors in a three-dimensional vector space; a spatial location may alternatively or additionally refer to a location in a projection from three-dimensional space onto two dimensions, such as a two-dimensional Cartesian and/or vector-based coordinate system, and/or a vector direction. Any structure useful for retrieving one datum from another may be used as the data structure, including databases, key-value stores, key-value tables, vector and/or array structures, hash tables, and the like. As a non-limiting example, cell identifiers or pixel IDs can be used to retrieve spatial locations.
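A minimal sketch of one possible screen-location-to-spatial-location map is shown below: a plain lookup table keyed by cell identifier, returning an assumed (azimuth, elevation) direction relative to the camera axis. The cell names and angle values are illustrative assumptions.

```python
# Minimal sketch (assumed structure) of a screen-location-to-spatial-location
# map: a lookup table keyed by cell identifier, so no object detection or
# other computer-vision step is needed to recover a direction.

from typing import Dict, Tuple

# (azimuth_deg, elevation_deg) relative to the camera axis, per screen cell.
ScreenToSpatialMap = Dict[str, Tuple[float, float]]

screen_to_spatial: ScreenToSpatialMap = {
    "cell_0_0": (-30.0, 5.0),   # upper-left of the field of vision
    "cell_1_1": (0.0, 0.0),     # straight ahead
    "cell_2_2": (30.0, -5.0),   # lower-right of the field of vision
}

def spatial_location(cell_id: str) -> Tuple[float, float]:
    """Directly retrieve the direction associated with a screen cell."""
    return screen_to_spatial[cell_id]
```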

Still referring to FIG. 13, processing unit 1316 and/or forward-facing camera 1304 may divide digital screen 1308 into multiple sections or cells for purposes of mapping screen locations to spatial locations using screen-location-to-spatial-location map 1320 and/or generating alerts as discussed in more detail below. Processing unit 1316 and/or forward-facing camera 1304 can divide digital screen 1308 into multiple cells or bins; these may represent regions bordering each other and may be rectangular, hexagonal, or any other form of tessellation, and each cell and/or bin may correspond to an identifier of a plurality of cell identifiers. The location of a modified pixel may be indicated, for example, by its coordinates; alternatively or additionally, a cell or bin identifier may be used to indicate the cell and/or bin in which a particular pixel or plurality of pixels is located.
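For example, a rectangular tessellation could be implemented as below, mapping a pixel coordinate to a cell identifier that can then key a map like the one sketched above; the 3x3 grid size is an assumption.

```python
# Sketch of dividing the digital screen into rectangular bins and mapping a
# pixel coordinate to a cell identifier (grid size is an assumption).

def cell_id(x: int, y: int, width: int, height: int,
            cols: int = 3, rows: int = 3) -> str:
    """Return the identifier of the grid cell containing pixel (x, y)."""
    col = min(x * cols // width, cols - 1)
    row = min(y * rows // height, rows - 1)
    return f"cell_{row}_{col}"

# A changed pixel at (1500, 700) on a 1920x1080 screen falls in the
# middle-right cell, whose identifier can key the spatial-location map above.
print(cell_id(1500, 700, 1920, 1080))  # cell_1_2
```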

Alternatively or additionally, and continuing to refer to FIG. 13, division of digital screen 1308 could include identification of one or more regions of interest within the digital screen and division of the digital screen into such regions. As a non-limiting example, a central section of digital screen 1308 may be identified as a region of high importance, for instance corresponding to the lane occupied by the vehicle. As a further non-limiting example, one or more regions of secondary importance may be identified, for instance corresponding to lanes next to the vehicle; regions of secondary importance might have a higher threshold to trigger an alert, a lower or different degree of urgency or escalation, and/or different triggering or escalation criteria than regions of high importance. One or more regions of low importance may also be identified, including without limitation regions at the right and left of the digital screen; these regions may correspond to objects along the roadside, such as trees, buildings, and pedestrians on sidewalks. Regions of low importance might have a still higher threshold to trigger an alert, a lower or different degree of urgency and/or escalation, and/or different triggering or escalation criteria; in such regions, alerts may not be generated prior to object classification. After reviewing this disclosure in its entirety, persons skilled in the art will realize that there may be multiple importance levels for different regions or sections of digital screen 1308, that regions of any particular level of importance may consist of one region or multiple regions, and that any location in digital screen 1308 may correspond to some degree of importance for detection and alerts in that region.

With continued reference to FIG. 13, forward-facing camera 1304 and/or processing unit 1316 may alternatively or additionally be configured to detect one or more features within the field of vision and to divide digital screen 1308 according to the identified features; a feature could include lanes, divisions between lanes, the sides of the vehicle's right-of-way, or any other type of information.

Still referring to FIG. 13, processing unit 1316 and/or forward-facing camera 1304 can be calibrated to a vehicle position to determine the relative position and orientation of forward-facing camera 1304 with respect to the road. Calibration can be done in conjunction with, or during, any of the method steps and/or processes described in this disclosure. Calibration can be done with respect to a "vanishing point," as described in more detail below; alternatively or additionally, calibration can be performed using features in the field of vision other than a vanishing point.

With continued reference to FIG. 13, processing unit 1316 and/or forward-facing camera 1304 can attempt to determine the vanishing point of a road to be used as a reference. A "vanishing point" (VP), as used herein, is a place in a camera image where all parallel lines appear to converge. The VP can be used to perform perspective transformations on the road; a VP may also be used to define a horizon, which allows image processing to eliminate the sky. One or more types of calculation can be used to estimate the VP. Edge detection is one example: edge-based methods try to exploit high-contrast edges in an image and can be predicated upon the presence of predictable straight elements such as telephone lines or lane markings. Another approach is to use texture gradients and sliding windows over an image. Another approach is to use Haar features in an image to locate the road; this method is very similar to popular face-detection methods. Texture-based methods can produce better results than simple edge-based methods in some instances, but they may be more cumbersome.

Still referring to FIG. 13, a middle ground between texture-based and edge-based methods may be achieved by using a region-based algorithm with road region models. The method may use a triangular or trapezoidal model to represent the expected shape and location of the road. The triangular or trapezoidal model can be used to calculate the average RGB pixel value of the road; from this RGB value, a customized saliency map is created for an image, such as one captured from a video feed, that converts colors matching the road to 0, while the remainder of the image is scaled from 0 to 255 depending on the Euclidean distance from the average road color. This creates a grayscale saliency image; after normalization, the road may appear black. The grayscale saliency image may be binarized in a subsequent step: forward-facing camera 1304 and/or processing unit 1316 may apply an Otsu threshold to the saliency image as well as a k-means clustering. With k=4, the Otsu threshold can produce a binary image while the k-means algorithm produces a segmented image of four regions; the segmented image can be binarized by setting the region matching the road to 0 and the remainder of the image to 1. A logical "and" operation can then be used to produce an image for the next step. Otsu may be a quick and liberal method for determining the road area, while the k-means approach may be more conservative and slower; the k-means algorithm is the most computationally intensive part of the segmentation. The number of iterations the k-means algorithm runs can be altered to adjust the speed of the computation: experimentally, five iterations yielded good results, and it was determined that two iterations may be used where speed or computational efficiency is desired, maintaining quality while increasing speed and efficiency. Processing unit 1316 and/or forward-facing camera 1304 might omit the k-means algorithm entirely; the Otsu method alone could work but may cause overshooting.
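The following sketch strings together the steps described above (trapezoidal road prior, color-distance saliency, Otsu threshold, k-means with k=4, logical "and") using OpenCV; the prior geometry, normalization choices, and cluster selection are assumptions made for illustration rather than the exact procedure of this disclosure.

```python
# Rough sketch of the road-segmentation steps described above; parameter
# values and geometric choices are illustrative assumptions.

import cv2
import numpy as np

def road_mask(frame_bgr: np.ndarray) -> np.ndarray:
    h, w = frame_bgr.shape[:2]

    # Trapezoidal prior for where the road is expected to be.
    prior = np.zeros((h, w), np.uint8)
    pts = np.array([[w // 4, h - 1], [3 * w // 4, h - 1],
                    [w // 2 + w // 10, h // 2], [w // 2 - w // 10, h // 2]],
                   dtype=np.int32)
    cv2.fillConvexPoly(prior, pts, 255)

    # Saliency: distance from the average road color inside the prior.
    avg = cv2.mean(frame_bgr, mask=prior)[:3]
    dist = np.linalg.norm(frame_bgr.astype(np.float32) - np.array(avg, np.float32), axis=2)
    saliency = cv2.normalize(dist, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Otsu threshold (fast, liberal); road is dark in the saliency image.
    _, otsu = cv2.threshold(saliency, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # k-means (k=4, few iterations) as a slower, more conservative segmentation.
    data = saliency.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_MAX_ITER, 5, 1.0)
    _, labels, centers = cv2.kmeans(data, 4, None, criteria, 1, cv2.KMEANS_RANDOM_CENTERS)
    road_cluster = int(np.argmin(centers))          # lowest-saliency cluster ~ road
    kmeans_mask = (labels.reshape(h, w) == road_cluster).astype(np.uint8) * 255

    # Combine the two results with a logical "and".
    return cv2.bitwise_and(otsu, kmeans_mask)
```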

Referring now to FIG. 14, an exemplary sequence of images generated during road segmentation is illustrated. A normalized image (b) may be created from an original image (a). A binarized image (c) may be created by applying Otsu's threshold algorithm. A modified binarized image (d) may be created by applying k-means segmentation to the binarized image (c). The image may be further modified (e) by an inverted superposition of the Otsu and k-means results. A further modification of the image (f) may be produced through morphological operations. Significant contours may be extracted (g). The results of (a)-(g) may be overlaid on the original image (h).

In an embodiment, and still referring to FIG. 14, a method of VP detection could include Hough transforms along the edges of a road segment, as well as calculating texture gradients. FIG. 15 illustrates an alternative approach: a road segmentation algorithm creates a triangular shape towards the vanishing point. An x-axis coordinate for the VP can be defined as the column position of the image that contains the most road-representing pixels. The y-axis coordinate for the VP can be the position of the first road-representing pixel, searching downward from the top quarter of the image. Since the VP is unlikely to change significantly over the course of a trip, a calibration phase may be used to determine the average position of the vanishing point. An embodiment may use this sense of the camera's position relative to the road to aid in the detection of lanes and objects of concern.
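A minimal sketch of that vanishing-point estimate, assuming a binary road mask like the one produced above, might look as follows; averaging the result over a calibration phase, as described, would be done by the caller.

```python
# Sketch of the vanishing-point estimate described above: x is the column
# containing the most road pixels, y is the first row (searching downward
# from the top quarter of the image) that contains any road pixel.

import numpy as np

def estimate_vanishing_point(road_mask: np.ndarray) -> tuple[int, int]:
    h, _ = road_mask.shape
    binary = road_mask > 0
    vp_x = int(np.argmax(binary.sum(axis=0)))            # column with most road pixels
    rows_with_road = np.flatnonzero(binary[h // 4:].any(axis=1))
    vp_y = int(rows_with_road[0] + h // 4) if rows_with_road.size else h // 2
    return vp_x, vp_y
```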

Referring again to FIG. 13, forward-facing camera 1304 and/or processing unit 1316 can identify one or more regions of interest, as discussed above, to isolate content immediately ahead of the car and peripheral information in adjacent lanes. A smaller region of interest can allow for faster processing of collision detection. Forward-facing camera 1304 and/or processing unit 1316 can identify regions of interest by using one or more objects detected in the video feed; for example, forward-facing camera 1304 may determine the location of lane markings along the road, which may involve a multi-step process such as estimating the area of the road, estimating the vanishing point of the camera, and/or performing perspective transformations.

Still referring to FIG. 13, lane detection can be done using any suitable method, including identifying lanes on a road using one or more preprocessing steps and then performing a perspective transformation to create an aerial view; further detection can be made with the perspective shift in an embodiment. Color segmentation may be used to identify yellow and white lanes, and a Canny threshold may be used to create a detailed edge map. The Canny edge map may capture more detail than is necessary; to remove clutter, potholes, and other details, anything above the VP may be deleted, and any information in the road area segmented as described above can also be deleted. Because the color of well-painted lanes can differ enough from that of the road, it may be possible to preserve them: in an embodiment, a Canny threshold may capture only the edges of lanes. An optional color segmentation scheme can be used to improve robustness: the image is converted into an HSV color space, and pixels in a yellow or white range are extracted. These color segments can be added to the edge image, such as by using a logical "or" operator.
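The pre-processing just described might be sketched as below: an HSV white/yellow color mask OR-ed with a Canny edge map, with everything above the vanishing point discarded. The HSV ranges and Canny thresholds are assumptions.

```python
# Sketch of the lane pre-processing described above; color ranges and Canny
# thresholds are illustrative assumptions.

import cv2
import numpy as np

def lane_candidates(frame_bgr: np.ndarray, vp_y: int) -> np.ndarray:
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    white = cv2.inRange(hsv, (0, 0, 200), (180, 40, 255))
    yellow = cv2.inRange(hsv, (15, 80, 120), (35, 255, 255))
    color_mask = cv2.bitwise_or(white, yellow)

    edges = cv2.Canny(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY), 50, 150)
    combined = cv2.bitwise_or(edges, color_mask)

    combined[:vp_y, :] = 0          # discard everything above the vanishing point
    return combined
```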

With continued reference to FIG. 13, a perspective transformation may be performed relative to the VP, creating an aerial view of the image. In an embodiment, the aerial view may be processed using conventional image-processing methods. A projection histogram onto the x-axis may be made from the number of pixels representing each lane in the transformed view, and local maxima of this histogram may be identified. To remove local maxima that are too close together, a hard-coded constant may be used; this may help to eliminate multiple detections in the same lane. The position of a local maximum can be saved and added into a rolling average for the lane's position. The area surrounding each local maximum of the perspective image can then be isolated, and all pixels in the lane may be captured and stored in a restricted queue data structure, which may pop an element from the front of the queue once it exceeds a predefined limit. In an embodiment, the limited queue may serve as a way to preserve information across multiple frames.
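A rough sketch of the aerial-view histogram step follows: warp the lane-candidate mask to a top-down view and take the two most prominent column peaks, discarding local maxima that are too close together. The source quadrilateral and minimum peak separation are assumptions.

```python
# Sketch of the aerial-view column-histogram step; the source quadrilateral
# and the minimum peak separation are assumptions.

import cv2
import numpy as np

def lane_columns(lane_mask: np.ndarray, src_quad: np.ndarray, min_gap: int = 80):
    h, w = lane_mask.shape
    dst_quad = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(src_quad.astype(np.float32), dst_quad)
    aerial = cv2.warpPerspective(lane_mask, M, (w, h))

    hist = (aerial > 0).sum(axis=0)              # lane pixels per column
    peaks = []
    for x in np.argsort(hist)[::-1]:
        if hist[x] == 0:
            break
        if all(abs(x - p) >= min_gap for p in peaks):   # drop near-duplicate maxima
            peaks.append(int(x))
        if len(peaks) == 2:                      # keep the two most prominent lanes
            break
    return aerial, sorted(peaks)
```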

Still referring to FIG. 13, the pixels so collected may be used as data for a least-mean-squares algorithm. To approximate lanes, the least-mean-squares algorithm can be used with first-degree polynomials and/or a line-of-best-fit methodology. Alternatively or additionally, second-degree estimations can be used; second-degree estimations may capture curvature within lanes that the first degree cannot, but may take slightly longer and/or be more computationally costly to compute. First-degree polynomials were found to be influenced more by the lane data than by noise.
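The fit itself could be as simple as the sketch below, which gathers lane pixels near a detected column and fits x as a polynomial in y with numpy; degree 1 gives the line of best fit, degree 2 captures curvature. The window width is an assumption.

```python
# Sketch of the least-mean-squares lane fit described above; window width
# is an assumption.

import numpy as np

def fit_lane(aerial: np.ndarray, column: int, window: int = 60, degree: int = 1):
    left = max(0, column - window)
    ys, xs = np.nonzero(aerial[:, left:column + window])
    xs = xs + left
    # Fit x as a function of y, so near-vertical lanes fit cleanly.
    return np.polyfit(ys, xs, degree)   # degree=2 also captures curvature
```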

Referring now to FIG. 16, an exemplary process for finding lanes is illustrated. A modified image (a) may be used to create a Canny edge image (b). Portions of this image, such as everything above the VP or the like, may be removed to create a masked Canny edge image (c). A perspective image showing Canny lanes (d) and a perspective view of the color-threshold image (e) can also be created. Combining the results of (d) and (e) may produce an image of color-threshold and edge lanes (f). A second-order best-fit-line process can be applied to the lanes, as shown in (g); illustration (h) shows the transformed computed lanes derived from the best-fit lines in (g). An embodiment of the lane-detection method described above may be capable of finding multiple lanes; an embodiment may limit detection to the two lanes closest to the VP and on opposite sides of it, restricting lane detection to those lanes most relevant for detection of potential hazards and generation of directional alerts.

Referring again to FIG. 13, lane identification can be used to determine one or more regions of importance. As a non-limiting example, an area of digital screen 1308 covering the lane occupied by the vehicle carrying forward-facing camera 1304 may be identified as a region of high importance, meaning alerts are more likely to be triggered quickly and/or with greater urgency for detections in that region; such an area may include a substantially centrally located trapezoid on the digital screen. As a further non-limiting example, one or more regions of secondary importance may be identified; regions of secondary importance might have a higher threshold to trigger an alert, a lower or different degree of urgency or escalation, and/or different triggering or escalation criteria. Another non-limiting example is that one or more regions of low importance can be identified, including without limitation regions to the right or left of the digital screen; these regions may correspond to objects such as pedestrians on sidewalks or trees along the roadside. Regions of low importance might have a still higher threshold to trigger an alert, a lower or different level of urgency and/or escalation, and/or different triggering or escalation criteria.

Referring further to FIG. 13, system 1300 contains a motion detection analyzer 1324 operating on processing unit 1316. The motion detection analyzer 1324 may include any hardware component and/or software module. The motion detection analyzer 1324 can be configured to detect a rapid parameter shift on the digital screen 1308, determine a screen location on the digital screen 1308 of the rapid parameter shift, retrieve from the screen-location-to-spatial-location map 1320 a spatial location based upon the screen location, and generate the directional alert using the spatial location.

With continued reference to FIG. 13, system 1300 can include any other element described in this disclosure, whether included in any system or used in any manner. System 1300 could include one or more biosensors 1328, which may include any of the biosensors described with reference to FIGS. 1-12, including without limitation GSR and HRV sensors, as well as other sensors. System 1300 could include at least one audio input device 1332, which may include any audio input device discussed above with reference to FIGS. 1-12, including microphones. The system may include at least one user-facing camera 1336, which could include any camera described above with reference to FIGS. 1-12; user-facing camera 1336 could include a camera mounted on a mobile device, such as a smartphone and/or cellphone, for instance a "selfie cam."

Referring now to FIG. 17, an exemplary method of using artificial intelligence to monitor, correct, and evaluate user attentiveness is illustrated. At step 1705, a motion detection analyzer 1324 operating on a processing unit 1316 captures, using forward-facing camera 1304, a video feed of the field of vision on a digital screen 1308; this may be implemented as described above with reference to FIGS. 1-13.

At step 1710, and continuing to refer to FIG. 17, motion detection analyzer 1324 detects a rapid parameter shift on the digital screen 1308. A rapid parameter shift refers to a change in one or more pixels that exceeds a threshold number of pixels experiencing the change per frame rate and/or unit of time. As a non-limiting example, a rapid parameter change may be detected by comparing a first frame and a second frame in the video feed to determine whether a threshold number of pixels has changed with respect to at least one parameter between the first and second frames; the first and second frames can be consecutive and/or separated by one or several intermediate frames. The frequency at which motion detection analyzer 1324 samples frames to determine the likely degree of motion change may be chosen to capture any changes to which a user might need to respond: for example, a sample rate could be used that collects frames often enough to detect motion of pedestrians, cyclists, vehicles, and animals. A machine-learning process may determine the frame rate: for example, if object analysis or classification has been performed to identify objects in similar video feeds, then motion of such objects and rates of pixel parameter change in those video feeds can be correlated; this training data may be used to identify rates of pixel parameter change consistent with movement of classified objects, and those rates can be used to determine a frame rate for motion detection analyzer 1324. A rate of change consistent with object motion can also be used to determine a threshold level of pixel changes, for example a threshold number of pixels that have changed parameters, which may be used to detect rapid changes as described above. In an embodiment, detection of rapid changes may be analogous to human perception of movement or light change in peripheral vision, which is enough for the human eye to scan in the direction of the perceived change. Threshold levels, which may be determined for example using machine-learning, deep-learning, and/or neural-net processes as described above, can prevent small fluctuations of light or color from triggering alerts, as discussed in more detail below, while fluctuations consistent with possible movement of objects of concern are detected and used to generate directional alerts.
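A minimal sketch of such a rapid-parameter-shift detector, using plain frame differencing, is shown below; the per-pixel delta and the changed-pixel count threshold are assumptions that, as described above, could instead be learned.

```python
# Minimal sketch of detecting a "rapid parameter shift" by frame differencing:
# count pixels whose intensity changed by more than a per-pixel delta and
# compare against a threshold count (both thresholds are assumptions).

import cv2
import numpy as np

def rapid_change(prev_gray: np.ndarray, curr_gray: np.ndarray,
                 pixel_delta: int = 25, count_threshold: int = 5000):
    diff = cv2.absdiff(curr_gray, prev_gray)
    changed = diff > pixel_delta
    if changed.sum() < count_threshold:
        return None                       # no rapid parameter shift detected
    ys, xs = np.nonzero(changed)
    # Screen location of the change: centroid of the changed pixels.
    return int(xs.mean()), int(ys.mean())
```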

Still referring to FIG. 17, the parameter changing in a rapid parameter change may include any parameter that a pixel may possess. The parameters to track, and the changes to those parameters to detect, may be selected using machine-learning processes as described above to detect correlations between object motion and parameter changes. As a non-limiting example, a parameter can include at least one color value; a parameter can also include an intensity value. The at least one parameter can include multiple parameters, including without limitation a linear or other combination of several parameters derived using machine-learning, deep-learning, and/or neural-network processes.

With continued reference to FIG. 17, parameters detected and/or compared to detect a rapid parameter change could include parameters describing multiple pixels, such as parameters of geometric features on the digital screen. Processing unit 1316 might use feature detection to detect rapid parameter changes. With a static camera, features may move along epipolar lines, which intersect at the epipolar center. Processing unit 1316 can determine whether a shape having a set of features is moving in excess of a threshold amount and/or in a direction corresponding to intersection of an object with the vehicle or the vehicle's path; such a modification of digital screen 1308, corresponding to an object intersecting with the vehicle or the vehicle's path, may be referred to as collision detection. Collision detection is thus a two-dimensional change on digital screen 1308 that corresponds to conditions for the generation of a directional alert. A motion vector may be used to track the translational motion of a feature-identified shape; it may contain an n-tuple of values and may be stored in any data structure or data representation, allowing tracking of motion across digital screen 1308. A resizing vector may likewise be stored in any data structure and/or representation, allowing tracking of changes in the size of a shape on digital screen 1308. Frame-to-frame comparisons and/or comparisons to thresholds may be used.

In an embodiment, and with continued reference to FIG. 17, a motion vector and/or resizing vector may be used to calculate the time to collision (TTC) for an object based upon a parameter change. A number of parameter changes can be used to calculate numbers indicating degrees of change; processing unit 1316 might then weight features with scores corresponding to the calculated numbers. A region of interest, or a region that contains matching features, can be broken down into multiple squares, and the median score of the features in each square can then be used to determine the score for that area. FIG. 6 shows a flow diagram that illustrates an exemplary embodiment of a process flow for feature matching and detection of parameter changes.

Still referring to FIG. 17, any feature extractor and matcher can be used; for example, a Binary Robust Invariant Scalable Keypoints (BRISK) detector and matcher can be used. The algorithm may require inspection of a predetermined number of feature sets, ns. An embodiment may give each feature a weighted score based on its motion along the digital screen 1308, including its position relative to the top of the screen and the magnitude of its motion. One or more of the geometric models illustrated in FIGS. 19A-B may be used to determine features' weighted scores; both models can be expressed using the following equation:

W_i = (‖f_{i,t} − f_{i,t−1}‖ / D_m) · (sin(θ) + L(f_{i,t})) · 1/(2·n_s), where f_{i,t} and f_{i,t−1} are the coordinates of a matched feature in subsequent frames, D_m represents the maximum distance a feature can be separated by in the image, θ is the directional angle of the vector between f_{i,t−1} and f_{i,t}, and L( ) is a score that normalizes the result depending on how close the feature is to the bottom of the screen. There are many ways to implement L( ): one way is to use an exponential function of the feature's y-axis coordinate, while a linear function can be used in a mobile app for faster computation. The two models differ in how θ is calculated. In FIG. 19A, θ is calculated relative to the horizontal: the more vertically arranged the vector between the features, the closer the sin(θ) term is to one. The concept is that features moving vertically down the screen should be given more weight; this model also prioritizes features moving toward the bottom of the screen, which is indicative of a location close to the driver's seat. If only vertically moving objects are considered, however, it may be difficult to see adjacent objects that are often nearby and frequently innocuous. In FIG. 19B, items moving directly into the car's lane are given heavier weights. The new θ can be calculated by finding the vector between f_{i,t−1} and C_b, the center bottom point of the screen, and then finding the vector perpendicular to f_{i,t−1}C_b; θ is measured relative to that perpendicular vector. In the first method, all angles above the horizontal are excluded; in the latter method, angles that extend beyond the perpendicular vector may be excluded.
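Under the reconstruction of the equation given above, the matching and weighting steps could be sketched as follows; ns, the use of the image diagonal for D_m, and the linear bottom-proximity form of L( ) are assumptions, with θ measured relative to the horizontal as in the FIG. 19A model.

```python
# Sketch combining BRISK feature matching (named above) with the reconstructed
# weight W_i; ns, D_m, and the linear L() are illustrative assumptions.

import math
import cv2

def matched_features(prev_gray, curr_gray, ns: int = 50):
    """Return up to ns (f_{i,t-1}, f_{i,t}) position pairs between frames."""
    brisk = cv2.BRISK_create()
    kp1, des1 = brisk.detectAndCompute(prev_gray, None)
    kp2, des2 = brisk.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:ns]
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches]

def feature_weight(f_prev, f_curr, frame_w, frame_h, n_s):
    """W_i for one matched pair, following the FIG. 19A-style model above."""
    dx, dy = f_curr[0] - f_prev[0], f_curr[1] - f_prev[1]
    d_m = math.hypot(frame_w, frame_h)        # assumed maximum feature displacement
    theta = math.atan2(dy, dx)                # angle relative to the horizontal
    L = f_curr[1] / frame_h                   # linear bottom-proximity score
    return (math.hypot(dx, dy) / d_m) * (math.sin(theta) + L) * (1.0 / (2 * n_s))
```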

Referring now to FIG. 20, a two-dimensional Gaussian kernel can be placed at f_{i,t} for each pair of matched features. The kernel can initially be composed of values between 0 and 255. The kernel may be divided by the number of feature sets used and multiplied by the weight computed using the models of FIGS. 19A-B above.
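A sketch of that accumulation step is shown below: a weighted two-dimensional Gaussian kernel, initially scaled to 0-255, is added to a heatmap at each matched feature's current position and divided by the number of feature pairs. Kernel size and sigma are assumptions, and the pairs are assumed to be (previous, current) positions like those returned by the matching sketch above.

```python
# Sketch of accumulating weighted 2-D Gaussian kernels at each matched
# feature's current position; kernel size, sigma, and scaling are assumptions.

import cv2
import numpy as np

def motion_heatmap(frame_shape, pairs, weights, ksize: int = 51, sigma: float = 12.0):
    h, w = frame_shape[:2]
    heat = np.zeros((h, w), np.float32)
    g1d = cv2.getGaussianKernel(ksize, sigma)
    kernel = g1d @ g1d.T
    kernel = 255.0 * kernel / kernel.max()        # values initially in 0..255
    half = ksize // 2
    for (_, (x, y)), wgt in zip(pairs, weights):
        x, y = int(round(x)), int(round(y))
        y0, y1 = max(0, y - half), min(h, y + half + 1)
        x0, x1 = max(0, x - half), min(w, x + half + 1)
        heat[y0:y1, x0:x1] += kernel[y0 - y + half:y1 - y + half,
                                     x0 - x + half:x1 - x + half] * wgt / len(pairs)
    return heat
```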
