Invented by Gabriele Zijderveld, Rana el Kaliouby, Abdelrahman N Mahmoud, Seyedmohammad Mavadati, Affectiva Inc

The Market for Vehicle Manipulation by Occupant Image Analysis

Vehicle manipulation can be used to regulate various activities within a vehicle, such as setting departure times, selecting music that every occupant enjoys, controlling climate zones, and more.

Image analysis techniques can be employed to collect cognitive state data, such as facial images, from an occupant of a vehicle. This is accomplished using multiple cameras, microphones, and other imaging and audio capture devices.

What is the market for Vehicle Manipulation by Occupant Image Analysis?

The market for vehicle manipulation by occupant image analysis is vast, encompassing everything from gimmicks to genuine game changers. In particular, this technology represents an exciting new class of systems that can be applied to existing fleets of vehicles to improve safety and quality of life. Most importantly, this unobtrusive form of sensing, which requires no physical contact with the occupant, opens the door to a new era in mobility that could prove monumental in the decades ahead, including driverless vehicles and a better in-vehicle experience for passengers in cars, trucks, or trains.

What are the main applications of Vehicle Manipulation by Occupant Image Analysis?

Occupant image analysis is a technique that can be applied to vehicles and their human occupants. It can assist occupants by suggesting routes and travel modes, improving comfort while in transit, reducing stress and other negative cognitive states, and more.

One or more mobile devices 1500 can be used to collect video data on a user 1510. The captured video data includes facial features such as expressions, iris changes, and eye movements, which can be extracted and analyzed to detect cognitive states, moods, mental states, and emotional states.
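
The extract-then-analyze step described above can be sketched as a simple mapping from facial-feature measurements to a coarse state label. Everything here is an illustrative assumption, not the patent's method: the feature names (`smile`, `eye_closure`, `brow_furrow`), the thresholds, and the state labels are all invented, and a real system would derive these features from video frames with trained models.

```python
# Hypothetical sketch: inferring a coarse cognitive state from facial
# features extracted from video frames. Feature names and thresholds
# are illustrative, not taken from the patent.

def infer_state(features: dict) -> str:
    """Map simple facial-feature measurements (0..1) to a state label."""
    if features.get("eye_closure", 0.0) > 0.7:
        return "drowsy"
    if features.get("smile", 0.0) > 0.5:
        return "happy"
    if features.get("brow_furrow", 0.0) > 0.5:
        return "stressed"
    return "neutral"

# Example: features averaged over a short window of video frames.
frame_features = {"smile": 0.8, "eye_closure": 0.1, "brow_furrow": 0.0}
print(infer_state(frame_features))  # happy
```

In practice the thresholds would be learned rather than hand-set, but the shape of the pipeline (frames to features to state label) is the same.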

Another mobile device 151 may be utilized to capture audio data from a second occupant 150 while they are in another vehicle. This audio data could include ambient sounds, physiological sounds like breathing or coughing, noises made by the occupant, voices, and more.

A third mobile device 152 can be utilized to glean a cognitive state profile 150 from each occupant in the first and second vehicles, then compare further cognitive state data with this profile while they are in the second vehicle. This further cognitive state data could include face data, voice data, audio data, and more.

The cognitive state profile can help when selecting a route, choosing music that every occupant enjoys, controlling climate zones inside the vehicle, and so on. It also helps detect sleepy, impaired, or inattentive drivers so they can be prevented from operating the vehicle.

Furthermore, the cognitive state profile can be used to detect whether an occupant is likely to experience anxiety or sadness while in a vehicle. It could identify different cognitive states like happiness or mirth for them and even enhance an existing learned cognitive state profile.

Occupant image analysis is an emerging technology that can be applied to vehicles and human users within them. It has the potential to enhance road safety by identifying an occupant, assist them with choosing routes and travel modes, improve comfort while driving, reduce stress or other negative cognitive states, and more.

What are the main components of Vehicle Manipulation by Occupant Image Analysis?

Vehicle manipulation by occupant image analysis involves collecting video data 1510 and audio data from mobile devices, vehicles, and locations 1500. The collected video data can be used to detect facial expressions, action units, gestures, mental states, cognitive states, physiological data, and more.

Video data can be analyzed to detect various facial expressions, such as a smile, frown, or smirk. This analysis may take place across one or more video frames and may utilize classifiers, algorithms, heuristics, or other procedures.

For instance, facial expression processing can be automated using machine learning techniques such as neural networks or other computer vision methods. To train such an algorithm, example images known as “known good” images are labeled and used for training, and the resulting classifier is then applied to other images for analysis.
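
The “known good” training idea can be illustrated with a deliberately tiny stand-in: label a few example feature vectors, compute a centroid per label, and classify new samples by nearest centroid. This is an assumption-laden sketch, not the patent's algorithm; a real system would train a neural network on pixel data, and the two-dimensional feature vectors here are invented.

```python
# Minimal "known good" training sketch: a nearest-centroid classifier
# over hand-labeled feature vectors. Purely illustrative.
from math import dist

def train(samples):
    """samples: list of (feature_vector, label) pairs."""
    by_label = {}
    for vec, label in samples:
        by_label.setdefault(label, []).append(vec)
    # One centroid per label: the component-wise mean of its examples.
    return {label: tuple(sum(c) / len(vecs) for c in zip(*vecs))
            for label, vecs in by_label.items()}

def classify(centroids, vec):
    """Assign the label whose centroid is closest to vec."""
    return min(centroids, key=lambda label: dist(centroids[label], vec))

known_good = [((0.9, 0.1), "smile"), ((0.8, 0.2), "smile"),
              ((0.1, 0.9), "frown"), ((0.2, 0.8), "frown")]
model = train(known_good)
print(classify(model, (0.85, 0.15)))  # smile
```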

Vehicle manipulation through occupant image analysis can also include the processing of physiological data such as heart rate, heart rate variability, respiration rate, and perspiration. These indicators can be derived from video and image data collected remotely via a camera or other device, without needing physical contact with vehicle occupants.

This can be tailored to a single vehicle or to multiple vehicles, depending on the make and model, type of tires, weather conditions, traffic patterns, and so on.

The process can be tailored according to an occupant’s preferences, mental state, mood and emotion, as well as default settings on mobile devices and within the vehicle itself. Audio stimuli in the car may also be altered based on information such as driving habits, traffic conditions or road closures.

Facial data processing can be done with one or more action units (AUs), the coding units of the Facial Action Coding System that identify facial movements and behaviors such as eye movement, head motion, brow furrows, squints, lowered eyebrows, raised eyebrows, and attention. Each AU can then be scored for intensity on a scale from A (trace) to E (maximum).

What are the main technologies used in Vehicle Manipulation by Occupant Image Analysis?

Vehicle manipulation by occupant image analysis typically uses facial recognition 130, classifiers 140, and weights 135. Facial recognition is the process of recognizing a face from images processed through a deep neural network. The neural network includes multiple hidden layers 1440 and an image classification layer 1450, and it detects points, edges, and object boundaries within an image.
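
The hidden-layer and classification-layer structure can be shown with a toy fully connected network. Everything here is invented for illustration (the layer sizes, the weights, and the two labels); real facial-recognition networks are convolutional and vastly larger, but the pattern of hidden layers transforming features before a final scoring layer is the same.

```python
# Illustrative only: a miniature fully connected network showing how
# hidden layers (cf. 1440) transform inputs before a classification
# layer (cf. 1450) scores them. All weights are made up.

def relu(v):
    return [max(0.0, x) for x in v]

def dense(weights, biases, inputs):
    # One fully connected layer: each output is a weighted sum plus bias.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def forward(x):
    hidden = relu(dense([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0], x))  # hidden layer
    scores = dense([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0], hidden)    # classification layer
    labels = ["face", "non-face"]
    return labels[scores.index(max(scores))]

print(forward([0.9, 0.1]))  # face
```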

Classifiers are machine learning components that enable computers to make decisions based on input data. They usually consist of a set of rules or algorithms that identify which observations, samples, or other items fit into certain categories.

These rules can be learned through training, in which previously labeled data is analyzed to derive new rules. The rules could take the form of heuristics, algorithms, or other types of code.

In some cases, the rule is based on an identity 126 that can be recognized through facial recognition, voiceprint recognition, userid entry, mobile phone recognition, or key fob recognition. This identification may come from a camera within the vehicle or from other sensors, such as microphones for collecting voice data or audio data.

A cognitive state profile 120 can be created based on facial data and other collected cognitive state information. This profile may include details about an occupant’s schedule, preferences, user identification (ID), and so on.

The cognitive state profile for an occupant can include information about absolute time, such as the day, week, month or year. This information can be helpful in determining if they feel tired, invigorated or agitated at their current travel destination and for planning how best to assist them during their trip.
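
The idea of keying a profile on absolute time can be sketched as a lookup from (day, time period) to an expected state. The profile contents, the `expected_state` helper, and the morning/evening split are all invented for illustration; a real profile would be learned from collected cognitive state data rather than written by hand.

```python
# Sketch: a cognitive state profile keyed on absolute time (day of week
# and part of day), so the system can anticipate, say, Monday-morning
# fatigue. Profile contents are hypothetical.
from datetime import datetime

profile = {
    ("Mon", "morning"): "tired",
    ("Fri", "evening"): "relaxed",
}

def expected_state(when: datetime) -> str:
    day = when.strftime("%a")                       # e.g. "Mon"
    period = "morning" if when.hour < 12 else "evening"
    return profile.get((day, period), "neutral")

print(expected_state(datetime(2024, 1, 1, 8)))  # Jan 1 2024 is a Monday: tired
```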

Further cognitive state data can be captured on an occupant while they are in a second vehicle, and this further data can be compared to their previously learned cognitive state profile. The comparison can be done on the same computing device used to learn the profile, or on a separate device, for example when the two vehicles are of different types.
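
The comparison step can be sketched as a similarity check between a newly captured feature vector and the baseline vector stored in the profile. The vector components, the cosine-similarity choice, and the 0.9 threshold are all assumptions made for illustration, not details from the patent.

```python
# Sketch: further cognitive state data (a feature vector captured in the
# second vehicle) is compared against the learned profile vector; a
# similarity below a threshold flags a change in cognitive state.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def state_changed(profile_vec, new_vec, threshold=0.9):
    return cosine(profile_vec, new_vec) < threshold

baseline = [0.7, 0.2, 0.1]   # e.g. calm, attentive, low stress (hypothetical)
current = [0.1, 0.2, 0.9]    # e.g. high stress
print(state_changed(baseline, current))  # True
```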

The Affectiva Inc invention works as follows

Vehicle manipulation can be done using occupant image analysis. A camera installed in the vehicle collects cognitive state data on the occupants. Based on that data, a cognitive state profile for an occupant is learned. The cognitive state profile includes information about absolute time and about trip duration. Voice data is also collected and used to augment the cognitive state data. On a second computing device, further cognitive state data is captured on the occupant while they are in a second vehicle. On a third computing device, the further cognitive state data is compared with the cognitive state profile that was learned for the occupant, and based on that comparison, the second vehicle is manipulated.

Background for Vehicle manipulation by occupant image analysis

People travel for many reasons. Moving one or more people from place to place can be done for practical reasons, such as commuting to work or school, or for pleasure, relaxation, discovery, exercise, or just for fun. Grimmer circumstances such as war, famine, or displacement can also force travel. People choose the best mode of transportation depending on their purpose and the available options. There are three broad types of transportation: ground, water, and air. Ground transportation can be done on foot, by horse, or by vehicle such as a bicycle or automobile. Water transport can use a personal vehicle, such as a kayak, canoe, or raft, or a public vehicle, such as a ferry or ship. An airplane or an airship can be used for air transportation. Whatever mode of transport a person chooses, it is most likely that they will use a vehicle.

People spend a lot of time in vehicles. The daily commute, taking the kids to sports practice, music lessons, debate club, or the vet clinic, shopping for food or household goods, traveling, and other activities that require transportation are all examples of vehicle-related travel. Travel by car can be frustrating, time-consuming, and annoying. Automobile transportation is complicated by traffic jams, accidents, and poor roads, and made worse by unfamiliar vehicles, unfamiliar cities, construction zones, and having to remember to drive on the opposite side of the road in certain countries. These realities can lead to catastrophic results: upset motorists can experience road rage and can injure pedestrians, cyclists, animals, and property.

Vehicular manipulation is based on occupant image analysis, and an autonomous or semi-autonomous vehicle can be manipulated. An in-vehicle camera is used to gather cognitive state data from the occupant, who could be either the vehicle’s operator or a passenger. Cognitive state data may include facial data, image data, and so on. Other in-vehicle sensors include a microphone to record voice data and audio data, as well as sensors that collect physiological data. Cognitive state data can be collected from either the driver or a passenger, and the vehicle can be a first vehicle, a second vehicle, or a public transportation vehicle. One or more cameras, or another type of image capture device, can capture the image data and facial data. A cognitive state profile can be learned for the vehicle’s occupant based on the cognitive state data; such profiles can include cognitive states, mental states, emotional states, and moods. Further cognitive state data can then be captured on the occupant while they are in a second vehicle, which can be the same vehicle as the first or a different vehicle from a fleet. The further cognitive state data is compared to the cognitive state profile generated for the occupant. The comparing can include identifying the occupant of the second vehicle, comparing the cognitive state data captured in the second vehicle with the learned profile, and so forth. Based on the comparison, the second vehicle can be manipulated. The manipulation of the second vehicle can be performed in the same way as for the first vehicle, or it can be modified to fit the particular make or class of the second vehicle.

In some embodiments, a computer-implemented vehicle manipulation method includes: collecting cognitive state data, including facial data, from an occupant of a vehicle; learning a cognitive state profile for that occupant from the data; capturing further cognitive state data on the occupant while they are in a second vehicle; comparing the further data with the learned profile; and manipulating the second vehicle based on the comparison. Some embodiments include collecting voice data and augmenting the cognitive state data with that voice data. The occupant may be a passenger, and the vehicle can be either autonomous or semi-autonomous. The cognitive state profile can be used across multiple vehicles. The manipulation can include locking the vehicle out of operation, recommending a break, recommending another route, responding to traffic, or setting up seats, mirrors, and climate control. Other embodiments base the manipulating on the make of the second vehicle, its vehicle class, its tires, and so on.

The many features, aspects, and benefits of various embodiments will become apparent from the following description.

Individuals can spend hundreds of hours per year traveling in vehicles like buses, trains, and airplanes. Vehicles are used for transportation, running errands, and traveling. If the vehicle’s occupant isn’t primarily responsible for operating it, as in a train or a self-driving automobile, they can spend their time enjoying entertainment options inside the vehicle, including movies, games, video, and telephone calls. The ride experience can also influence the entertainment choice. A harrowing, high-speed ride or heavy stop-and-go traffic can be distracting for some people; for other occupants, a serious, intense movie might be the best way to take their mind off a nerve-wracking experience. The cognitive state data of occupants can therefore play a crucial role in optimizing vehicle operation and entertainment experiences. This is particularly important for partially or fully autonomous vehicle travel.

A wide range of cognitive state data can be displayed by an individual while they are traveling in or atop a vehicle. Cognitive state data could include facial, voice, image, audio, physiological, and other data. The collected cognitive data can be used to create a cognitive state profile. A cognitive profile can be used for identification of an individual occupant in a vehicle and to determine their cognitive state. For validation and verification purposes, as well as for configuring the vehicle, the identification of the occupant is useful. It can be either an autonomous or semi-autonomous vehicle. An assessment of cognitive state can help determine whether an occupant should operate the vehicle or take a break, change routes, etc. This can improve road safety and enhance the transport experience for the occupants of the vehicle. The ability to collect cognitive state data about passengers and vehicle operators allows for adaptation of vehicle operating characteristics as well as vehicle environmental experiences.

Cognitive state data can be collected from an individual. This data could include facial data, voice data, and physiological data. Such data can be used to determine the individual’s emotional, mental, and mood states, that is, their cognitive state. Cognitive state data collected from an individual can be used to create a cognitive state profile, which can include information about the individual such as their preferred vehicle types and settings. Further cognitive state data can then be gathered and compared with the learned profile, and the comparison can be used to manipulate a second vehicle, which can be semi-autonomous or autonomous. Controlling a vehicle in this way can reduce the time it takes to set up and operate the vehicle; verify that the individual is in a fit cognitive state to operate it; improve road safety; and enhance the user’s transportation experience. An enhanced transportation experience means that the individual can travel independently, securely, and comfortably. Road safety is improved by assisting the individual in navigating unfamiliar surroundings or operating a vehicle on unfamiliar terrain, and by preventing a sleepy, impaired, or inattentive person from operating the vehicle.

The disclosed techniques allow vehicles, including semi-autonomous and autonomous vehicles, to be manipulated for many purposes: helping an occupant get around, selecting routes, increasing comfort, reducing stress, and promoting other positive cognitive states. Vehicle manipulation is based on occupant image analysis. The vehicle’s camera is used to collect cognitive state data, including facial data, on the occupant. The camera may be a video camera or a still camera, and can include a camera array or a plenoptic lens. Based on the cognitive state data, a cognitive state profile can be learned for the occupant. Information on absolute time, such as day of week, day of month, or year, can be included in the profile. Further cognitive state data is collected while the occupant occupies a second vehicle, which can be the same vehicle, a different vehicle in a fleet, or a vehicle of a different make or class. Weather patterns and traffic patterns can also factor into the analysis. The further cognitive state data is compared to the profile that was learned for the occupant. The comparison can reveal differences in how the occupant interacts with the second vehicle, including differences in cognitive state; the data can be used to detect sadness, anger, frustration, confusion, disappointment, hesitation, and other cognitive states. Based on the comparison, the second vehicle can be manipulated. Manipulation of the second vehicle may include locking the vehicle out of operation, recommending a break, recommending another route, responding to traffic, or adjusting seats, mirrors, and climate control. The make and model of the second vehicle, its vehicle class, its tires, weather patterns, and traffic patterns can all factor into the manipulation.

FIG. 1 is a flow chart for vehicle manipulation using occupant image analysis. Based on collected cognitive state data, a cognitive state profile is learned for a vehicle occupant. Further cognitive state data is then collected and compared to the profile, and the comparison is used to manipulate a second vehicle. The vehicle’s camera collects cognitive state data, including facial data, on the occupant; a cognitive state profile is learned from this data; further cognitive state data is collected while the occupant is in a second vehicle; the further data is compared with the learned profile; and the second vehicle is manipulated based on the comparison. Manipulation can also be limited to simply monitoring an occupant or driver of a vehicle. The flow 100 includes collecting cognitive state data 110. A camera 112 within a vehicle can be used to collect the cognitive state data, which can include images of the occupant and facial data 114 about the occupant obtained from those images. The camera can be connected to an electronic device, to the vehicle, or to another means that allows interaction with the individual or group. Multiple cameras can be used to obtain a series of images. The camera can include a webcam, a still camera, a video camera, a thermal camera, a CCD device, a smartphone camera, a three-dimensional camera, a depth camera, a plenoptic camera, or any other apparatus that allows captured data to be analyzed in an electronic system. The image data captured by the camera includes the vehicle’s occupant and can include other occupants; in some embodiments, the occupant can be a passenger. The image data may include facial data on the vehicle’s occupant. In certain embodiments, manipulating the vehicle includes capturing cognitive state data on a second occupant and manipulating the vehicle based on the cognitive state data for both the occupant and the second occupant.

The flow 100 can include collecting voice data 116 and augmenting the cognitive state data with the voice data. A microphone, an audio transducer, or another apparatus that captures audio can be used to collect the voice data, which can include audio data. Ambient sounds such as road noise or interior sounds can all be part of the audio data. In some embodiments, the cognitive state data includes only audio and voice data, without facial data. Audio data can be evaluated against the expected vehicle interior or exterior noise, making noise cancellation possible, and the voices of individual vehicle occupants can be located and isolated for further analysis. This audio cognitive state data can then be used to manipulate the vehicle. Other embodiments combine the audio cognitive state data with other types of cognitive state data; multiple modalities of cognitive state data can be combined. Some embodiments augment the data based on lexical analysis, which allows for sentiment evaluation. Sentiment may include subjective information and affective states; analyzing sentiment can determine the attitude of a vehicle operator toward the vehicle, the trip, and the travel conditions. Lexical analysis can help determine cognitive state, mental state, emotional state, mood, and so on. Cognitive state data augmented with voice data can be used to detect drowsiness, distraction, sadness, stress, joy, anger, frustration, confusion, disappointment, hesitation, and cognitive overload. In certain embodiments, the voice data includes non-speech vocalizations, such as sounds made by an occupant of the vehicle, including grunts, squeals, snoring, laughing, and filled or unfilled pauses. The flow 100 includes evaluating the voice data 118 for pitch, loudness, and language content. The evaluation of voice data can be used in determining the cognitive state of an occupant of a vehicle, and voice data can be converted into text and analyzed.
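
The pitch and loudness evaluation can be sketched with two classic, very crude measurements: RMS energy for loudness and zero-crossing rate for a rough pitch estimate. Both measures are standard signal-processing building blocks, but using them alone is an assumption made here for brevity; a real system would use richer speech features.

```python
# Sketch: evaluating voice data for loudness (RMS energy) and a crude
# pitch estimate (zero-crossing rate). Illustrative only.
from math import sqrt, sin, pi

def rms(samples):
    """Root-mean-square amplitude: a simple loudness measure."""
    return sqrt(sum(s * s for s in samples) / len(samples))

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs where the sign flips."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return crossings / len(samples)

# A synthetic 101 Hz tone sampled at 8 kHz stands in for captured voice audio.
tone = [sin(2 * pi * 101 * t / 8000) for t in range(8000)]
print(round(rms(tone), 2))                      # 0.71 for a unit sine
print(zero_crossing_rate(tone) * 8000 / 2)      # roughly 100, near the 101 Hz tone
```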

The flow 100 includes learning, on a first computing device, a cognitive state profile 120 for the occupant based upon the cognitive state data. An embodiment recognizes the identity of the occupant. An identity can be established using facial recognition, voiceprint recognition, userid entry, mobile phone recognition, key fob recognition, or recognition of another electronic signature. Information about the vehicle’s occupant, such as schedules, preferences, and an identification (ID), can be included in the cognitive state profile. Information about trip duration can also be included, covering typical times, expected times, increased times due to traffic or weather conditions, and so on. In some embodiments, the profile includes information about absolute time, which can be used to determine whether travel conditions such as rush hour make travel difficult for an occupant, or whether the occupant prefers a warmer or cooler vehicle interior. Absolute time information can include the time of day, day of week, month, or year. The flow 100 includes facial recognition 130. Classifiers can be used to recognize the facial features of an occupant, and a deep neural network can use the classifiers and their associated weights. Facial recognition can be based on identifying facial landmarks and regions, as well as distinguishing facial characteristics like scars, moles, or facial jewelry. Embodiments based on facial recognition include using the cognitive state profile across a fleet of vehicles 132; the cognitive state profile and facial recognition of the occupant together enable vehicle manipulation. In certain embodiments, the cognitive state profile is based on cognitive event temporal signatures. Temporal signatures can include rise time, duration, fall time, and so on, and can be used to determine the duration or intensity of a cognitive state.
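
The temporal-signature idea (rise time, duration, fall time of a cognitive event) can be sketched over a sampled expression-intensity series. The 0.5 onset threshold and the sample series are invented for illustration; the patent does not specify these values.

```python
# Sketch: extracting a cognitive event's temporal signature (rise time,
# event duration, fall time, in samples) from an intensity time series.

def temporal_signature(series, threshold=0.5):
    above = [i for i, v in enumerate(series) if v >= threshold]
    if not above:
        return None                      # no event detected
    onset, offset = above[0], above[-1]  # first/last above-threshold sample
    peak = max(range(len(series)), key=series.__getitem__)
    return {"rise": peak - onset,
            "duration": offset - onset + 1,
            "fall": offset - peak}

# Hypothetical smile-intensity samples over time.
smile_intensity = [0.0, 0.2, 0.6, 0.9, 1.0, 0.8, 0.6, 0.3, 0.0]
print(temporal_signature(smile_intensity))  # {'rise': 2, 'duration': 5, 'fall': 2}
```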

The flow 100 includes capturing further cognitive state data 140 while the occupant is in a second vehicle 142. The further cognitive state data can be used both to improve the learned cognitive state profile and to identify the occupant. The second vehicle 142 can be chosen from a wide range of vehicles, including cars, trucks, buses, sport utility vehicles (SUVs), specialty vehicles, motorcycles, scooters, mopeds, and boats. In some embodiments, the first vehicle and the second vehicle are the same vehicle. As discussed previously, cognitive state data can be collected at different times of day, on different days, and in different seasons. The vehicle and the second vehicle may be different vehicles in some embodiments; in others, both belong to a larger fleet. Any or all of the vehicles in the fleet can be autonomous or semi-autonomous. The flow 100 further includes capturing cognitive state data on a second occupant 144 and manipulating the second vehicle 146 based on the cognitive state data for both the occupant and the second occupant. The second occupant’s cognitive state data can be used to select a route based on their preferences, choose mutually agreeable music, control climate zones, and so forth.

The flow 100 includes comparing, on a third computing device, the further cognitive state data with the cognitive state profile 150 that was learned for the occupant. In some embodiments, the second and third computing devices are the same device. The further cognitive state data may include facial data, voice data, audio data, physiological data, and so on. Comparing the further cognitive state data to the profile can both identify the occupant and determine their cognitive state. In some embodiments, cognitive state data is also captured on a passenger in the vehicle, and the vehicle can have more than one passenger. A vehicle can be autonomous, such as a self-driving automobile or an autonomous truck, or semi-autonomous in certain embodiments; semi-autonomous features include self-parking and collision avoidance signals like alarms and shaking seats.

The flow 100 includes manipulating the second vehicle 160 based on the further cognitive state data. The second vehicle can be manipulated by setting it up for the identified occupant, loading their cognitive state profile, and operating the vehicle in the way that is most comfortable or preferable to them. The manipulating can include a locking-out operation, recommending a break to the occupant, recommending a different route, responding to traffic, or adjusting seats, mirrors, and climate control. These manipulations can be done for safety, convenience, comfort, and so on. Other embodiments base the manipulating on the make and class of the second vehicle, its tires, weather patterns, and traffic patterns, and the manipulating can be modified to match the features and equipment of the second vehicle, for example choosing routes based on suspension, steering stability, and tires. The order of the steps in the flow 100 can be changed. Various embodiments of the flow 100 can be included in a computer program product embodied on a non-transitory computer-readable medium that includes code executable by one or more processors.

FIG. 2 is a flow chart for vehicle manipulation. Vehicle manipulation is based on occupant image analysis. The vehicle’s camera is used to collect cognitive state data, including facial data, on the occupant. Based on the cognitive state data, a cognitive state profile of the occupant is learned. While the occupant is in a second vehicle, further cognitive state data is collected and compared against the profile, and based on the comparison, the second vehicle is manipulated. The flow 200 includes manipulating the second vehicle 210 using the further cognitive state data. As discussed below, the manipulating can control different actions of a vehicle, which can be a standard, semi-autonomous, or autonomous vehicle. One embodiment also includes manipulating the first vehicle.

The flow 200 includes a locking-out operation 220, which can selectively enable or disable use of the vehicle and can depend on identifying the vehicle’s occupant. The locking-out operation may include disabling or enabling features of the vehicle, or preventing it from being used at all. The flow 200 includes recommending a break 222 to the occupant. Elapsed travel time, vehicle operation time, cognitive state, boredom, and other factors can all trigger a break recommendation, which could suggest a short rest, a stop for food, or both. The flow 200 includes recommending a different route 224, for example because of traffic conditions, weather conditions, or cognitive state issues. The flow 200 includes recommending how far to drive 226, based on elapsed driving time, difficulty of the route, boredom, inattentiveness, anxiety, and so on. The flow 200 includes responding to traffic 228: manipulation can direct the vehicle to a lower-traffic route, delay departure times or schedule travel outside of rush hour, or reroute the vehicle in the event of an accident.
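
The decisions above (lock-out, break, route change) can be sketched as a small policy function over elapsed driving time and detected cognitive state. The state labels, thresholds, and priority ordering are assumptions made for illustration; the patent describes the manipulations but not a specific decision rule.

```python
# Sketch of the manipulation decisions: elapsed driving time and a
# detected cognitive state feed a simple, hypothetical policy.

def recommend(elapsed_hours: float, state: str) -> str:
    if state == "impaired":
        return "lock out operation"      # cf. locking-out operation 220
    if state == "drowsy" or elapsed_hours > 3:
        return "recommend a break"       # cf. break recommendation 222
    if state == "frustrated":
        return "recommend a different route"  # cf. route recommendation 224
    return "continue"

print(recommend(3.5, "neutral"))   # recommend a break
print(recommend(1.0, "impaired"))  # lock out operation
```

The priority ordering (safety-critical lock-out first, then fatigue, then comfort) is a design choice; a production system would weigh many more signals.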

The flow 200 includes manipulating the vehicle to meet the needs and preferences of the occupant. The flow 200 includes adjusting seats 230; the type of vehicle and the preferences of the occupant determine the adjustment, which may include moving the seat forward or backward, tilting the seat, and adjusting the seat temperature. The flow 200 includes adjusting mirrors 232, based on who is in the vehicle, whether it is daytime or nighttime, how heavy traffic is, and so on. The flow 200 includes climate control 234: the vehicle’s climate can be adjusted based on the occupant, the time of day, and the season (e.g. heat or air conditioning). Climate control may include adjusting the interior temperature 236 of the second vehicle, based on the occupant’s preferences, the vehicle type, and other factors; zones inside the vehicle can be manipulated to adjust the interior temperature. The flow 200 includes lighting 238, which can be controlled by changing the color temperature or light level. The flow 200 includes audio stimuli 240, which may include alerts, warnings, signals, tones, or other information, controlled based on the cognitive state profile of the vehicle’s occupant. The flow 200 also includes manipulating music 242 in the vehicle, based on default settings and on the preferences, cognitive state, mood, and emotion of the occupants.

The flow 200 includes brake activation 250 for the vehicle. Brake activation may include speed control, slowing, stopping, emergency stopping, and other functions. In some embodiments, throttle activation is included. Throttle activation may include speed control, compensating for hills, accelerating, decelerating, and so on. The flow 200 also includes steering control 252. Steering control is used to manipulate the vehicle for following a route, changing lanes, turning, taking evasive action, and so forth. In some embodiments, throttle activation, brake activation, and steering control can all be controlled for an autonomous or semi-autonomous vehicle. Brake activation, throttle activation, and steering control can be used for emergency braking, collision avoidance, emergency maneuvers, and other purposes.
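Operations 250 and 252 can be thought of as a dispatcher that translates a high-level decision into actuator commands. The situation names and command values below are illustrative assumptions only.

```python
def control_command(situation):
    """Map a detected situation to brake/throttle/steering commands (sketch).
    Brake and throttle are normalized to [0, 1]; steering is in degrees,
    with negative values steering left. All values are invented examples."""
    if situation == "emergency_stop":
        return {"brake": 1.0, "throttle": 0.0, "steering_deg": 0.0}
    if situation == "lane_change_left":
        return {"brake": 0.0, "throttle": 0.3, "steering_deg": -5.0}
    if situation == "hill_climb":
        # Extra throttle compensates for the grade, as the text mentions.
        return {"brake": 0.0, "throttle": 0.6, "steering_deg": 0.0}
    return {"brake": 0.0, "throttle": 0.2, "steering_deg": 0.0}  # cruise

print(control_command("emergency_stop"))
```

In a semi-autonomous vehicle, such commands would be arbitrated against the driver's own inputs rather than applied directly.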

Vehicle manipulation is possible for one vehicle, two vehicles, multiple vehicles, and so on, where the manipulation can be based on a cognitive state profile of an occupant of a vehicle. The cognitive state profile can be applied to a number of vehicles. A fleet of vehicles could include cars, trucks, SUVs, buses, special-purpose vehicles, and so forth. The vehicle and the second vehicle can be operated by the same operator or by multiple operators, and they may be the same vehicle type or even the same vehicle model. A fleet can include both the vehicle and the second vehicle. The flow 200 also includes a vehicle make 260. In some embodiments, the entire fleet consists of vehicles from the same manufacturer. The manipulation of the vehicles can be done through an electrical interface, an application programming interface (API), or a combination of both. The flow 200 also includes a vehicle class 262. A vehicle class can be a compact vehicle, a medium-sized vehicle, or a large vehicle; it can also include a van, a truck, a bus, a motorcycle, and so on. The type of tires 264 is included in the flow 200 for vehicle manipulation. The type of tire (e.g., an all-weather tire, a winter tire, or an off-road tire) can be taken into account when determining vehicle speed, braking, acceleration, choice of route, and so forth. A weather pattern 266 is also included in the flow 200. The weather pattern can be used to set vehicle departure times, choose a route to avoid severe weather, choose a route based on altitude or latitude, and so forth. A traffic pattern 268 is included in the flow 200. Traffic patterns, like weather patterns, can be used to manipulate the vehicle; for example, they can be used to determine a departure time and to choose a route.
The second vehicle can be manipulated based on the cognitive state profile as well as any of the other aspects discussed herein; the manipulation can be based on the make of the second vehicle, its vehicle class, its tires, and traffic patterns. Various steps in the flow 200 may be changed in order, repeated, or omitted without departing from the disclosed concepts. Many embodiments of the flow 200 may be included in computer program products embodied on a non-transitory computer-readable medium that contains code executable by one or more processors.
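The departure-time use of weather patterns 266 and traffic patterns 268 can be sketched as a simple scheduling check. The representation of patterns as sets of bad hours is a deliberate simplification, invented here for illustration.

```python
def plan_departure(base_hour, traffic_pattern, weather_pattern):
    """Choose a departure hour on a 24-hour clock (sketch).
    Patterns are simplified to sets of hours to avoid; a real system
    would use far richer traffic and weather data."""
    hour = base_hour
    while hour in traffic_pattern or hour in weather_pattern:
        hour = (hour + 1) % 24  # push departure past rush hour or a storm
    return hour

rush_hours = {8, 17, 18}
storm_hours = {9}
print(plan_departure(8, rush_hours, storm_hours))  # 10
```

The same parameterization idea extends to the other fleet factors: vehicle make, class, and tire type would each contribute constraints to the route and speed decisions rather than to the departure time.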

FIG. 3 is a flowchart for selecting cognitive state profiles. Cognitive state profiles can be learned from cognitive state data, and one of many cognitive state profiles can be chosen. A cognitive state profile is created by analyzing facial data and other cognitive state data from the occupants of vehicles. Further cognitive state data is collected and compared against the cognitive state profile, and based on that comparison a vehicle, such as a second vehicle, can be manipulated through occupant image analysis. Further cognitive state data can also be collected from other occupants, who may be present in the same vehicle or in different vehicles. The flow 300 also includes learning additional cognitive state profiles from additional vehicle occupants. Learning additional cognitive state profiles can be based on collecting cognitive state data such as voice data, facial data, and audio data. Cognitive state data may include physiological data like heart rate, heart rate variability, accelerometer data, and the like.
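The comparison step in flow 300 (matching newly collected cognitive state data against a stored profile) can be sketched as a distance check over feature vectors. The feature names and tolerance below are assumptions made for illustration, not details from the patent.

```python
import math

def matches_profile(profile, sample, tolerance=0.15):
    """Compare new cognitive state data against a stored profile (sketch).
    Both arguments are dicts of features normalized to [0, 1]; returns
    True when the Euclidean distance over shared features is within
    the tolerance."""
    keys = profile.keys() & sample.keys()
    dist = math.sqrt(sum((profile[k] - sample[k]) ** 2 for k in keys))
    return dist <= tolerance

stored = {"drowsiness": 0.2, "stress": 0.3, "attention": 0.8}
observed = {"drowsiness": 0.25, "stress": 0.35, "attention": 0.75}
print(matches_profile(stored, observed))  # True
```

A match would confirm the occupant's identity or typical state, while a mismatch could prompt the system to select a different stored profile or to learn a new one, as the flow describes.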
