Autonomous Vehicles – Shadi Mere, Theodore Charles Wingrove, Michael Eichbrecht, Kyle Entsminger, Visteon Global Technologies Inc

Abstract for “Autonomous driving interface”

The present disclosure provides an autonomous vehicle interface (AVI) for a vehicle, along with a method for enabling and using an autonomous vehicle interface for a vehicle. The autonomous vehicle interface can be used to seamlessly switch driving modes, changing the vehicle's operation from user-controlled to autonomous controlled driving without the user having to activate the autonomous controlled driving. To activate the autonomous control, the autonomous vehicle interface may take into account active components and environmental conditions. The autonomous vehicle interface may also be configured to evaluate the user's operational state and/or attentiveness in order to activate the autonomous driving mode, and to switch the mode of operation from autonomous controlled to user-controlled driving in response to a positive change in the user's operational state and/or attentiveness.

Background for “Autonomous driving interface”

Semi-autonomous and autonomous vehicles, such as robotic or driverless vehicles, are in development. These vehicles can sense and navigate the environment around them without the need for human input. Many elements can be used to make the vehicles work, including radar tracking, a global positioning system (GPS), computer vision, and others. The vehicles can also map and update their driving routes based on any conditions or changes encountered during operation.

The vehicle will notify the user via an audio or visual alert to request that they activate or switch to an autonomous mode. The vehicle can also notify the user with a mechanical alert, such as jarring or vibration of the seat. Once the user has activated the autonomous mode, it must be deactivated to return to regular driving.

However, there are many problems with the semi-autonomous and autonomous vehicles available at present. The first is the abruptness with which the audio, visual, and mechanical alerts are presented, which can cause the user to become anxious and jerk the steering column or otherwise create unsafe driving conditions. Second, the semi-autonomous or autonomous modes require that the user activate the mode, which can be difficult or even impossible depending on the user's current state; the autonomous modes may not work, for example, if the driver falls asleep behind the wheel. Third, the transition from user-controlled driving into an autonomous mode is not seamless. Finally, today's semi-autonomous and autonomous vehicles do not consider all the circumstances experienced by drivers before notifying them to change into an autonomous mode.

The disclosed aspects include a system that provides an autonomous vehicle interface for a vehicle. The system comprises an autonomous activation interface that switches between an autonomous vehicle mode and a user-controlled mode, and a component interface configured to interface with a component module of the vehicle. In response to the component interface sensing a stimulus independent of the component module, the component interface triggers the autonomous activation interface to switch from the user-controlled mode to the autonomous mode.

The component interface could be configured to couple to a mobile device detection device and, in response to the user engaging the mobile device, initiate the switch to the autonomous mode.

In another example, the component interface could be configured to couple with a global positioning system (GPS) device and, in response to the user engaging the GPS device, initiate the switch to the autonomous mode.

In another example, the component interface is configured to couple to an infotainment device and, in response to the user engaging the infotainment device, initiate the switch to the autonomous mode.

In another example, engagement of the vehicle's engine independent of user control defines the autonomous mode.

In another example, engagement of the vehicle's steering independent of user control defines the autonomous mode.

Another aspect disclosed herein includes an autonomous activation interface that switches between an autonomous vehicle mode and a user-controlled vehicle mode, and a gaze tracking interface configured to interface with a gaze tracking device and, in response to the gaze tracking device sensing a specific event, trigger the autonomous activation interface to switch from the user-controlled mode to the autonomous mode.

In another example, the specific event can be defined by the user turning their eyes away from the front of the vehicle.

In another example, the specific event can be defined by the user closing their eyes.

In another example, engagement of the vehicle's engine independent of user control defines the autonomous mode.

In another example, engagement of the vehicle's steering independent of user control defines the autonomous mode.

Another aspect disclosed herein includes an autonomous activation interface that switches between an autonomous vehicle mode and a user-controlled vehicle mode, and a vehicle sensor interface that interfaces with a sensor associated with the vehicle. In response to the vehicle sensor interface sensing a predetermined event independent of a user activation, the autonomous activation interface triggers the switch from the user-controlled mode to the autonomous mode.

In another example, the sensor is a weather sensor.

In another example, the sensor is a lane detection sensor.

In another example, the sensor is a road condition sensor.
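
The three aspects above share a common pattern: an interface senses a stimulus or event that arrives independently of any explicit user activation and, when it fires, tells the autonomous activation interface to change modes. The following is a minimal sketch of that pattern in Python; the class and method names (AutonomousActivationInterface, TriggerInterface, and so on) are illustrative assumptions rather than names taken from the disclosure.

```python
# Minimal sketch of the disclosed pattern: a trigger interface senses an
# event independent of explicit user activation and asks the autonomous
# activation interface to change modes. All names here are assumptions.
from enum import Enum, auto


class Mode(Enum):
    USER_CONTROLLED = auto()
    AUTONOMOUS = auto()


class AutonomousActivationInterface:
    """Switches the vehicle between the two operating modes."""

    def __init__(self) -> None:
        self.mode = Mode.USER_CONTROLLED

    def switch_to(self, mode: Mode) -> None:
        if mode is not self.mode:
            print(f"switching from {self.mode.name} to {mode.name}")
            self.mode = mode


class TriggerInterface:
    """Stand-in for the component, gaze-tracking, or vehicle-sensor
    interface: each reports a sensed event to the activation interface."""

    def __init__(self, activation: AutonomousActivationInterface) -> None:
        self.activation = activation

    def on_event(self) -> None:
        # A sensed stimulus (mobile-device use, eyes off the road, bad
        # weather, a crossed lane edge, ...) triggers the switch.
        self.activation.switch_to(Mode.AUTONOMOUS)


if __name__ == "__main__":
    activation = AutonomousActivationInterface()
    TriggerInterface(activation).on_event()   # e.g. the driver closes their eyes
```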

Detailed examples of the present disclosure are provided herein; however, it is to be understood that the disclosed examples are merely exemplary and may be embodied in other and alternative forms. These examples are not intended to illustrate or describe all possible forms of the disclosure; the specification is a description rather than a limitation, and changes may be made without departing from the scope and spirit of the disclosure. Features of the disclosure illustrated and described in any one of the figures can be combined with features illustrated in one or more other figures to produce examples that are not explicitly described or illustrated. The examples described are representative of typical applications; for particular applications or implementations, it may be desirable to combine and modify the features consistent with the teachings of this disclosure.

Disclosed herein is an autonomous vehicle interface that does not startle the user: it transitions seamlessly from non-autonomous, user-controlled driving to autonomous controlled driving without user activation, and it takes into consideration the user's state and the surrounding environmental conditions while driving.

The present disclosure provides an autonomous vehicle interface (AVI) for a vehicle and a method for enabling and using an autonomous vehicle interface. The autonomous vehicle interface may seamlessly switch driving modes, changing the vehicle's mode of operation from user-controlled to autonomous controlled driving without the user having to activate the autonomous control. The vehicle can seamlessly switch between driving modes, moving between levels of autonomous controlled driving based on the user's operational state. The autonomous vehicle interface can be configured to associate the user's operational state with the surrounding environment and to activate a corresponding level of autonomous controlled driving. The autonomous vehicle interface can further be configured to activate autonomous controlled driving depending on the user's operational state, which may be defined as the user's level of attentiveness or distraction. The autonomous vehicle interface may also take into account active components being used by the user when activating autonomous controlled driving. Active components may include vehicle components that have been turned on or are operating, such as the vehicle radio. The autonomous vehicle interface can also be configured to change the vehicle's mode of operation from autonomous controlled to user-controlled driving in response to changes in the user's operational state and/or environmental conditions.
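
Because the interface associates the user's operational state with a level of autonomous controlled driving, one simple way to express that association is a table-driven mapping. The sketch below is an assumption about how such a mapping might look; the particular states, levels, and pairings are illustrative only.

```python
# Illustrative mapping from the user's operational state to a level of
# autonomous controlled driving. The states, levels, and pairings are
# assumptions made for this sketch, not values from the disclosure.
from enum import Enum, auto


class OperationalState(Enum):
    ATTENTIVE = auto()      # gaze on the road, no active distractions
    DISTRACTED = auto()     # e.g. engaging an active component or phone
    UNRESPONSIVE = auto()   # e.g. asleep or unconscious


class DrivingLevel(Enum):
    USER_CONTROLLED = auto()
    PARTIAL_AUTONOMOUS = auto()
    FULL_AUTONOMOUS = auto()


STATE_TO_LEVEL = {
    OperationalState.ATTENTIVE: DrivingLevel.USER_CONTROLLED,
    OperationalState.DISTRACTED: DrivingLevel.PARTIAL_AUTONOMOUS,
    OperationalState.UNRESPONSIVE: DrivingLevel.FULL_AUTONOMOUS,
}


def level_for(state: OperationalState) -> DrivingLevel:
    return STATE_TO_LEVEL[state]


if __name__ == "__main__":
    print(level_for(OperationalState.DISTRACTED).name)   # PARTIAL_AUTONOMOUS
```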

FIG. 1 illustrates an autonomous vehicle interface 10 for a vehicle. The autonomous vehicle interface 10 can interface with a vehicle cockpit block 12 to communicate with components and electronics in the vehicle cockpit. The autonomous vehicle interface 10 could also interface with the vehicle's engine control block 14, which allows it to control and communicate with the engine and steering of the vehicle in response to autonomous controlled driving.

The component interface 16 of the autonomous vehicle interface 10 can be connected to one or more components within the vehicle cockpit block 18. The component interface 16 can be used to receive or detect data about active and inactive components within the vehicle cockpit block 12. These components could include an entertainment system or display, a global positioning system (GPS), a communications module, and others. Active components can be any components that are being used by the user or that are turned ON; a radio playing, GPS directions being displayed, or a phone call routed through the vehicle could be examples of active components. Inactive components are components that are not being used at the time.
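
A component interface of this kind mainly needs to report which cockpit components are currently in use. One hedged way to model it is a small registry of component states that the activation interface can poll; the component names come from the paragraph above, while the class and method names are assumptions.

```python
# Sketch of a component interface that tracks which cockpit components are
# active (radio playing, GPS navigating, phone call routed through the
# vehicle) and which are inactive. Structure and method names are assumed.
class ComponentInterface:
    def __init__(self) -> None:
        # False means the component is inactive (not in use or turned OFF).
        self._components = {"radio": False, "gps": False, "phone": False}

    def set_active(self, name: str, active: bool) -> None:
        self._components[name] = active

    def active_components(self) -> list[str]:
        return [name for name, active in self._components.items() if active]


if __name__ == "__main__":
    cockpit = ComponentInterface()
    cockpit.set_active("radio", True)    # the radio is playing
    cockpit.set_active("phone", True)    # a call is routed through the vehicle
    print(cockpit.active_components())   # ['radio', 'phone']
```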

A gaze tracking interface 20 may be included in the autonomous vehicle interface 10. The gaze tracking interface 20 could interface with a gaze tracking block 22 in the vehicle cockpit block 12. The gaze tracking interface 20 can include sensors, cameras, or any combination thereof to detect the user's head or eye position while driving. The sensors and cameras may be located above the steering wheel in the vehicle cockpit, or may be mounted on the steering wheel or on a section of the dashboard behind it. The gaze tracking interface 20 can be configured to detect the user's gaze relative to the road, specifically the position of the user's head or eyes relative to the road. The gaze tracking interface 20 can also detect whether the user is focusing on components in the vehicle cockpit rather than on the road; the cameras and/or sensors could detect whether the user's gaze is on another component or a mobile device within the vehicle.
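
The gaze tracking interface reduces camera and sensor output to a judgment about where the user is looking. A minimal sketch of that reduction is shown below, assuming head and eye angles relative to the road have already been estimated upstream; the angle thresholds are arbitrary placeholders.

```python
# Sketch: classify the driver's gaze from estimated head/eye angles relative
# to the road ahead. The thresholds and the eyes_closed flag are placeholder
# assumptions; a real system would use calibrated tracker output.
from enum import Enum, auto


class Gaze(Enum):
    ON_ROAD = auto()
    OFF_ROAD = auto()      # e.g. looking at a cockpit component or phone
    EYES_CLOSED = auto()


def classify_gaze(yaw_deg: float, pitch_deg: float, eyes_closed: bool) -> Gaze:
    if eyes_closed:
        return Gaze.EYES_CLOSED
    # A small cone around straight ahead counts as "gaze on the road".
    if abs(yaw_deg) <= 20.0 and abs(pitch_deg) <= 15.0:
        return Gaze.ON_ROAD
    return Gaze.OFF_ROAD


if __name__ == "__main__":
    print(classify_gaze(5.0, -3.0, eyes_closed=False).name)    # ON_ROAD
    print(classify_gaze(45.0, -20.0, eyes_closed=False).name)  # OFF_ROAD
```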

The autonomous vehicle interface 10 may also include a lane detection interface 24, which may interface with the vehicle cockpit block 14. The lane detection interface 24 can include one or more cameras, sensors, or a combination of both. To detect the vehicle's location within a given lane, the sensors and cameras may be placed on or under the vehicle's bumper or on or under the vehicle's sides. The lane detection interface 24 can be configured to detect one or more lane edges and the vehicle's position relative to those lane edges, and may be configured to detect the yellow or white lines associated with the lane. If the lane detection interface 24 fails to detect the lane edges on both sides of the vehicle, the lane detection interface 24 can generate data indicating that the vehicle has crossed a portion of the lane.

The lane detection interface 24 can also be configured to detect objects within a specified range of, or close to, the vehicle, and may be programmed to notify the user when such an object is detected. The lane detection interface 24 can generate data indicating that an object is too near or proximate to the vehicle, detecting when the vehicle is too close to, or in danger of colliding with, another vehicle or object. The lane detection interface 24 can also produce feedback to alert the driver in that case. The feedback can be visual, audio, vibration, or a combination thereof.
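
The two jobs of the lane detection interface described above, reporting a crossed lane edge and reporting an object that is too close, both come down to comparing a reading against a boundary. Below is a hedged sketch that assumes per-side edge-visibility flags and a forward range measurement are available; the data layout and threshold are assumptions.

```python
# Sketch of the lane detection interface's two checks: has the vehicle
# crossed a lane edge, and is an object dangerously close? The reading
# fields and the distance threshold are assumptions for this sketch.
from dataclasses import dataclass


@dataclass
class LaneReading:
    left_edge_visible: bool
    right_edge_visible: bool
    forward_gap_m: float      # distance to the nearest object ahead, meters


def lane_crossed(reading: LaneReading) -> bool:
    # If either edge is no longer detected where it should be, treat the
    # vehicle as crossing (or having crossed) a portion of the lane.
    return not (reading.left_edge_visible and reading.right_edge_visible)


def object_too_close(reading: LaneReading, min_gap_m: float = 10.0) -> bool:
    return reading.forward_gap_m < min_gap_m


if __name__ == "__main__":
    drifting = LaneReading(left_edge_visible=False, right_edge_visible=True,
                           forward_gap_m=25.0)
    print(lane_crossed(drifting))       # True: may trigger autonomous mode
    print(object_too_close(drifting))   # False
```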

The autonomous vehicle interface 10 may also include an autonomous activation interface 28. The component interface 16, the gaze tracking interface 20, and the lane detection interface 24 may all be connected to the autonomous activation interface 28. The engine control unit 30 of the vehicle engine control block may also be in communication with the autonomous activation interface 28. The engine control unit 30 is connected to, and controls, the engine 32 and steering unit 34 of the vehicle. The autonomous activation interface 28 can activate or deactivate autonomous controlled driving based on the user's operational state and the environmental conditions. The level of autonomous controlled driving may be a partial autonomous driving mode or a full autonomous driving mode; a partial mode could include control of the steering unit and the vehicle's speed, and there may be multiple levels of autonomous controlled driving. The user's operational state reflects their level of distraction, which could range from focusing on the road to falling asleep or even being completely unconscious. The autonomous activation interface 28 can activate or modify the mode of autonomous controlled driving in response to determining the user's level of distraction, and may activate user-controlled driving in response to determining that the user is aware or that the relevant environmental conditions are no longer present.

The autonomous activation interface 28 may receive data from any of the component interface 16, the gaze tracking interface 20, and the lane detection interface 24 to determine whether the driver is attentive or distracted and how the vehicle is performing in the surrounding environment. Data may be sent periodically or continuously to update or modify the determined operational state. Based on the data received, the autonomous activation interface 28 may also be configured to determine the user's level of distraction, and a level of autonomous driving may be linked to the user's level of distraction. The autonomous vehicle interface 10 can determine whether the user is distracted by one or more active components, and may also consider whether the user's gaze is fixed on a component of the vehicle or on another device. If the user is found to be distracted, control data representing activation of autonomous driving may be generated and transmitted to the engine control unit 30 for processing, activating and controlling the steering system and engine. Once the user is no longer distracted, control data representing deactivation of autonomous driving and activation of user-controlled driving can be generated.
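
Combining the three data sources, the activation decision described above can be sketched as a small rule: count the distraction conditions, pick a driving level, and emit control data for the engine control unit. Everything in the sketch below (the condition counting rule, the level names, the control-data payload) is an illustrative assumption rather than the disclosed logic itself.

```python
# Sketch of the activation decision: combine component, gaze, and lane data
# into a driving level and build control data for the engine control unit.
# The counting rule, level names, and payload format are all assumptions.
def choose_driving_level(active_components: list[str],
                         gaze_on_road: bool,
                         eyes_closed: bool,
                         lane_crossed: bool) -> str:
    if eyes_closed:
        return "full_autonomous"
    conditions = 0
    conditions += 1 if active_components else 0
    conditions += 0 if gaze_on_road else 1
    conditions += 1 if lane_crossed else 0
    if conditions >= 2:
        return "full_autonomous"
    if conditions == 1:
        return "partial_autonomous"
    return "user_controlled"


def control_data(level: str) -> dict:
    # Minimal payload handed to the engine control unit; a real payload
    # would describe which units (engine, steering) to drive.
    return {"mode": level}


if __name__ == "__main__":
    level = choose_driving_level(["radio"], gaze_on_road=True,
                                 eyes_closed=False, lane_crossed=False)
    print(level, control_data(level))   # partial_autonomous {'mode': 'partial_autonomous'}
```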

In another aspect, the autonomous activation interface 28 could be programmed to store and learn the user's operational state or habits while the vehicle is being operated. Based on the user's habits and operational state, the autonomous activation interface 28 can create a profile, and based on the profile it can be configured to activate autonomous controlled driving. If the user tends to fall asleep while driving during a certain time period, for example, the autonomous activation interface 28 can store and create a profile that indicates this and may then activate full autonomous controlled driving during that time period.
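
The profile described here only needs to remember when a given behavior, such as falling asleep, tends to occur, so that the interface can raise the autonomy level during that window. A hedged sketch of such a profile store follows; the storage layout and the two-observation rule are assumptions.

```python
# Sketch of a per-user profile that records observed operational states by
# hour of day and suggests a driving level for the current hour. The layout
# and the "two observations" rule are assumptions made for illustration.
from collections import defaultdict


class UserProfile:
    def __init__(self) -> None:
        # hour of day (0-23) -> list of observed states, e.g. "asleep"
        self._observations = defaultdict(list)

    def record(self, hour: int, state: str) -> None:
        self._observations[hour].append(state)

    def suggested_level(self, hour: int) -> str:
        # If the user has repeatedly been asleep at this hour, start the
        # drive in full autonomous control for that period.
        if self._observations[hour].count("asleep") >= 2:
            return "full_autonomous"
        return "user_controlled"


if __name__ == "__main__":
    profile = UserProfile()
    profile.record(23, "asleep")
    profile.record(23, "asleep")
    print(profile.suggested_level(23))   # full_autonomous
    print(profile.suggested_level(9))    # user_controlled
```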

FIG. 2 illustrates an example of the gaze tracking interface 20 of FIG. 1. Particularly, FIG. 2 shows a vehicle moving along a road and the interior of the vehicle's cockpit 40. A gaze tracking interface 20 may be installed in the vehicle cockpit 40 and could include a tracking device 42 that has one or more sensors and cameras. The tracking device 42 could be located on a portion 44 of the vehicle cockpit, such as behind the steering wheel 46 or on the dashboard 48, and can also be located above or below the steering wheel 46. As shown in FIG. 2, the gaze tracking interface 20 can be configured to detect the user's head or eye position relative to their gaze towards the road. The gaze tracking interface 20 transmits data representing the user's eye or head position relative to the user's gaze towards the road to the autonomous activation interface 28, where this information is used to help determine the user's operational state.

FIGS. 3A-3C illustrate an example of the lane detection interface 24 of FIG. 1 as a vehicle 50 travels along a road 52. The autonomous vehicle interface 10 could include a lane detection interface 24 with one or several cameras or sensors 54 along the vehicle's front bumper 56 or sides 58. The cameras and/or sensors 54 detect the edges 60 of the lane 62 as the vehicle 50 moves along the road 52.

If the user is distracted or in a disoriented state, the vehicle 50 might unintentionally cross the edge 60 of the lane 62 into another lane 64, as shown in FIG. 3B, and the cameras or sensors 54 may not detect the edge 60 of the lane 62 on one or both sides of the vehicle 50. Alternately, the sensors and cameras may detect the edge 60 of the lane 62 as the sensor and/or camera passes over the edge 60. In response, the lane detection interface 24 can trigger the autonomous activation interface 28 to activate full autonomous controlled driving, or to switch from the user-controlled mode to full autonomous controlled driving, as shown in FIG. 3C. Once the lane detection interface 24 again detects the edges 60 of the vehicle's lane 62 and the autonomous activation interface 28 determines that the user is no longer distracted and is conscious, the autonomous activation interface 28 may disable autonomous controlled driving and the vehicle 50 can return to full user-controlled driving.

The lane detection interface 24 may also use the cameras and sensors 54 to detect objects and determine whether the vehicle 50 is too near or proximate to another vehicle or object on the road.

While FIGS. 3A-3C show one example of a transition from user-controlled to autonomous controlled driving, there are many other examples of switching between levels of autonomous controlled driving. One example is activating semi- or partial autonomous controlled driving to drive the steering unit when the autonomous vehicle interface detects that the vehicle is too close to another vehicle or that objects are nearby. Partial autonomous controlled driving can also be activated when the driver looks away from the road, for example to look at a mobile phone while driving. Another example is activating full autonomous controlled driving when the user falls asleep at the wheel.

FIGS. 4A and 4B illustrate another example of the aspects described in FIG. 1. FIGS. 2, 3A-3C, and 4A-4B are just a few examples; many other embodiments that employ the core concepts of FIG. 1 can also be realized in accordance with the aspects described herein.

Referring to FIGS. 4A and 4B, a vehicle 400 includes an autonomous vehicle interface 10 coupled with a vehicle engine control block 14. These components can be configured to work as described above. The transition between FIG. 4A and FIG. 4B shows the user's eyes 420 switching from open to closed.

Accordingly, using the aspects disclosed herein, the autonomous vehicle interface 10 may be configured with, or coupled to, a device capable of detecting this transition. The autonomous vehicle interface 10 can detect the transition and switch the vehicle 400 to autonomous control via the vehicle engine control block 14.

FIG. 5 illustrates a method for operating a vehicle with an autonomous vehicle interface, such as the autonomous vehicle interface described in FIGS. 1-4B. The autonomous vehicle interface could include a component interface as well as gaze tracking and lane detection interfaces. The autonomous vehicle interface can be configured to control the vehicle via autonomous controlled driving, switching from user-controlled to autonomous controlled driving depending on the user's operational state. The autonomous vehicle interface can also be configured to deactivate the autonomous control, switching from autonomous controlled to user-controlled driving, according to the user's operational state.

The method could include monitoring the user's operational state while driving (operation 100). The user's operational state may be determined by their level of distraction or the surrounding environment, which may be assessed by looking at one or more conditions. The autonomous activation interface might be programmed to correlate the number of conditions experienced by the user, or the level of distraction, with the level of autonomous controlled driving used to operate the vehicle. These conditions could include the surrounding environment, whether any components are active in the vehicle, whether the driver's gaze is focused on the road, and whether the vehicle is drifting out of its lane.

In operation, the component interface may be configured to detect and receive data about active and inactive components within the vehicle, and can generate and transmit data indicating the active and inactive components to the autonomous activation interface. The gaze tracking interface can detect the position of the user's eyes or head relative to the road and may transmit data indicating that position to the autonomous activation interface. The lane detection module may determine the vehicle's position within a lane by detecting one or more lane edges and determining the vehicle's position relative to those edges, and may generate and transmit data indicating the vehicle's location relative to the lane edges. The autonomous activation interface may receive and process this data and then determine the level of distraction the user is experiencing based on one or more of the conditions.

Once the user's operational state is detected, the autonomous activation interface can generate control data indicating the user's operational state (operation 102). If the user is in a distracted or disoriented state, the autonomous activation interface may generate control data indicating that state; if the user is aware, the autonomous activation interface will generate control data indicating that the user is awake and not distracted. To activate or change the vehicle's mode of operation, the control data can be transmitted by the autonomous activation interface to the engine control unit (operation 104). The mode of operation can then be changed by the engine control unit and the autonomous vehicle interface without the need for user input.

After the vehicle's mode has been changed to autonomous controlled driving, the autonomous vehicle interface may continue to detect the user's operational state while the vehicle is in autonomous controlled driving (operation 108). The vehicle will continue to operate in autonomous controlled driving if the user remains distracted. If the user is detected as being conscious and attentive, the autonomous activation interface can generate control data indicating that the user is aware and transmit that control data (operation 110) to activate or change the mode of operation from autonomous controlled driving to user-controlled driving (operation 112).
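
Taken together, operations 100 through 112 form a monitoring loop: detect the operational state, generate control data, send it to the engine control unit, and keep watching so control can be handed back when the user is attentive again. The sketch below is one possible rendering of that loop, with the operation numbers kept as comments; the helper callables are assumed stand-ins for the interfaces described above.

```python
# Sketch of the method of FIG. 5 as a monitoring loop. Operation numbers
# from the description are kept as comments; the callables are assumed
# stand-ins for the interfaces described above.
def monitoring_loop(read_state, send_to_ecu, steps: int = 3) -> None:
    mode = "user_controlled"
    for _ in range(steps):
        state = read_state()                 # operation 100 (108 while autonomous)
        control = {"state": state, "mode": mode}          # operation 102
        if state in ("distracted", "asleep") and mode == "user_controlled":
            mode = control["mode"] = "autonomous"
        elif state == "attentive" and mode == "autonomous":
            mode = control["mode"] = "user_controlled"    # operation 112
        send_to_ecu(control)                 # operation 104 (or 110)


if __name__ == "__main__":
    states = iter(["attentive", "asleep", "attentive"])
    monitoring_loop(read_state=lambda: next(states), send_to_ecu=print)
```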

FIGS. 6 and 7 illustrate methods 600 and 700 for operating a system such as that shown in FIG. 1. These methods 600 and 700 serve as examples and are not intended to be a complete description of all possible methods or operations.

In operation 610, an environmental condition might be detected. The environmental condition could be, for example, inclement weather, or a condition in which the road surface could be dangerous (e.g., wet or icy). If the condition is determined to be unsafe, the vehicle is switched to a user-controlled mode (or, in another example, to an autonomous mode, depending on how the system is configured) in operations 620 to 630. If the condition is safe, the vehicle is switched to an autonomous mode (operations 620 to 625), or vice versa, depending on the implementation.
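
Method 600 reduces to a single branch on whether the detected environmental condition is judged safe, with the mode chosen on each branch left to the implementation, as the paragraph notes. A hedged sketch follows; the condition names and default target are assumptions.

```python
# Sketch of method 600: detect an environmental condition and branch on
# whether it is safe. Which branch selects which mode is configurable, as
# noted above; the condition names and default target are assumptions.
UNSAFE_CONDITIONS = {"heavy_rain", "snow", "ice", "wet_road"}


def method_600(condition: str, unsafe_target: str = "user_controlled") -> str:
    # operation 610: the condition has already been detected and is passed in
    if condition in UNSAFE_CONDITIONS:            # operation 620: unsafe?
        return unsafe_target                      # operation 630
    return ("autonomous" if unsafe_target == "user_controlled"
            else "user_controlled")               # operation 625


if __name__ == "__main__":
    print(method_600("ice"))      # user_controlled (default configuration)
    print(method_600("clear"))    # autonomous
```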

Method 700 is another example implementation of the features described herein. Operation 710 determines whether the vehicle is in a user-controlled mode. If the vehicle is not in a user-controlled mode, a wait 715 takes place for a predetermined period of time (set by the implementer of method 700) before operation 710 is performed again. In another example implementation, operation 715 can be replaced with continuous monitoring of the state.

In operation 720, a lane sensor is used and a determination is made as to whether a lane was crossed (see, for instance, FIGS. 3A-C). In operation 730, if the lane was crossed, the vehicle switches to an autonomous mode.
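
Method 700 is essentially a polling loop: confirm the vehicle is in the user-controlled mode (waiting otherwise), read the lane sensor, and switch to the autonomous mode when a lane edge has been crossed. The sketch below assumes a wait period, a polling bound, and callable sensor hooks, none of which are specified in the description.

```python
# Sketch of method 700 as a polling loop. The wait period, the polling
# bound, and the callable hooks are assumptions added for this sketch.
import time


def method_700(in_user_mode, lane_crossed, switch_to_autonomous,
               wait_s: float = 0.5, max_polls: int = 10) -> None:
    for _ in range(max_polls):
        if not in_user_mode():           # operation 710
            time.sleep(wait_s)           # operation 715: wait, then re-check
            continue
        if lane_crossed():               # operation 720: read the lane sensor
            switch_to_autonomous()       # operation 730
            return


if __name__ == "__main__":
    method_700(in_user_mode=lambda: True,
               lane_crossed=lambda: True,
               switch_to_autonomous=lambda: print("switched to autonomous"),
               wait_s=0.0)
```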

Some of the devices in FIG. 1 may be implemented using, or with the aid of, a computer system. The computing system includes a processor (CPU) and a system bus that couples various components of the system to the processor, including system memory such as random access memory (RAM) and read only memory (ROM). Other system memory may also be available for use. The computing system may include more than one processor, or a group or cluster of computing systems networked together to provide greater processing capability. The system bus may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. A basic input/output system (BIOS) stored in the ROM or the like may provide basic routines that help to transfer information between elements within the computing system, such as during startup. The computing system also includes data stores, which maintain a database according to known database management systems. The data stores may take many forms, such as a hard disk drive, a magnetic disk drive, an optical disk drive, a tape drive, or another type of computer-readable medium that can store data. The data stores may be connected to the system bus by a drive interface. The data stores provide nonvolatile storage of computer-readable instructions, data structures, program modules, and other data for the computing system.

To enable human (and in some instances, machine) interaction, the computing system can include an input device, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, a keyboard, a mouse, or motion input. An output device can include one or more of a number of output mechanisms. In some instances, multimodal systems enable a user to provide multiple types of input to communicate with the computing system. A communications interface generally enables the computing device to communicate with other computing devices using various network protocols and communication channels.

The disclosure above refers to a number of flow charts and accompanying descriptions to illustrate the embodiments represented in FIGS. 5, 6, and 7. The disclosed components, devices, and systems may use or implement any suitable technique for performing the tasks illustrated in these figures. Thus, FIGS. 5, 6, and 7 are for illustration purposes only, and the described or similar steps may be performed at any appropriate time, including concurrently, individually, or in combination. Many of these steps may also be performed simultaneously or in orders or sequences different from those shown. Moreover, the disclosed systems may use methods and processes with additional, fewer, or different steps.

Embodiments described herein can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed herein and their equivalents. Some embodiments can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a tangible computer storage medium for execution by one or more processors. A computer storage medium can be, or can be included in, a computer-readable storage device, a computer-readable storage substrate, or a random or serial access memory. The computer storage medium can also be, or can be included in, one or more separate tangible components or media, such as multiple CDs, disks, or other storage devices. The computer storage medium does not include a transitory signal.

As used herein, the term processor refers to all types of apparatus, devices, and machines that process data, including, for example, a programmable processor, a computer, or multiple processors or computers, or a combination of any of the above. The processor can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). A processor can also include, in addition to hardware, code that creates an execution environment for the computer program in question, such as code that constitutes processor firmware, a protocol stack, an operating system, a cross-platform runtime environment, or a combination thereof.

A computer program (which may also be called a module, engine, program, software, software application, script, or code) can be written in any form of programming language, whether compiled or interpreted, and can be deployed as a standalone program or as a module, component, subroutine, object, or any other form suitable for use within a computing environment. A computer program can, but need not, correspond to a file in a file system. A program can be stored in a part of a file that contains other programs or data (such as one or more scripts in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that contain one or more modules, subprograms, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers located at the same site.

The disclosed embodiments can use interactive displays to allow interaction with individuals. Interactive features may include pop-up or pull-down menus or lists, selection tabs or features, scannable features, and other features that can receive human input.

The computing system described herein can include both clients and servers. A client and server are generally remote from each other and typically interact through a communication network; the client-server relationship arises by virtue of computer programs running on the respective computers. A server may transmit data (e.g., an HTML page) to a client device in order to display data to, and receive input from, a user interacting with the client device. The server can in turn receive data generated at the client device (e.g., a result of the user interaction).
