Autonomous Vehicles – John-Michael McNew, Kazutoshi Ebe, Danil V. Prokhorov, Michael J. Delp, Toyota Motor Corp

Abstract for “Apparatus & method for transitioning between driving states during navigation of highly automated vehicle”

A navigation apparatus for an autonomous vehicle includes circuitry configured to receive at least one route between a starting location and a destination, display the at least one route on a first screen that allows selection of a first set of routes from the at least one route, and receive a plurality of characteristics corresponding to each route. Each characteristic is associated with a measure and a longest block within which the characteristic can be performed continuously. The circuitry is further configured to divide a route of the at least one route to create a plurality of segments based on the plurality of characteristics. The first set of routes, the plurality of characteristics corresponding to the first set of routes, and the measure and longest block corresponding to the plurality of characteristics are displayed on a second screen.

Background for “Apparatus & method for transitioning between driving states during navigation of highly automated vehicle”

Field of the Disclosure

This disclosure relates to improvements in highly automated or autonomous vehicles. More specifically, the present disclosure relates to applying driver preferences or historical driving performance to determine an optimal route for highly automated or autonomous vehicles.

Description of Related Art

A conventional navigation system allows a driver to input a destination address. The navigation system then determines directions to that destination using the map available to it. To determine the current vehicle position, the conventional navigation system has a global positioning sensor (GPS). The conventional navigation system uses audible or visual instructions to guide the driver to the destination based on the vehicle’s current position. Because there are many routes between the current position and the destination, the conventional navigation system chooses which route to give the driver. The route is often chosen by taking into account factors such as travel distance and travel time that are important for non-autonomous driving. Some navigation systems incorporate traffic and road event information, such as repairs or accidents, into the directions, which allows the driver to choose a less congested route.

Although they are useful and appropriate for non-autonomous vehicles, conventional navigation systems are not adaptable and do not account for factors that might be relevant for autonomous driving. For example, conventional navigation system routing may not be able to account for hands-free time or sleep time, which are some of the advantages of an autonomous vehicle over a non-autonomous one, during route selection and determination.

Autonomous vehicles are the next generation of automobiles, with a higher level of functionality and improved vehicle performance. Autonomous vehicles not only improve driving performance and overall vehicle efficiency, but also allow drivers, in certain circumstances, to sleep, take their hands off the wheel, and take their eyes off the road. Smart components, including smart sensors and communication with other vehicles, are required to enable these increased capabilities. A more capable routing or navigation system is also essential to make full use of the vehicle's greater capabilities. Conventional navigation systems are limited and restrictive in this respect, so there is a continuing need for improved navigation systems for autonomous vehicles.

A navigation apparatus is described according to an embodiment of the present disclosure. A navigation apparatus for an autonomous vehicle includes circuitry that can receive, via a network, at least one route between a starting location and a destination, display the at least one route on a first screen that allows selection of a first set of routes, and receive a plurality of characteristics corresponding to each route. Each characteristic associated with a route is associated with a measure and a longest block within which the characteristic can be performed continuously. The circuitry can also be configured to divide a route of the at least one route to create a plurality of segments based on the plurality of characteristics. A first segment is highlighted using a first identifier that corresponds to a first characteristic of the plurality of characteristics, and a second segment is highlighted using a second identifier that corresponds to a second characteristic of the plurality of characteristics. Finally, the circuitry displays, on a second screen, the first set of routes, the plurality of characteristics corresponding to the first set of routes, and the measure and longest block corresponding to the plurality of characteristics.

A method of navigation for an autonomous vehicle is also provided according to an embodiment of the present disclosure. The method involves receiving, via a network, at least one route between a starting location and a destination, and displaying the at least one route on a first screen that allows selection of a first set of routes. Each characteristic of the plurality of characteristics is associated with a measure and a longest block within which the characteristic can be performed continuously. The method also involves dividing, using processing circuitry, a route of the at least one route to create a plurality of segments based on the plurality of characteristics. A first segment is highlighted using a first identifier that corresponds to a first characteristic of the plurality of characteristics, and a second segment is highlighted using a second identifier that corresponds to a second characteristic of the plurality of characteristics. Finally, the processing circuitry displays, on a second screen, the first set of routes, the plurality of characteristics corresponding to the first set of routes, and the measure and longest block corresponding to the plurality of characteristics.

Further, according to an embodiment of the present disclosure, there is provided a non-transitory computer-readable medium storing a program that, when executed by a computer, causes the computer to perform the method of navigation for an autonomous vehicle discussed above.

The above general description of the illustrated implementations and the detailed description thereof are only exemplary aspects of the present disclosure and are not intended to be restrictive.

The following description, in conjunction with the attached drawings, is intended to describe various embodiments of the disclosed subject matter. It is not intended to represent the only embodiments. In certain cases, the description includes specific details to provide an understanding of the disclosed embodiments. It will be apparent to those skilled in the art that the disclosed embodiments may be practiced without these specific details. In some instances, well-known structures and components may be shown in block diagram form in order to avoid obscuring the concepts of the disclosed subject matter.

Reference throughout the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic is included in at least one embodiment. The appearances of the phrase "in one embodiment" or "in an embodiment" in various places in the specification do not necessarily all refer to the same embodiment. In one or more embodiments, particular features, structures, or characteristics can be combined in any suitable manner. It is also intended that embodiments of the disclosed subject matter cover modifications and variations thereof.

It must be noted that, as used in the specification and the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise; that is, unless clearly specified otherwise, the words "a," "an," "the," and the like carry the meaning of "one or more." It should also be understood that terms such as "left" and "right," and other terms that may be used herein, are merely examples and do not limit the scope of the embodiments of this disclosure to any particular configuration or orientation. Additionally, terms such as "first," "second," "third," and the like merely identify one of a number of portions, components, steps, operations, or functions described herein.

Furthermore, the terms "approximately," "proximate," "minor," and similar terms generally refer to ranges that include the identified value within a margin of 20%, 10%, or preferably 5% in certain embodiments, and any values therebetween.

FIG. 1 illustrates a navigation apparatus for an autonomous vehicle in accordance with an exemplary embodiment. The navigation apparatus 10 is part of the autonomous vehicle, which also includes various electronic and mechanical parts. Although certain aspects of this disclosure are particularly useful for specific types of vehicles, the vehicle may be any vehicle, including cars, trucks, motorcycles, buses, boats, planes, helicopters, or lawnmowers. The autonomous vehicle can be a fully autonomous vehicle or a semi-autonomous vehicle, and can also include vehicles with advanced driver assistance systems (ADAS) such as adaptive cruise control and lane departure alert. The autonomous vehicle can include the following features: a steering device such as a steering wheel 101; a navigation apparatus 10 with a touch screen 102; and a driving mode selection system such as a mode shifter 115. Various input devices in the vehicle, including the mode shifter 115, the touch screen 102, or button inputs 103, allow a user to activate or deactivate one or more autonomous driving modes and enable a driver 110 (or a passenger) to give information, such as a navigation destination, to the navigation apparatus 10. In one embodiment, the navigation apparatus 10 is integrated into the dashboard of the autonomous vehicle. Alternatively, the navigation apparatus 10 may be attached to the dashboard or the windshield of the autonomous vehicle.

The autonomous vehicle may optionally have more than one display. For example, the vehicle can include a second display 113 to show information about the status of the autonomous vehicle, navigation status information from the navigation apparatus 10, and other vehicle status information from a computer such as an ECU (electronic control unit) installed in the vehicle. In the current example, the second display 113 displays a driving mode "D" and a speed "20," indicating that the vehicle is currently in drive mode and moving at 20 mph. If the driver switches to autonomous mode, the second display 113 can show the driving mode as "AV," indicating that the vehicle is now in autonomous mode. Other modes include a hands-free mode "HF," an eyes-off-road mode "EOR," and an autopilot mode.

In one embodiment, the navigation apparatus 10 can communicate with other components of the vehicle, such as the vehicle's conventional ECU (not illustrated), and can send information to or receive information from different systems of the autonomous vehicle, such as a brake pedal 107, an accelerator 109, and the steering wheel 105, to control the vehicle's movement, speed, etc.

The autonomous vehicle may also include a GPS receiver that determines the device's latitude and longitude. An accelerometer or other direction/speed detection devices can be used in conjunction with the GPS receiver to determine the vehicle's speed and changes in direction. The vehicle may also have components for detecting objects and conditions external to the vehicle, such as other vehicles, obstructions, traffic signals, signs, trees, etc. The detection system may include lasers, sonar, radar detection units (such as those used for adaptive cruise control), cameras, or any other detection devices that record data and send signals to the ECU. The autonomous vehicle may also be equipped with a DSRC sensor and an AV penetration sensor, which allow detection of autonomous vehicles within their range and enable communication with them. The AV penetration sensor may include LIDAR (light detection and ranging), which provides distance or range information, and a stereo camera that allows object recognition.

The vehicle can use these sensors to understand and potentially respond to its surroundings in order to ensure safety for passengers as well as objects or people in the surroundings. The vehicle types, the number and types of sensors, the sensor locations, and the sensor fields described are merely examples; other configurations are also possible.

In one embodiment, input can also be received from sensors found on typical non-autonomous vehicles. These could include tire pressure sensors, engine temperature sensors, brake heat sensors, brake pad status sensors, fuel quality sensors, oil level sensors, oil quality sensors, and air quality sensors.

The server 20, the ECU, or the navigation apparatus 10 can transmit information to or receive information from other computers. For example, a map stored on the server 20 can be transferred to the navigation apparatus 10, and sensor data from the autonomous vehicle can be sent to the server 20 for processing. Data from other vehicles can also be stored in the database of the server 20 and used for navigation. The sensor information and the navigation-related functions (described with reference to FIGS. 2A-2C and 3-5) can be implemented using a centralized architecture, in which they are processed by the server 20. Alternatively, a distributed architecture can be used, in which the navigation-related functions are processed partly by the server 20 and partly by the navigation apparatus 10.

The navigation apparatus 10 provides a human-machine interface (HMI) that allows the user to view and select different routes between a starting location and a destination. The characteristics of a route are the states that the driver 110 can be in while the autonomous vehicle moves, as well as other factors specific to the route. The characteristics of a route can include driver states such as hands-free, eyes-off-road, sleeping, reclining, forward seating, manual, and eyes needing to be on a dedicated display, as well as safety, travel time, travel distance, etc. A clonable screen refers to a display that the driver can use to copy the route onto an external device, such as a smartphone or tablet. Because the external device can be placed high on the dashboard, more is known about the timing of reengagement once a warning has been issued, and there may be areas where eyes on the external device are allowed even though a general eyes-off-road state is not. The HMI can include a software program, executed on the controller of the navigation apparatus 10, that allows multiple screens to be displayed and that accepts user inputs and presents results. Three screens are described in various embodiments of this disclosure to illustrate various aspects of the disclosure.
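As a concrete illustration only, the driver states and route characteristics described above might be represented in software along the lines of the following Python sketch. The names used here (DriverState, RouteCharacteristic, Route) are assumptions for illustration and are not part of the disclosure.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List


class DriverState(Enum):
    """Driver states that a route characteristic may permit, per the description above."""
    MANUAL = auto()
    HANDS_FREE = auto()
    EYES_OFF_ROAD = auto()
    SLEEPING = auto()
    RECLINING = auto()
    SEAT_FORWARD = auto()


@dataclass
class RouteCharacteristic:
    """One characteristic (attribute) of a route, e.g. A1-A4 in FIG. 2B."""
    state: DriverState
    measure: float        # total time (minutes) or distance (miles) the state is allowed
    longest_block: float  # longest continuous stretch of the state, in the same units


@dataclass
class Route:
    """A candidate route with its characteristics, as received from the server."""
    name: str
    travel_time_min: float
    travel_distance_mi: float
    characteristics: List[RouteCharacteristic] = field(default_factory=list)


# Example: attribute A3 (sleeping) of route R1 from FIG. 2B has measure 10 and longest block 4.
r1_sleep = RouteCharacteristic(DriverState.SLEEPING, measure=10, longest_block=4)
```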

The navigation apparatus 10 may also determine route characteristics that allow the autonomous vehicle to operate in a teammate mode. In the teammate mode, the autonomous vehicle can perform certain operations, such as steering, acceleration, and so on, with no driver input along sections of the route that involve winding roads, severe weather conditions, driving for pleasure, or harsh traffic conditions. Along other sections, control of the autonomous vehicle can be handed over to the driver.

FIG. 2A shows a first screen 200 of the navigation apparatus 10 according to an exemplary embodiment. The first screen 200 displays at least one route button, such as the route buttons 201, 202, 203, and 204 that correspond to routes R1, R2, R3, and R4, respectively. The buttons 201-204 can display a summary of each route using parameters such as travel time, hands-free time, scenery, and a map showing the route. The routes R1-R4 can be determined by the server 20, which implements routing algorithms, route optimization algorithms, and other navigation algorithms appropriate for the autonomous vehicle. The first screen 200 may also include a settings button 215, which allows the user to select route preferences, choose the summary parameters displayed on the buttons 201-204, store current routes, add favorites, display messages at specific points of interest, and so on. The server 20 can also take these settings into consideration when determining the routes.

To assist the user in selecting a route, the route buttons 201-204 can be activated. In one embodiment, activating a button causes a second screen 300 (further discussed with reference to FIG. 2B) to display the characteristics of each selected route, allowing comparison between different routes. Alternatively, or in addition, the user can select one or more routes (e.g., one or more of the routes R1-R4) based on the summary characteristics of the routes. In certain embodiments, a route such as R1 can be activated by double-tapping the button 201 or by pressing and holding the button 201.

FIG. 2B shows a second screen 300 of the navigation apparatus 10 according to an exemplary embodiment. The second screen 300 shows details about the selected routes (e.g., R1, R2, and R3). The details can include an overview of each route as well as the routes' characteristics and the measures associated with them. The characteristics of the selected routes can also be indicated on the map as visual guidance for the driver 110. Non-limiting examples of route characteristics include hands-free, eyes-off-road, sleeping, reclining, forward sitting, manual, safety, and travel time.

In FIG. 2B, each of the attributes A1, A2, A3, and A4 is associated with a respective measure M1, M2, M3, and M4 for the route R1. The measure M1 can refer to time in minutes, while, for example, the measure M2 can refer to distance in miles. The longest blocks LB1, LB2, LB3, and LB4 of a route (e.g., the route R1), within which the attributes A1, A2, A3, and A4 can be performed continuously, can also be displayed. If a route is divided into several segments, the longest block is the longest segment within which an attribute can be performed continuously. The attribute A1 for the route R1 can be, for example, the driver's sleeping or hands-free state, measured in time or distance. The measure M1 corresponding to the attribute A1 can be 22 minutes (or 22 miles), and the length of the longest block LB1 in which the driver can occupy that state can be 5 minutes (or 5 miles), which corresponds to about 22% of M1. The attribute A2 can represent the eyes-off-road state of the driver 110 in terms of time or distance; its measure M2 can be 20 minutes (or 20 miles), and the length of the longest block LB2 in the eyes-off-road state can be 15 minutes (or 15 miles). The attribute A3 can represent the sleeping state of the driver 110 in terms of time or distance; its measure M3 can be 10 minutes (or 10 miles), and the length of the longest block LB3 in the sleeping state can be 4 minutes (or 4 miles). The attribute A4 can represent the manual state of the driver 110, measured in time or distance; its measure M4 can be 5 minutes (or 5 miles), and the length of the longest block LB4 in the manual state can be 4 minutes (or 4 miles). Additional attributes such as the travel time (e.g., 25 minutes), the travel distance (e.g., 20 miles), reclining, and seat forward can also be displayed for the route R1.

Another example is the set of characteristics of the route R2. The attribute A1 (e.g., the asleep or hands-free state) can have a measure M5 (e.g., time in minutes) of 20 minutes with a longest block LB5 of 15 minutes. The attribute A2 can have a measure M6 of 7 minutes with a longest block LB6 of 7 minutes. The attribute A3 (e.g., the sleeping state) can have a measure M7 of 7 minutes with a longest block LB7 of 7 minutes. The attribute A4 (e.g., the manual state) can have a measure M8 of 5 minutes with a longest block LB8 of 5 minutes. The travel time (e.g., 22 minutes), the travel distance (e.g., 15 miles), reclining, seat forward, etc. can also be displayed for the route R2.

The characteristics of the route R3 can be described as follows. The attribute A1 (e.g., the asleep or hands-free state) can have a measure M9 of 7 minutes with a longest block LB9 of 7 minutes. The attribute A2 can have a measure M10 of 0 minutes with a longest block LB10 of 0 minutes. The attribute A3 (e.g., the sleeping state) can have a measure M11 of 0 minutes with a longest block LB11 of 0 minutes. The attribute A4 (e.g., the manual state) can have a measure M12 of 12 minutes with a longest block LB12 of 12 minutes. Additional information such as the travel time (e.g., 19 minutes), the travel distance (e.g., 14 miles), reclining, seat forward, etc. can also be displayed for the route R3.
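For readability, the example values above can be collected into a single table-like structure. The following Python snippet is only an illustrative consolidation of those values; the container and key names are assumptions.

```python
# Characteristics of routes R1-R3 as described above.
# Each attribute A1-A4 maps to (measure in minutes, longest continuous block in minutes).
routes = {
    "R1": {"A1": (22, 5), "A2": (20, 15), "A3": (10, 4), "A4": (5, 4),
           "travel_time_min": 25, "travel_distance_mi": 20},
    "R2": {"A1": (20, 15), "A2": (7, 7), "A3": (7, 7), "A4": (5, 5),
           "travel_time_min": 22, "travel_distance_mi": 15},
    "R3": {"A1": (7, 7), "A2": (0, 0), "A3": (0, 0), "A4": (12, 12),
           "travel_time_min": 19, "travel_distance_mi": 14},
}

# Example query: which route offers the longest continuous block of attribute A1
# (the asleep or hands-free state)?
best = max(routes, key=lambda r: routes[r]["A1"][1])
print(best)  # R2
```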

The driver 110 can then compare the travel times, the measures, and the longest blocks related to an attribute (e.g., sleeping time) for each of the routes R1, R2, and R3. The driver may choose the route R2, since its longest block of sleeping time LB5 is 15 minutes out of 20 minutes (the measure M5). By contrast, the total sleeping time for the route R1 (the measure M1) is 22 minutes, which is longer than M5, but the longest block LB1 for that attribute is only 5 minutes. This is because the route R1 can include several discontinuous segments, each 5 minutes or less, within which the driver can occupy the sleeping state, whereas the route R2 may include only two such segments, of 15 minutes and 5 minutes, within which the driver can occupy the sleeping state.
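The comparison above can be reproduced by deriving, for one attribute, the total measure and the longest continuous block from the ordered segments of a route. A minimal sketch follows; the segment durations are hypothetical values chosen only to be consistent with the R1/R2 example in the text, and the function name is an assumption.

```python
from typing import List, Tuple


def measure_and_longest_block(segments: List[Tuple[float, bool]]) -> Tuple[float, float]:
    """Given (duration_minutes, state_allowed) pairs for consecutive route segments,
    return (total time the state is allowed, longest continuous block of that state)."""
    total = longest = current = 0.0
    for duration, allowed in segments:
        if allowed:
            total += duration
            current += duration
            longest = max(longest, current)
        else:
            current = 0.0  # a segment that disallows the state breaks the continuous block
    return total, longest


# Hypothetical segment layouts consistent with the example above: R1 allows sleeping only in
# short, discontinuous stretches (total 22 min, longest 5 min), while R2 allows it in two
# longer stretches (total 20 min, longest 15 min).
r1_segments = [(5, True), (3, False), (4, True), (2, False), (5, True),
               (4, False), (5, True), (2, False), (3, True)]
r2_segments = [(15, True), (2, False), (5, True)]

print(measure_and_longest_block(r1_segments))  # (22.0, 5.0)
print(measure_and_longest_block(r2_segments))  # (20.0, 15.0)
```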

The characteristics of a route, e.g., the attributes A1-A4 of the route R1, can be used to further divide the route (e.g., the route R1) into multiple segments. A route segment is a continuous sequence of road segments along which the driver is allowed to take on a certain set of driving states. Each segment can be identified by markers or other identifiers based on a specific color scheme. The route R1, for example, is shown with an overview of directions from a starting point A to a destination B and includes a first route segment 301, a second route segment 303, and a third route segment 305. The first route segment 301 could include, for example, three segments of the same road or three different roads. The first route segment 301 can be marked with a green color, indicating that the driver 110 is allowed to occupy the sleeping or hands-free state. The second route segment 303 can be marked in red, indicating that the driver 110 can only occupy the manual state. The third route segment 305 can be marked in yellow, indicating that the driver 110 is allowed to occupy the hands-free state and the eyes-off-road state.
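A minimal sketch, under assumed names, of how a display color might be chosen for a segment from the set of driver states it allows, following the green/red/yellow convention used in this example.

```python
from typing import AbstractSet


def segment_color(allowed_states: AbstractSet[str]) -> str:
    """Pick a marker color for a route segment from the driver states it allows:
    green  -> sleeping (or hands-free) permitted,
    yellow -> hands-free / eyes-off-road permitted (but not sleeping),
    red    -> only manual driving permitted."""
    if "sleeping" in allowed_states:
        return "green"
    if {"hands_free", "eyes_off_road"} & set(allowed_states):
        return "yellow"
    return "red"


print(segment_color({"sleeping", "hands_free"}))       # green  (first route segment 301)
print(segment_color({"manual"}))                       # red    (second route segment 303)
print(segment_color({"hands_free", "eyes_off_road"}))  # yellow (third route segment 305)
```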

The route R2 is similarly illustrated with an overview of directions (on a map) from A to B and includes a fourth route segment 311, a fifth route segment 313, and a sixth route segment 315. The fourth route segment 311 can be marked in green, the fifth route segment 313 in red, and the sixth route segment 315 in yellow.

The route R3 is illustrated with an overview of directions (on a map) from A to B and includes a seventh route segment 321 and an eighth route segment 323. The seventh route segment 321 can be marked in red, indicating that the driver 110 can only occupy the manual state. The eighth route segment 323 can be marked in yellow, indicating that the driver 110 is allowed to occupy both the hands-free state and the eyes-off-road state.

The identifying markers can also be placed next to the respective attributes A1-A4, providing visual guidance and allowing the driver 110 to compare different routes (e.g., the route R1 versus the route R3).

Some aspects of the route can also be highlighted depending on whether the autonomous vehicle has access to maps that provide detailed information, since such maps allow the autonomous vehicle to enter the autonomous mode. Detailed street maps may be stored in the database for some areas, while other areas may not have a detailed map. The navigation apparatus 10 can highlight sections of the map along the route with a semi-transparent background color; for example, areas where autonomous driving is available can be marked in faint yellow, while non-autonomous areas can be marked in blue.

FIG. 2C shows a third screen 400 of the navigation apparatus 10 according to an exemplary embodiment. The third screen 400 displays real-time directions and transition messages at various transition points while the autonomous vehicle operates or moves. A transition point is the location where the driver 110 can change from one state to another. A transition point is often located between two segments of a route, so there are generally about as many transition points as there are segments. A transition message tells the driver that he or she should be in a certain state starting at the transition point. A transition message can include an appended message that displays the characteristics of the route between a first transition point and the next transition point; the details of the appended message can be ordered according to the attributes, such as A1-A4. The transition points, transition messages, and appended messages can be determined by the server 20 or by the controller of the navigation apparatus 10.
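A hedged sketch of how a transition message and its appended detail might be assembled for the segment that starts at a transition point. The function name, message wording, and parameter layout are assumptions for illustration; the disclosure does not specify an implementation.

```python
from typing import Dict, List, Tuple


def build_transition_message(next_allowed_states: List[str],
                             minutes_by_attribute: Dict[str, float]) -> Tuple[str, str]:
    """Return (transition message, appended message) for the route segment that begins
    at a transition point, given the driver states the segment allows and, per attribute,
    how long each state may be occupied until the next transition point."""
    if next_allowed_states == ["manual"]:
        transition = "Disengage Autonomous Mode"
    else:
        transition = "Autonomous Mode: " + ", ".join(next_allowed_states)
    # The appended message lists each attribute with its remaining measure, in attribute order.
    appended = "; ".join(f"{attr}: {minutes:g} min"
                         for attr, minutes in minutes_by_attribute.items())
    return transition, appended


msg, detail = build_transition_message(["hands_free", "eyes_off_road"],
                                        {"A1": 10, "A2": 5, "A3": 5, "A4": 1})
print(msg)     # Autonomous Mode: hands_free, eyes_off_road
print(detail)  # A1: 10 min; A2: 5 min; A3: 5 min; A4: 1 min
```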

In FIG. 2C, the third screen 400 illustrates a real-time route 401 (e.g., the route R1 chosen by the driver 110), starting at the starting point A and followed by a first transition point and a second transition point. A first transition message MSG1 indicating "Disengage Autonomous Mode" or the "Manual" state can be displayed. A first appended message MSG11 can accompany the first transition message MSG1 and provide additional details. For example, the first appended message MSG11 displays the attributes A4 and A1 with 10 minutes and 2 minutes, respectively, indicating that the driver 110 is to be in the manual state (represented by the attribute A4 in one embodiment) for 10 minutes and can be in the hands-free state (represented by the attribute A1 in one embodiment) for 2 minutes. A second transition message MSG2, indicating that the driver 110 can enter the autonomous mode, can also be displayed; the second message MSG2 may indicate "Hands-free," "Eyes-off-road," or other information. A second appended message MSG21 can accompany the second message MSG2. The second appended message MSG21 lists the attributes A1, A2, A3, and A4 with measures of 10 minutes, 5 minutes, 5 minutes, and 1 minute, respectively, indicating, for example, that the driver 110 can occupy the hands-free state (represented by the attribute A1 in one embodiment) for 10 minutes until the next transition point.

In one embodiment, the transition message and the appended message may be displayed together. In another embodiment, the third screen 400 can be configured to display the appended message upon activation of the transition message, for example by tapping it. A person skilled in the art will recognize that the message configurations discussed in the various embodiments are non-limiting examples and can be modified within the scope of this disclosure.

The navigation apparatus 10 also allows live updating of the allowed driver states, real-time traffic data, and suggested rerouting. Live updating can be performed for one or more routes on the second screen 300 or for the active route on the third screen 400. For example, when the driver starts his or her commute, sleeping might be allowed along a route; however, in the event of a road incident such as a truck jackknife, traffic can be rerouted. The navigation apparatus 10 can then indicate whether to switch to manual mode or to re-plan the route in order to avoid the truck jackknife. The autonomous vehicle also has sensors that can confirm in real time that the currently allowed state is still correct.
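The live-update behaviour could be sketched as a simple rule that downgrades the allowed states of an affected segment when a road event is reported. This is only an illustrative sketch; the event names and the downgrade rule are assumptions.

```python
def update_allowed_states(current_allowed: set, road_event: str) -> set:
    """Downgrade the allowed driver states on a route segment when a road event
    (e.g. an accident such as a truck jackknife) is reported along it."""
    severe_events = {"truck_jackknife", "accident", "severe_weather"}
    if road_event in severe_events:
        # Only manual driving remains permitted; the driver is prompted to take over
        # or the route is re-planned to avoid the event.
        return {"manual"}
    return current_allowed


print(update_allowed_states({"sleeping", "hands_free"}, "truck_jackknife"))  # {'manual'}
```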

The first screen 200, the second screen 300, and the third screen 400 may all be displayed on the same display. Alternatively, one or more of the screens 200, 300, and 400 can be displayed on the same screen sequentially, or they can be shown on different displays or multiple screens.

FIG. 3 is a flowchart of a navigation process performed by the navigation apparatus 10 according to an exemplary embodiment. Once at least one route is established from the starting point A to the destination B, the server 20 sends a signal to the navigation apparatus 10. In step S301, the navigation apparatus 10 receives the routes, e.g., the routes R1 to R4, and the characteristics of those routes (e.g., the attributes A1-A4, the measures M1-M12, and the longest blocks LB1-LB12).

In step S303, the navigation apparatus 10 displays the routes (e.g., the routes R1-R4) on the first screen 200 together with the characteristics of each route and the transition points. To compare the characteristics of selected routes, the user can choose one or more routes (e.g., the routes R1, R2, and R3). On the second screen 300, the selected routes and their characteristics (e.g., the attributes A1-A4, the measures M1-M12, and the longest blocks LB1-LB12) are displayed, along with the overview of directions with highlighting and identifiers, as discussed with reference to FIG. 2B. After the user activates a route, such as the route R1, the third screen 400 is displayed, on which real-time instructions and directions are provided to the user. Providing real-time instructions and directions can include accessing and processing (via the controller) sensor data, such as data from a global positioning sensor, a motion sensor, or an accelerometer.

In step S305, a transition message is generated for changing from a first expected state to a second expected state. For example, one of the following transition messages can be generated: the first message MSG1 indicating the first expected state, "Disengage Autonomous Mode," or the second message MSG2 indicating the second expected state, "Autonomous Mode," as discussed with reference to FIG. 2C. The expected state is based on the characteristics of the route segment (discussed with reference to FIG. 2B). For example, the expected state can be hands-free or sleeping along a straight interstate highway with little traffic, whereas the expected state can be manual along a winding road or a road with heavy traffic. An appended message can also be generated in step S305, as discussed with reference to FIG. 2C.

In step S307, the transition messages are displayed on the third screen 400 while the vehicle is driving along a segment of the route. For example, when the autonomous vehicle reaches the transition point C, the second transition message MSG2 is displayed on the third screen 400.

Further, in step S309, the autonomous vehicle can monitor the current driver state using sensors such as the different types of sensors shown in FIG. 1. Sensors used to monitor the driver state can include a position sensor to detect the driver's hand position on the steering wheel 105, a camera to detect the eyes-off-road state, a sensor to detect the reclining position of the seat, etc.

In step S311, the navigation apparatus 10 checks whether the driver's current state corresponds to the expected state (e.g., the manual state along the second route segment 303) at the current position along the route (e.g., the route R1). If it does, the system continues monitoring the driver's state in step S309. If the driver's current state differs from the expected state, an alert message may be generated in step S313. Under certain conditions, the alert message is not generated; for example, if the driver 110 holds onto the steering wheel 105 while the expected state is hands-free, the alert message is not generated. An alert message can be displayed on the third screen 400, delivered as an audio instruction, or delivered as a vibration in the seat 111, the steering wheel 105, etc.
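A minimal sketch, under assumed names, of the check in steps S311 and S313: compare the monitored driver state against the expected state for the current segment and decide whether an alert is warranted. The exception for a driver keeping hands on the wheel during a hands-free stretch follows the text above.

```python
def needs_alert(current_state: str, expected_state: str) -> bool:
    """Return True if the monitored driver state deviates from the expected state
    in a way that warrants an alert (step S313)."""
    if current_state == expected_state:
        return False
    # Holding the wheel while the expected state is hands-free is a permitted deviation.
    if expected_state == "hands_free" and current_state == "manual":
        return False
    return True


print(needs_alert("manual", "manual"))      # False: matches the expected state
print(needs_alert("manual", "hands_free"))  # False: permitted deviation, no alert
print(needs_alert("sleeping", "manual"))    # True: alert the driver
```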

FIG. 4 is a flowchart of a second display process for the second screen 300 performed by the navigation apparatus 10 according to an exemplary embodiment. The second display process begins when the user selects one or more routes on the first screen 200. In step S401, the selected routes are displayed with highlighting and identifiers that indicate the expected driver states along each route; for example, the route segments 301, 303, and 305 of the route R1 are highlighted as shown in FIG. 2B.

In step S403, characteristics such as the travel time, the hands-free state, the eyes-off-road state, the sleeping state, the manual state, and the travel distance are displayed below the overview of each of the routes R1, R2, and R3.

In step S405, road events can be identified and updated via the second screen 300. Road events can include accidents, repairs, bad weather conditions, and so on. The road events can be identified by the server 20 or by the controller of the navigation apparatus 10 based on data from the autonomous vehicle's sensors, and the resulting data is transmitted to the second screen 300.

In step S407, the second screen 300 can be updated with prompts regarding unprotected left turn information. Based on data from the autonomous vehicle's sensors, the unprotected left turn information can be provided by either the server 20 or the controller of the navigation apparatus 10, and the driver can be alerted if an upcoming left turn is unprotected.

FIG. 5 is a flowchart of a third display process for the third screen 400 performed by the navigation apparatus 10 according to an exemplary embodiment of the present disclosure. The third display process begins when the user selects a route from the first screen 200 or the second screen 300. In step S501, the navigation apparatus 10 starts tracking the current position of the autonomous vehicle. The current position can also be marked on the third screen 400 and can be determined based on the GPS installed in the vehicle. In step S503, the current position is compared with the transition points along the route chosen by the user. As discussed with reference to FIG. 3 (step S305), the transition points are part of the characteristics that the server 20 determines for the route. In step S505, the comparison determines whether the current position is within close proximity of a transition point (e.g., the second transition point B) along the route (e.g., the real-time route 401) chosen by the user. If the current position is not within close proximity of the transition point (e.g., the transition point C), the third display process continues from step S501.
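The proximity test of steps S503-S505 could be sketched as a distance check between the vehicle's GPS position and the next transition point. The haversine formula, the threshold value, and the function name below are assumptions for illustration only.

```python
import math


def within_proximity(current_latlon: tuple, transition_latlon: tuple,
                     threshold_m: float = 200.0) -> bool:
    """Return True if the vehicle's GPS position is within threshold_m metres of the
    next transition point (steps S503-S505)."""
    lat1, lon1 = map(math.radians, current_latlon)
    lat2, lon2 = map(math.radians, transition_latlon)
    # Haversine great-circle distance on a spherical Earth (radius ~6,371,000 m).
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    distance_m = 2 * 6371000 * math.asin(math.sqrt(a))
    return distance_m <= threshold_m


# Two points a few tens of metres apart: the vehicle is close enough to trigger step S507.
print(within_proximity((35.6895, 139.6917), (35.6900, 139.6920)))  # True
```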

On the contrary, if the current position is within close proximity of the transition point (e.g., the transition point C), a transition message (e.g., the second message MSG2) is determined in step S507 based on the characteristics of the route at the transition point.

In step S509, the transition message and the expected state that the driver 110 should be in are determined based on the characteristics determined by the server 20, and the transition message is generated and sent to the third screen 400. The transition message, e.g., MSG2, can be generated as a visual instruction as discussed with reference to FIG. 2C, and it can also be delivered in audio format. An appended message can also be generated and sent to the third screen 400, as shown in FIG. 2C.

As live updating is possible during the driving process, the permitted driver states can also be updated, and the third screen 400 can prompt the driver for rerouting. In the case of a truck jackknife, for example, the permissible state can be changed from sleeping to manual. The driver can also be alerted via audio, visual cues, or vibration of the seat or the steering wheel.

FIG. 6 illustrates a user device 600 according to an exemplary embodiment, which is an example representation of the navigation apparatus 10. In certain embodiments, the user device 600 can be a smartphone. However, the skilled artisan will recognize that the features described herein can be adapted to other devices, such as a tablet, a computer, a server, or an e-reader. The exemplary user device 600 includes a controller 610 and a wireless communication processing circuitry 602 connected to an antenna 601. A voice processing circuitry 603 is connected to a speaker 604 and a microphone 605.

The controller 610 executes a program to perform the functions and processes described with respect to FIGS. 2A, 2B, 2C, 3, 4, and 5. These functions may be performed by the controller 610 using instructions stored in a memory 650. The processes illustrated in FIGS. 3-5 can be stored in the memory 650 and executed based on user inputs received via the first screen 200 and the second screen 300. The functions can also be executed on an external device such as the server 20, accessed via a network, or obtained from a non-transitory computer-readable medium.

The memory 650 can include Read Only Memory (ROM), Random Access Memory (RAM), or a memory array that includes a combination of volatile and non-volatile memory units. The memory 650 may be used by the controller 610 as working memory while the algorithms and processes of the present disclosure are being executed. The memory 650 can also be used for long-term storage of data, such as image data and information related thereto, as well as operation view information and a list of commands.

The user device 600 includes a control line CL and a data line DL as internal communication bus lines. Control data to and from the controller 610 may be transmitted through the control line CL, while the data line DL may be used for the transmission of voice data, display data, etc.

The antenna 601 transmits and receives radio waves to and from base stations in order to perform radio-based communication, such as cellular telephone communication. The wireless communication processing circuitry 602 controls the communication performed between the user device 600 and other external devices via the antenna 601; for example, the wireless communication processing circuitry 602 can control the communication between base stations used for cellular phone communication.

The speaker 604 emits an audio signal corresponding to audio data supplied from the voice processing circuitry 603. The microphone 605 detects audio in the surrounding environment, converts it into an audio signal, and outputs the audio signal to the voice processing circuitry 603 for further processing. The voice processing circuitry 603 can demodulate and/or decode audio data read from the memory 650, or audio data received via the wireless communication processing circuitry 602 and/or the short-distance wireless communication processing circuitry 607. The voice processing circuitry 603 can also decode audio signals obtained from the microphone 605.

The exemplary user device 600 can also include a display 620, a touch panel 630, an operation key 640, and a short-distance wireless communication processing circuitry 607 connected to an antenna 606. The display 620 can be a Liquid Crystal Display (LCD), an organic electroluminescence display panel, or another type of display screen technology. The display 620 can display still and moving image data as well as operational inputs, such as the buttons 201-204, used to control the user device 600. A GUI with multiple screens, as illustrated in FIGS. 2A-2C, can be displayed on the display 620 to allow a user to control various aspects of the user device 600 and/or other devices. The display 620 can also display characters and images (e.g., the overviews of FIGS. 2B and 2C) received by the user device 600, stored in the memory 650, or accessed from an external network device such as a router. For example, the user device 600 can access the Internet and display text and/or images transmitted from a Web server.

The touch panel 630 can include a physical touch panel display screen and a touch panel driver. The touch panel 630 can include one or more touch sensors for detecting an input operation on an operation surface of the touch panel display screen. The touch panel 630 can also detect a touch shape and a touch area. As used herein, the phrase "touch operation" refers to an input operation performed by touching the operation surface of the touch panel display with an instruction object such as a finger, a thumb, or a stylus-type device. In the case where a stylus or the like is used in a touch operation, the stylus may include a conductive material at least at its tip so that the sensors of the touch panel 630 can detect when the stylus approaches or contacts the operation surface of the touch panel display (similar to the case where a finger is used).

The touch panel 630 can be disposed adjacent to the display 620 (e.g., laminated) or formed integrally with it. For simplicity, the present disclosure assumes that the touch panel 630 is formed integrally with the display 620, and therefore the examples discussed herein may describe touch operations as being performed on the surface of the display 620 rather than on the touch panel 630. The skilled artisan will recognize that this is not limiting.

For simplicity, the present disclosure also assumes that the touch panel 630 uses capacitance-type touch panel technology. It should be noted, however, that the present disclosure can easily be applied to other types of touch panel technology (e.g., resistance-type touch panels) with alternative structures. In certain aspects of the present disclosure, the touch panel 630 can include transparent electrode touch sensors arranged in the X-Y direction on a transparent sensor glass.

A touch panel driver can be included in the touch panel 630 to control processing related to the touch panel 630, such as scanning control. For example, the touch panel driver can scan each sensor in an electrostatic-capacitance transparent electrode pattern in the X and Y directions and detect the electrostatic capacitance value of each sensor to determine when a touch operation is being performed. The touch panel driver can output a coordinate and the corresponding electrostatic capacitance value for each sensor, or it can output a sensor identifier that can be mapped to a coordinate on the touch panel display screen. The touch panel driver and the touch sensors can also detect when an instruction object, such as a finger, is within a predetermined distance of the operation surface of the touch panel display screen; that is, the instruction object does not necessarily have to touch the touch panel display screen for the touch sensors to detect it and for the processing described herein to be performed. In certain embodiments, the touch panel 630 can detect the position of a user's fingers around the edge of the display panel (e.g., gripping a protective cover that surrounds the display/touch panel). The touch panel driver can transmit signals, e.g., in response to the detection of a touch operation, in response to a query from another element, based on timed data exchange, etc.
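As a rough illustration of the scan output described above, the following sketch maps sensor identifiers to display coordinates and reports the coordinates whose capacitance change exceeds a touch threshold. The data layout, threshold, and function name are assumptions for illustration.

```python
from typing import Dict, List, Tuple


def detect_touches(capacitance_by_sensor: Dict[str, float],
                   coordinate_by_sensor: Dict[str, Tuple[int, int]],
                   threshold: float) -> List[Tuple[int, int]]:
    """Return the display coordinates of sensors whose measured capacitance change
    exceeds the touch threshold during one scan of the X-Y electrode grid."""
    return [coordinate_by_sensor[sensor_id]
            for sensor_id, value in capacitance_by_sensor.items()
            if value >= threshold]


scan = {"s0": 0.2, "s1": 1.4, "s2": 0.1}
coords = {"s0": (10, 40), "s1": (120, 300), "s2": (200, 15)}
print(detect_touches(scan, coords, threshold=1.0))  # [(120, 300)]
```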

The protective casing can surround the touch panel 630 and the display 620 and can also contain other elements of the user device 600. In certain embodiments, the touch panel 630 sensors can detect the position of the user's fingers on the protective casing (even when they are not directly on the display 620). The controller 610 can then perform the display control processing described herein based on the position of the user's fingers gripping the casing. For example, an interface element may be moved (e.g., to a location closer to one or more fingers) depending on the detected finger positions.

Further, in some embodiments, the controller 610 can be configured to determine which hand is holding the user device 600 based on the detected finger positions. For example, the touch panel 630 sensors might detect multiple fingers on the left side of the user device 600 (e.g., on the edge of the display 620 or on the protective casing) and only one finger on the right side. In this case, the controller 610 can determine that the user is holding the user device 600 with his or her right hand, because the detected grip pattern corresponds to the pattern expected when the user device 600 is held only with the right hand.
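A small sketch of the grip-pattern heuristic just described: count the edge contacts detected on each side of the panel and infer the holding hand. The simple majority rule and the names used are assumptions, not the disclosed implementation.

```python
def infer_holding_hand(left_edge_contacts: int, right_edge_contacts: int) -> str:
    """Infer which hand holds the device from the number of finger contacts detected
    along each edge of the display or protective casing."""
    # Several fingers on the left edge with a single finger (thumb) on the right edge
    # suggests a right-hand grip, and vice versa; equal counts are left undetermined.
    if left_edge_contacts > right_edge_contacts:
        return "right hand"
    if right_edge_contacts > left_edge_contacts:
        return "left hand"
    return "undetermined"


print(infer_holding_hand(left_edge_contacts=3, right_edge_contacts=1))  # right hand
```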

The operation key 640 can include one or more buttons (e.g., the route buttons 201-204) or similar external control elements (e.g., the button inputs 103 of FIG. 1), which generate an operation signal based on user input. In addition to the outputs from the touch panel 630, these operation signals can be supplied to the controller 610 for performing related processing and control. In certain aspects of the present disclosure, the processing and/or functions associated with external buttons and the like may instead be performed by the controller 610 in response to inputs on the touch panel 630 display screen. In this way, external buttons can be eliminated from the user device 600 in favor of touch operations, which improves water-tightness.

The antenna 606 can transmit and receive electromagnetic waves to and from external apparatuses, and the short-distance wireless communication processing circuitry 607 can control the wireless communication performed with those external apparatuses. Bluetooth, IEEE 802.11, and near-field communication (NFC) are non-limiting examples of wireless communication protocols that can be used for inter-device communication via the short-distance wireless communication processing circuitry 607.

The user device 600 can also include a motion sensor 608 that detects features of the device's movement (i.e., one or more movements). The motion sensor 608 can include an accelerometer to detect acceleration, a gyroscope to detect angular velocity, a geomagnetic sensor to detect direction, and a geolocation sensor to detect location. The motion sensor 608 can generate a detection signal that contains data about the detected motion. For example, the motion sensor 608 can detect a number of distinct movements within a motion (e.g., from the start to the stop of a series of movements, or within a predetermined time period), a number of physical shocks to the user device 600 (e.g., hitting, jarring, etc.), a speed and/or acceleration of the motion (instantaneous and/or temporal), and other motion features, and the detected motion features can be included in the generated detection signal. The detection signal can be transmitted to the controller 610, and further processing can be performed based on the data in the detection signal. The motion sensor 608 can work in conjunction with a Global Positioning System (GPS) circuitry 660.

The user device 600 can also include camera circuitry 609, which has a lens and a shutter for capturing photographs of the surroundings of the user device 600. In an embodiment, the camera circuitry 609 captures the surroundings on the side of the user device 600 opposite the user. The captured photographs can be displayed on the display panel 620 and stored in memory circuitry, which can be located within the camera circuitry 609 or be part of the memory 650. The camera circuitry 609 can be a separate feature attached to the user device 600 or a built-in camera feature. The camera circuitry 609 can also be used to detect features of motion, i.e., one or more movements, of the user device 600.

A software application running on the user device 600 can request data processing from the server 20 via a wireless network. The server 20 contains a storage controller that manages a database on a hard disk and a query manager that executes SQL (Structured Query Language) statements against the data on the disk or in the database. The query manager implements various processing functions, such as query syntax analysis, optimization, and execution plan generation, as well as a simple network communication function for sending signals to and receiving signals from the controller.

In the above description, any processes, descriptions, or blocks in the flowcharts should be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of the exemplary embodiments, in which functions can be executed out of the order shown or discussed, depending upon the functionality involved, as would be understood by those skilled in the art.

While certain embodiments have been described, they are presented by way of example only and are not intended to limit the scope of the disclosure. The novel methods, apparatuses, and systems described herein can be embodied in a variety of other forms; furthermore, various omissions, substitutions, and changes in the form of the methods, apparatuses, and systems described herein can be made without departing from their spirit. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the present disclosure. For example, this technology can be structured for cloud computing, whereby a single function is shared and processed collaboratively among a plurality of apparatuses via a network.

Summary for “Apparatus & method for transitioning between driving states during navigation of highly automated vechicle”

“Field of the Disclosure.”

“This invention relates to improvements in highly-automated or autonomous vehicles. The present disclosure is more specific and relates to the application of driver preferences or historical driving performance to determine an optimal route for highly automated autonomous vehicles.

“Description of Related Art”

“A conventional navigation system allows a driver to input a destination address. The navigation system then determines the direction to that destination using the map available to it. To determine the current vehicle position, the conventional navigation system has a global positioning sensor (GPS). The conventional navigation system uses audible or visual instructions to guide the driver to their destination based on the vehicle’s current position. The conventional navigation system chooses which route to give the driver, as there are many routes between the current position (and the destination). The route is often chosen by taking into account factors such as travel distance and travel time that are important for non-autonomous driving. Some navigation systems incorporate traffic and road event information, such as repairs or accidents, into the directions. This allows the driver to choose a more congested route.

Although they are useful and appropriate for non-autonomous vehicles, the conventional navigations are not adaptable and don’t account for factors that might be relevant for autonomous driving. For example, the traditional navigation system routing might not be able account for hands-free time or sleep time which are some advantages of the autonomous vehicle over a non-autonomous one during route selection and determination.

Autonomous vehicles are the next generation of automobiles with a higher level of functionality and improved vehicle performance. Autonomous vehicles not only improve driving performance and overall vehicle efficiency, but also allow drivers to sleep, take their hands off of the wheel, and look at the road. In certain circumstances. Smart components, including smart sensors and communication with other vehicles, are required to enable these increased capabilities. A more efficient routing system or navigation system is essential to enable the vehicle to have greater capabilities. Conventional navigation systems are limited in capabilities and can be restricted. There is a constant need for improved navigation systems to be used by autonomous vehicles.

A navigation apparatus is described in an embodiment of the invention. A navigation apparatus for autonomous vehicles includes circuitry that can receive via a network at least one path between a starting location and a destination. The first screen allows you to select a first set or routes. Each characteristic associated with the route is associated with a measure, and the longest block within which each characteristic may be performed continuously. The circuitry can also be configured to divide the route of the at most one route to create a plurality based on the plurality characteristics. A first segment is highlighted using a first identifier that corresponds to a first characteristic. A second segment is highlighted using a second identifyr that corresponds to a third characteristic of each of the plurality. Finally, display the first set routes, the plurality characteristics corresponding the first set, and the measure and longest block corresponding the plurality on a secondary screen.

“A method for navigation of an autonomous car is also provided according to an embodiment in the present disclosure. The method involves receiving via a network at least one path between a starting location and a destination, and displaying that route on a screen that allows selection from a first set route. Each characteristic of the plurality is associated with a measure or a block within which each characteristic may be performed continuously. A method also involves dividing a route from the at least one route using the processing logic to create a plurality based on the plurality characteristics. A first segment is highlighted using a first identifier that corresponds to a first characteristic in the plurality. A second segment is highlighted using a second identifyr that corresponds to a third characteristic of each of the plurality. Finally, the processing circuitry displays the first set routes, the plurality corresponding the first set, as well as the measure and longest block

“Further,” according to the embodiment of the disclosure, there is a non-transitory computer readable medium that stores a program that, when executed by the computer, causes it to perform the method of navigation for an autonomous vehicle as discussed above.

“The above general description of the illustrated implementations and the detailed description thereof are only exemplary aspects of this disclosure and are not intended to be restrictive.”

“The following description is in conjunction with the attached drawings is intended to describe various embodiments of disclosed subject matter. It is not intended that it represents the only embodiment. The description may contain specific details in certain cases to help you understand the disclosed embodiments. It will be obvious to those who are skilled in the arts that the disclosed embodiments may be used without these specific details. Sometimes, well-known components and structures may be displayed in block diagrams to help avoid obscure the concepts of the disclosed subject matter.

“Refer throughout the specification only to?one embodiment?” or ?an embodiment? It means that at least one embodiment contains a particular structure, feature, or characteristic. The phrase ‘in one embodiment? Or?in one embodiment? The same embodiment may be mentioned in different places in the specification. In one or more embodiments, particular features, structures, or characteristics can be combined in any way that is most appropriate. It is also intended that embodiments of disclosed subject matter include modifications and variations.

“It is important to note that the singular forms?a and?an in the specification and the attached claims are used. ?an,? ?an,? If the context requires otherwise, plural referents are included. This is, unless the context expressly dictates otherwise. ?an,? ?the,? The meanings of the words?the,? and the like are?one or more. It is also important to understand that terms like?left?,?, and?right? are not interchangeable. ?right,? These terms and others that could be used herein are merely examples and may not limit the scope of embodiments of this disclosure to any particular configuration or orientation. Additionally, terms like?first?,? ?second,? ?third,? ?third,?

“Furthermore the terms?approximately? ?proximate,? ?minor,? These terms and others generally refer to ranges that include the identified values within a margin between 20%, 10%, or preferably 5% in some embodiments and any other values therebetween.

“FIG. “FIG. 1” illustrates a navigation device for an autonomous vehicle in accordance with an exemplary embodiment. The navigation apparatus 10 is part of the autonomous vehicle. There are also various electronic and mechanical parts. Although certain aspects of this disclosure are useful for specific vehicles, others may be applicable to any vehicle, including cars, trucks and motorcycles, buses, boats, planes, helicopters or lawnmowers. An autonomous vehicle could be a fully autonomous vehicle or semi-autonomous vehicle. It can also include vehicles with advanced driver assistance systems (ADAS), such as adaptive cruise control, lane departure alert, and vehicle equipped with lane departure warning and lane departure alert. The autonomous vehicle can also include the following features: a steering device such as steering wheel 101; a navigation apparatus 10, with a touch screen 102; and a driving mode selector system such as a modeshifter 115. You may also find various input devices in the vehicle, including a mode switch 115, touch screen 102, or button inputs103 that allow you to activate or deactivate one or more autonomous driving modes. Also, it can be used to enable a driver 120 (or passenger) to give information, such as a navigation destination, to the navigation apparatus 10. One embodiment of the invention allows the navigation apparatus 10 to be integrated into the dashboard of an autonomous vehicle. The navigation apparatus 10 may also be attached to the dashboard or windshield of an autonomous vehicle.

The autonomous vehicle may optionally have more than one display. The vehicle can include a second display (113) to show information about the status of the autonomous car, navigation status information from the navigation apparatus 10, and other vehicle status information from a computer, such as an ECU (electronic control unit) installed on the vehicle. In the current example, the second display 113 displays a driving mode (?D?) and a speed (?20?). The second display 113 displays a driving mode?D? and a speed of?20? The vehicle is currently in drive mode and moving at 20 mph. If the drive switches to autonomous mode, the second display (113) can show the driving mode as “AV”. The vehicle is now in autonomous mode. Other modes include a hands-free mode?HF?”, an eyes-off road mode??EOR?, and autopilot.

“In one embodiment, the navigation apparatus 10 can communicate with other components of the vehicle, such as the vehicle's conventional ECU (not illustrated), and can send information to or receive information from different systems of the autonomous vehicle, such as a brake pedal 107, an accelerator 109, and the steering wheel 105, to control the vehicle's movement, speed, and so on.

The autonomous vehicle may also include a GPS receiver that determines the vehicle's latitude and longitude. An accelerometer, the GPS receiver, or other direction/speed detection devices can be used to determine the vehicle's speed and changes in direction. The vehicle may also have components that detect objects and conditions outside the vehicle, such as other vehicles, obstructions, traffic signals, signs, and trees. The detection system may include lasers, sonar, radar detection units (such as those used for adaptive cruise control), cameras, or any other detection devices that record data and send signals to the ECU. The autonomous vehicle may also be equipped with a DSRC (dedicated short-range communication) sensor and an AV penetration sensor, which allow detection of and communication with other autonomous vehicles within range. The AV penetration sensor may include LIDAR (light detection and ranging), which provides distance or range information, and a stereo camera that allows object recognition.

The vehicle can use these sensors to understand and potentially respond to its surroundings in order to ensure safety for passengers and surrounding objects. The vehicle types, the number and type of sensors, the sensor locations, and the sensor fields described here are merely examples; other configurations are also possible.

In one embodiment, in addition to the sensors described above, the vehicle may include sensors typical of non-autonomous vehicles, such as tire pressure sensors, engine temperature sensors, brake heat sensors, brake pad status sensors, fuel quality sensors, oil level sensors, oil quality sensors, and air quality sensors.

“The server 20, the ECU, or the navigation apparatus 10 can transmit information to or receive information from other computers. For example, a map stored on the server 20 can be transferred to the navigation apparatus 10, and sensor data from the autonomous vehicle can be sent to the server 20 for processing. Data from other vehicles can also be stored in the server 20 database and used for navigation. The sensor information and the navigation-related functions (described with reference to FIGS. 2A-2C and 3-5) can be implemented with a centralized architecture, in which they are stored and processed by the server 20. Alternatively, a distributed architecture can be used, in which the navigation-related functions are processed partly by the server 20 and partly by the navigation apparatus 10.
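
The centralized/distributed split might be pictured with a small fallback routine: ask the server 20 for routes first and plan locally only if it cannot be reached. This is a minimal sketch; the function names and callable interfaces are assumptions for illustration.

```python
from typing import Callable, List, Optional

Route = dict  # stand-in for a route description (see later sketches)

def get_routes(start: str, destination: str,
               server_request: Callable[[str, str], Optional[List[Route]]],
               local_planner: Callable[[str, str], List[Route]]) -> List[Route]:
    """Centralized case: delegate routing to the server 20.
    Distributed case: fall back to on-device planning when the server
    cannot be reached or returns nothing."""
    try:
        routes = server_request(start, destination)
    except ConnectionError:
        routes = None
    return routes if routes else local_planner(start, destination)

# Usage with stand-in callables:
routes = get_routes("A", "B",
                    server_request=lambda s, d: None,           # server unavailable
                    local_planner=lambda s, d: [{"name": "R1"}])
print(routes)  # [{'name': 'R1'}]
```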

The navigation apparatus 10 provides a human-machine interface (HMI) that allows the user to view and select different routes from a starting location to a destination, along with the characteristics of each route. The characteristics of a route are the states that the driver 110 can occupy while the autonomous vehicle moves along the route, together with other factors specific to the route. The characteristics can include driver states such as hands-free, eyes-off-road, sleeping, reclined or forward seating, manual, and eyes required on a dedicated display, as well as safety, travel time, travel distance, and so on. A clonable screen refers to a display that the driver can copy onto an external device, such as a smartphone or tablet. Because the external device can be placed high on the dashboard, more is known about the timing of reengagement once a warning has been issued; there may be areas where eyes on the external device are allowed but eyes off the road are not. The HMI can be a software program executed on the controller of the navigation apparatus 10 that displays multiple screens, accepts user inputs, and presents results. Three screens are shown in various embodiments to illustrate various aspects of the disclosure.
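
The route characteristics described above, where each driver state is paired with a measure and a longest continuous block, lend themselves to a small data model. The following is a minimal sketch in Python; the class and field names are hypothetical and are not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Characteristic:
    """One route attribute, e.g. hands-free or eyes-off-road."""
    attribute: str        # driver state the characteristic refers to
    measure: float        # total time (minutes) or distance (miles)
    longest_block: float  # longest continuous stretch for this state

@dataclass
class Route:
    name: str
    travel_time_min: float
    travel_distance_mi: float
    characteristics: List[Characteristic] = field(default_factory=list)

    def measure_of(self, attribute: str) -> float:
        """Total measure recorded for a given driver state."""
        return sum(c.measure for c in self.characteristics
                   if c.attribute == attribute)

# Example using values quoted for route R1 later in this description.
r1 = Route("R1", travel_time_min=25, travel_distance_mi=20,
           characteristics=[Characteristic("A1", 22, 5)])
print(r1.measure_of("A1"))  # 22
```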

The navigation apparatus 10 may also determine route characteristics that allow the autonomous vehicle to operate in a teammate mode. In the teammate mode, the autonomous vehicle can perform certain operations, such as steering and acceleration, with no driver input along sections of the route that involve winding roads, severe weather conditions, driving for pleasure, or harsh traffic conditions; alternatively, control can be handed over to the driver along such sections.

“FIG. 2A shows a first screen 200 of the navigation apparatus 10, according to an exemplary embodiment. The first screen 200 displays at least one route button, such as route buttons 201, 202, 203, and 204, which correspond to routes R1, R2, R3, and R4, respectively. The buttons 201-204 can display a summary of each route using parameters such as travel time, hands-free time, scenery, and a map showing the route. The routes R1-R4 can be determined by the server 20, which implements routing algorithms, route optimization algorithms, and other navigation algorithms appropriate for the autonomous vehicle. The first screen 200 may also include a settings button 215 that allows the user to select route preferences, choose which summary parameters are displayed on the buttons 201-204, store current routes, add favorites, display messages at specific points of interest, and so on. The server 20 can also take these settings into consideration when determining the routes.

The route buttons 201-204 can be activated to assist the user in selecting a route. In one embodiment, a second screen 300 (discussed further with reference to FIG. 2B) can display the characteristics of each selected route to allow comparisons between different routes. Alternatively, or in addition, the user can directly select one of the routes (e.g., route R1 or R4) based on the summary characteristics shown on the first screen. In certain embodiments, a route such as R1 can be activated by double-tapping the button 201 or by pressing and holding the button 201.

“FIG. 2B shows the second screen 300 of the navigation apparatus 10, according to an exemplary embodiment. The second screen 300 shows details about the selected routes (e.g., routes R1, R2, and R3), including the routes' characteristics and the measures associated with them. The characteristics of the selected routes can also be indicated on the map as visual guidance for the driver 110. Non-limiting examples of route characteristics include hands-free, eyes-off-road, sleeping, reclining, forward sitting, manual, safety, and travel time.

“In FIG. 2B, each of the attributes A1, A2, A3, and A4 is associated with a respective measure M1, M2, M3, and M4 for the route R1. The measure M1 can refer to time in minutes, while the measure M2 could, for example, refer to distance in miles. The second screen can also display the longest blocks LB1, LB2, LB3, and LB4 of a route (e.g., route R1) in which the attributes A1, A2, A3, and A4, respectively, can be performed continuously. If a route is divided into several segments, the longest block is the longest segment in which an attribute can be performed continuously. For example, the attribute A1 for route R1 could be the driver's asleep or hands-free state, measured in time or distance. The measure M1 corresponding to the attribute A1 can be 22 minutes (or 22 miles), and the length of the longest block LB1 in which the driver can occupy that state can be 5 minutes (or 5 miles), which corresponds to about 23% of M1. The attribute A2 can represent the eyes-off-road state of the driver 110 in terms of time or distance; its measure M2 can be 20 minutes (or 20 miles), and the length of the longest block LB2 in the eyes-off-road state can be 15 minutes (or 15 miles). The attribute A3 can represent the sleeping state of the driver 110 in terms of time or distance; its measure M3 can be 10 minutes (or 10 miles), and the length of the longest block LB3 in the sleeping state can be 4 minutes (or 4 miles). The attribute A4 can represent the manual state of the driver 110, measured in time or distance; its measure M4 can be 5 minutes (or 5 miles), and the length of the longest block LB4 in the manual state can be 4 minutes (or 4 miles). Additional attributes, such as the travel time (e.g., 25 minutes), the travel distance (e.g., 20 miles), reclining, and seat forward, can also be displayed for the route R1.

“As another example, the characteristics of route R2 can be as follows. The attribute A1 (e.g., the asleep or hands-free state) can have a measure M5 (e.g., time in minutes) of 20 minutes with a longest block LB5 of 14 minutes. The attribute A2 (e.g., the eyes-off-road state) can have a measure M6 of 7 minutes with a longest block LB6 of 7 minutes. The attribute A3 (e.g., the sleeping state) can have a measure M7 of 7 minutes with a longest block LB7 of 7 minutes. The attribute A4 (e.g., the manual state) can have a measure M8 of 5 minutes with a longest block LB8 of 5 minutes. The travel time (e.g., 22 minutes), travel distance (e.g., 15 miles), reclining, seat forward, and so on can also be displayed for the route R2.

“The characteristics of route R3 can be described as follows. The attribute A1 (e.g., the asleep or hands-free state) can have a measure M9 (e.g., time in minutes) of 7 minutes with a longest block LB9 of 7 minutes. The attribute A2 (e.g., the eyes-off-road state) can have a measure M10 of 0 minutes with a longest block LB10 of 0 minutes. The attribute A3 (e.g., the sleeping state) can have a measure M11 of 0 minutes with a longest block LB11 of 0 minutes. The attribute A4 (e.g., the manual state) can have a measure M12 of 12 minutes with a longest block LB12 of 12 minutes. The travel time (e.g., 19 minutes), travel distance (e.g., 14 miles), reclining, seat forward, and so on can also be displayed for the route R3.

The driver can then compare the travel times, the measures, and the longest blocks related to an attribute of interest (e.g., sleeping or hands-free time) for each of the routes R1, R2, and R3. For example, the driver may choose route R2 because its longest block LB5 of the asleep or hands-free state is 14 minutes out of 20 minutes (the measure M5). Although the total time M1 for route R1 is 22 minutes, which is longer than M5, the corresponding longest block LB1 is only 5 minutes, because route R1 may consist of several discontinuous segments of 5 minutes or less in which the driver can occupy that state. In contrast, the longest block LB5 for route R2 is 14 minutes because route R2 may include only two such segments, of 14 minutes and 6 minutes, in which the driver can occupy that state.
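
The comparison the driver performs here can also be automated by selecting the route whose longest continuous block for a preferred state is largest. The sketch below mirrors the example values given above for routes R1 and R2; the class and function names are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Characteristic:
    attribute: str
    measure: float        # total minutes in this state
    longest_block: float  # longest continuous stretch, minutes

@dataclass
class Route:
    name: str
    characteristics: List[Characteristic]

    def longest_block_of(self, attribute: str) -> float:
        return max((c.longest_block for c in self.characteristics
                    if c.attribute == attribute), default=0.0)

# Example values from the description (A1 = asleep or hands-free state).
r1 = Route("R1", [Characteristic("A1", 22, 5), Characteristic("A2", 20, 15),
                  Characteristic("A3", 10, 4), Characteristic("A4", 5, 4)])
r2 = Route("R2", [Characteristic("A1", 20, 14), Characteristic("A2", 7, 7),
                  Characteristic("A3", 7, 7), Characteristic("A4", 5, 5)])

def best_route_for(attribute: str, routes: List[Route]) -> Route:
    """Prefer the route with the largest uninterrupted block for the state."""
    return max(routes, key=lambda r: r.longest_block_of(attribute))

print(best_route_for("A1", [r1, r2]).name)  # "R2": 14-minute block beats R1's 5
```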

The characteristics of a route, e.g., the attributes A1-A4 of route R1, can be used to further divide the route into multiple route segments. A route segment is a continuous sequence of road segments along which the driver can occupy a certain set of driving states. Each route segment can be identified by markers or other identifiers based on a specific color scheme. For example, route R1 is shown with an overview of directions from A to B and includes a first route segment 301, a second route segment 303, and a third route segment 305. The first route segment 301 could include, for example, three segments of the same road or three different roads. The first route segment 301 can be marked in green, indicating that the driver 110 is allowed to occupy the sleeping or hands-free state. The second route segment 303 can be marked in red, indicating that the driver 110 can only occupy the manual state. The third route segment 305 can be marked in yellow, indicating that the driver 110 is allowed to occupy the hands-free state, the eyes-off-road state, and the sleeping state.

Route R2 is similarly illustrated with an overview of directions (on the map) from A to B and includes a fourth route segment 311, a fifth route segment 313, and a sixth route segment 315. The fourth route segment 311 can be marked in green, the fifth route segment 313 in red, and the sixth route segment 315 in yellow.

Route R3 is illustrated with an overview of directions (on the map) from A to B and includes a seventh route segment 321 and an eighth route segment 323. The seventh route segment 321 can be marked in red, indicating that the driver 110 can only occupy the manual state. The eighth route segment 323 can be marked in yellow, indicating that the driver 110 is allowed to occupy both the hands-free state and the eyes-off-road state.
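
The color scheme used to mark route segments can be expressed as a simple legend mapping marker colors to the driver states they signal. The mapping below is only an illustrative assumption drawn from the segment descriptions above; the names and exact assignments are not taken from the patent.

```python
# Marker colors and the driver states they signal in this example.
# The exact color-to-state assignment is an assumption for illustration.
COLOR_LEGEND = {
    "green":  {"sleeping", "hands-free"},
    "yellow": {"hands-free", "eyes-off-road"},
    "red":    {"manual"},
}

def states_for_color(color: str) -> set:
    """Look up which driver states a colored segment permits."""
    return COLOR_LEGEND.get(color, set())

def legend_text() -> str:
    """Render a simple legend for the second screen 300."""
    return "\n".join(f"{color}: {', '.join(sorted(states))}"
                     for color, states in COLOR_LEGEND.items())

print(states_for_color("red"))  # {'manual'}
print(legend_text())
```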

The identifying markers can also be placed next to the respective attributes A1-A4, providing visual guidance and allowing the driver 110 to compare different routes (e.g., route R1 versus route R3) at a glance.

“Some portions of a route can also be highlighted according to whether the autonomous vehicle has access to maps detailed enough to allow it to enter the autonomous mode. Detailed street maps may be stored in the database for some areas but not for others. The navigation apparatus 10 can highlight such sections of the map along the route with a semitransparent background color; for example, areas where autonomous operation is available can be marked in faint yellow, while non-autonomous areas can be marked in blue.

“FIG. 2C shows a third screen 400 of the navigation apparatus 10, according to an exemplary embodiment. The third screen 400 displays real-time directions and transition messages at various transition points while the autonomous vehicle operates or moves along the route. A transition point is a location at which the driver 110 can change from one state to another; it is typically located between two route segments, and there are generally as many transition points as route segments. A transition message tells the driver which state he or she should be in starting at the transition point. A transition message can include an appended message that displays the characteristics of the route between that transition point and the next one, and the details of the appended message can be ordered according to the attributes, such as A1-A4. The transition points, transition messages, and appended messages can be determined by the server 20 or by the controller of the navigation apparatus 10.

“In FIG. 2C, the third screen 400 illustrates a real-time route 401 (e.g., the route R1 chosen by the driver 110) starting at the start point A. At a transition point (e.g., the first or second transition point), a first transition message MSG1 indicating ‘Disengage Autonomous Mode’ or the ‘Manual’ state can be displayed. A first appended message MSG11 can be included with the first transition message MSG1 to provide additional details. For example, the first appended message MSG11 lists the attributes A4 and A1 with 10 minutes and 2 minutes, respectively, indicating that the driver 110 should occupy the manual state (represented by the attribute A4 in one embodiment) for 10 minutes and can occupy the hands-free state (represented by the attribute A1 in one embodiment) for 2 minutes. A second transition message MSG2, indicating that the driver 110 can enter the autonomous mode, can also be displayed; the second message MSG2 may indicate ‘Hands-free’, ‘Eyes-off-road’, or other information. A second appended message MSG21 can be added to the second message MSG2. The second appended message MSG21 lists the attributes A1, A2, A3, and A4 with measures of 10 minutes, 5 minutes, 5 minutes, and 1 minute, respectively; for example, it indicates that the driver 110 can occupy the hands-free state (represented by the attribute A1 in one embodiment) for 10 minutes until the next transition point.
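
A transition message and its appended attribute breakdown could be composed roughly as follows. The message strings and function names are hypothetical and merely echo the MSG1/MSG11 example above.

```python
from typing import Dict

def transition_message(expected_state: str) -> str:
    """Headline shown at a transition point (e.g. MSG1, MSG2)."""
    if expected_state == "manual":
        return "Disengage Autonomous Mode"
    return "Autonomous Mode"

def appended_message(attribute_minutes: Dict[str, int]) -> str:
    """Appended detail (e.g. MSG11, MSG21): per-attribute durations until
    the next transition point, ordered by attribute name."""
    return ", ".join(f"{attr}: {mins} min"
                     for attr, mins in sorted(attribute_minutes.items()))

# Mirrors the MSG1/MSG11 example: manual (A4) for 10 min, hands-free (A1) for 2 min.
print(transition_message("manual"))           # Disengage Autonomous Mode
print(appended_message({"A4": 10, "A1": 2}))  # A1: 2 min, A4: 10 min
```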

“In one embodiment, the transition message and the appended message may be displayed together. In another embodiment, the third screen 400 can be configured to display the appended message only when the transition message is activated, for example by tapping it. A person skilled in the art will recognize that the message configurations discussed in the various embodiments are non-limiting examples and can be modified within the scope of this disclosure.

The navigation apparatus 10 also allows live updating of the allowed driver states based on real-time traffic data, as well as suggested rerouting. Live updating can be performed for one or more routes on the second screen 300 or for the active route on the third screen 400. For example, sleeping might be allowed along a route when the driver starts his or her commute, but in the event of a road incident such as a truck jackknife, the allowed states can change. The navigation apparatus 10 can then indicate whether to switch to manual mode or to re-plan the route in order to avoid the truck jackknife. The sensors of the autonomous vehicle can confirm in real time that the allowed state is still correct.
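
Live updating of the allowed states could be handled by a small check that runs whenever a road event arrives: if the event removes the state the driver currently occupies, prompt a switch to manual mode or a reroute. A hedged sketch with hypothetical names:

```python
def on_road_event(allowed_states: set, blocked_states: set,
                  current_driver_state: str) -> str:
    """Decide what the navigation apparatus 10 should prompt when a road
    event (e.g. a truck jackknife) removes some allowed driver states."""
    updated = allowed_states - blocked_states
    if current_driver_state not in updated:
        # The current state is no longer permitted on the upcoming segment.
        return "Prompt: switch to manual mode or accept re-planned route"
    return "No action: current state still permitted"

print(on_road_event({"sleeping", "hands-free", "manual"},
                    blocked_states={"sleeping", "hands-free"},
                    current_driver_state="sleeping"))
```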

The first screen 200, the second screen 300, and the third screen 400 may all be presented on the same display, or one or more of the screens 200, 300, and 400 can be shown sequentially on the same display, on different displays, or split across multiple displays.

“FIG. 3 is a flowchart of a navigation process performed by the navigation apparatus 10, according to an exemplary embodiment. Once at least one route is determined from the starting point A to the destination, the server 20 sends a signal to the navigation apparatus 10. In step S301, the navigation apparatus 10 receives the routes, e.g., the routes R1-R4, and the characteristics of those routes (e.g., the attributes A1-A4, the measures M1-M12, and the longest blocks LB1-LB12).

“In step S303, the navigation apparatus 10 displays the routes (e.g., the routes R1-R4) on the first screen 200, together with the characteristics of each route and the transition points. The user can then select one or more routes (e.g., routes R1, R2, and R3) to compare their characteristics. The selected routes and their characteristics (e.g., the attributes A1-A4, the measures M1-M12, and the longest blocks LB1-LB12) are displayed on the second screen 300, together with the overview of directions with highlighting and identifiers, as shown in FIG. 2B. After a route such as route R1 is activated, the third screen 400 is displayed, on which real-time instructions and directions are provided to the user. Providing real-time instructions and directions can include accessing and processing (via the controller) sensor data from, for example, a GPS sensor, a motion sensor, or an accelerometer.

“In step S305, a transition message is generated for a change from a first expected state to a second expected state. For example, the first transition message MSG1 can be generated with the first expected state ‘Disengage Autonomous Mode’ (or manual), and the second transition message MSG2 can be generated with the second expected state ‘Autonomous Mode’, as discussed with reference to FIG. 2C. The expected states are determined based on the characteristics of the route segments (discussed with reference to FIG. 2B). For example, an expected state can be hands-free for a straight interstate highway with little traffic, while an expected state can be manual for a winding road or a road with heavy traffic. An appended message can also be generated in step S305, as discussed with reference to FIG. 2C.”
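
Step S305 can be pictured as walking the ordered route segments and emitting a transition message wherever the expected driver state changes. A minimal sketch; the segment representation and the message wording are assumptions.

```python
from typing import List, Tuple

# Each segment: (expected driver state, duration in minutes).
Segment = Tuple[str, int]

def build_transition_messages(segments: List[Segment]) -> List[str]:
    """Emit one message per point where the expected state changes."""
    messages = []
    previous_state = None
    for state, minutes in segments:
        if state != previous_state:
            messages.append(f"Transition: expect '{state}' state "
                            f"for the next {minutes} min")
            previous_state = state
    return messages

route_r1 = [("hands-free", 10), ("manual", 5), ("hands-free", 7)]
for msg in build_transition_messages(route_r1):
    print(msg)
```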

“In step S307, the transition message is displayed on the third screen 400 while the vehicle is driving along a segment of the route. For example, when the autonomous vehicle reaches the transition point C, the second transition message MSG2 is displayed on the third screen 400.

“Further, in step S309, the autonomous vehicle can monitor the current driver state using the sensors discussed with reference to FIG. 1. Sensors for monitoring the driver state can include a position sensor that detects the driver's hand position on the steering wheel 105, a camera that detects the eyes-off-road state, a sensor that detects the reclining position of the seat, and so on.

“In step S311, the navigation apparatus 10 checks whether the driver's current state matches the expected state (e.g., the manual state along the third route segment 305) at the current position along the route (e.g., the route R1). If it does, the process returns to step S309 and continues monitoring the driver's state. If the driver's current state differs from the expected state, an alert message may be generated in step S313. Under certain conditions, the alert message is not generated; for example, if the driver 110 keeps hold of the steering wheel 105 while the expected state is hands-free, no alert is needed. An alert message can be displayed on the third screen 400, issued as an audio instruction, or delivered as vibration in the seat 111, the steering wheel 105, and so on.
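
Steps S309-S313 amount to comparing the sensed driver state against the expected state for the current segment, with an exception for behavior that is at least as attentive as required (such as holding the wheel during a hands-free segment). A sketch with hypothetical names; the "safer than" ordering is an assumption based on the example above.

```python
# States that are at least as attentive as the expected one; holding the
# wheel during a hands-free segment should not trigger an alert (assumption).
SAFER_THAN = {
    "hands-free": {"manual"},
    "eyes-off-road": {"manual", "hands-free"},
    "sleeping": {"manual", "hands-free", "eyes-off-road"},
}

def check_driver_state(current: str, expected: str) -> str:
    """Return the action for steps S311/S313."""
    if current == expected:
        return "continue monitoring"   # back to step S309
    if current in SAFER_THAN.get(expected, set()):
        return "continue monitoring"   # more attentive than required
    return "alert driver"              # step S313: display / audio / vibration

print(check_driver_state("manual", "hands-free"))  # continue monitoring
print(check_driver_state("sleeping", "manual"))    # alert driver
```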

“FIG. 4 is a flowchart of a second display process for the second screen 300, performed by the navigation apparatus 10, according to an exemplary embodiment. The second display process begins when the user selects one or several routes from the first screen 200. In step S401, the selected routes are displayed with highlighting and identifiers that indicate the expected driver state along each route, as shown for the segments 301, 303, and 305 of route R1 in FIG. 2B.”

“In step S403, characteristics such as the travel time, the hands-free state, the eyes-off-road state, the sleeping state, the manual state, and the travel distance are displayed below the overview of each route R1, R2, and R3.

“In step S405, road events can be identified and updated via the second screen 300. Road events can include accidents, repairs, bad weather conditions, and so on. The road events can be identified by the server 20 or the controller of the navigation apparatus 10 based on data from the autonomous vehicle's sensors, and the resulting information is transmitted to the second screen 300.

“In step S407, the second screen 300 can be updated with prompts regarding unprotected left turn information. The unprotected left turn information can be provided by the server 20 or the controller of the navigation apparatus 10 based on data from the autonomous vehicle's sensors, so that the driver can be alerted when an upcoming left turn is unprotected.

“FIG. 5 is a flowchart of a third display process for the third screen 400, performed by the navigation apparatus 10, according to an exemplary embodiment of the present disclosure. The third display process begins when the user selects a route from the first screen 200 or the second screen 300. In step S501, the navigation apparatus 10 begins tracking the current position of the autonomous vehicle; the current position can also be marked on the third screen 400 and can be determined based on the GPS installed in the vehicle. In step S503, the current position is compared with the transition points along the route chosen by the user. The transition points are part of the characteristics that the server 20 determines for the route, as discussed with reference to step S305 in FIG. 3. In step S505, the comparison determines whether the current position is within close proximity of a transition point (e.g., the transition point C) along the route (e.g., the real-time route 411) chosen by the user. If the current position is not within close proximity of the transition point, the third display process continues from step S501.

“Conversely, if the current position is within close proximity of the transition point (e.g., the transition point C), a transition message (e.g., the second transition message MSG2) is determined in step S507 based on the characteristics of the route at that transition point.”

In step S509, the transition message and the expected state that the driver 110 should occupy are generated based on the characteristics determined by the server 20 and sent to the third screen 400. The transition message, e.g., MSG2, can be generated as a visual instruction, as discussed with reference to FIG. 2C, and can also be delivered in audio format. An appended message can likewise be generated and sent to the third screen 400, as shown in FIG. 2C.”
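
The loop of FIG. 5 can be summarized as: track the current position, test proximity to the next transition point, and emit the corresponding message when close enough. The sketch below is illustrative only; the distance threshold, coordinate handling, and helper names are assumptions.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]  # (latitude, longitude)

def close_to(a: Point, b: Point, threshold_m: float = 200.0) -> bool:
    """Rough proximity test using an equirectangular approximation."""
    lat = math.radians((a[0] + b[0]) / 2)
    dx = math.radians(b[1] - a[1]) * math.cos(lat) * 6_371_000
    dy = math.radians(b[0] - a[0]) * 6_371_000
    return math.hypot(dx, dy) <= threshold_m

def next_message(position: Point,
                 transitions: List[Tuple[Point, str]]) -> str:
    """Steps S503-S509: return the transition message for the first
    transition point the vehicle is approaching, or keep tracking."""
    for point, message in transitions:
        if close_to(position, point):
            return message
    return "keep tracking position"  # back to step S501

transitions = [((35.0001, 139.0001), "Autonomous Mode: hands-free for 10 min")]
print(next_message((35.0002, 139.0002), transitions))
```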

“Because live updating is possible while driving, the permitted driver states can also be updated and the third screen 400 can prompt for rerouting. In the case of a truck jackknife, for example, the permissible state can be changed from sleeping to manual, and the driver can be alerted via audio, visual cues, or vibrations in the seat.

“FIG. 6 illustrates a user device 600 that is an exemplary implementation of the navigation apparatus 10. The user device 600 could be a smartphone in certain embodiments; however, the skilled artisan will recognize that the features described herein can be adapted to other devices, such as a tablet, a computer, a server, or an e-reader. The exemplary user device 600 includes a controller 610 and a wireless communication processing circuitry 602 connected to an antenna 601. A voice processing circuitry 603 is connected to a speaker 604 and a microphone 605.

“The controller 610 executes a program to perform the functions and processes described with respect to FIGS. 2A, 2B, 2C, 3, 4, and 5. These functions may be performed by the controller 610 using instructions stored in a memory 650. For example, the processes illustrated in FIGS. 3-5 can be stored in the memory 650 and executed based on user inputs received via the first screen 200 and the second screen 300. Alternatively, the functions can be executed on an external device such as the server 20, accessed via a network, or from a non-transitory computer-readable medium.

“The memory 650 can include Read Only Memory (ROM), Random Access Memory (RAM), or a memory array that includes a combination of volatile and non-volatile memory units. The memory 650 may be used by the controller 610 as working memory while the algorithms and processes of the present disclosure are being executed, and can also be used to store long-term data, such as image data and information related thereto.

“The user device 600 includes a control line CL and a data line DL as internal communication bus lines. Control data to/from the controller 610 may be transmitted through the control line CL, while the data line DL may be used to transmit voice data, display data, and the like.

“The antenna 601 transmits/receives radio waves to/from base stations to perform radio-based communication, such as cellular telephone communication. The wireless communication processing circuitry 602 controls the communication performed between the user device 600 and other external devices via the antenna 601. For example, the wireless communication processing circuitry 602 may control communication with the base stations used for cellular phone communication.

The speaker 604 emits an audio signal corresponding to audio data supplied by the voice processing circuitry 603. The microphone 605 detects surrounding audio and converts it into an audio signal, which is then output to the voice processing circuitry 603 for further processing. The voice processing circuitry 603 can demodulate and/or decode audio data read from the memory 650, or audio data received via the wireless communication processing circuitry 602 and/or the short-distance wireless communication processing circuitry 607. Additionally, the voice processing circuitry 603 can decode audio signals obtained by the microphone 605.

The exemplary user device 600 may also include a display 620, a touch panel 630, an operation key 640, and a short-distance communication processing circuitry 607 connected to an antenna 606. The display 620 may be a Liquid Crystal Display (LCD), an organic electroluminescence display panel, or another display screen technology. In addition to displaying still and moving image data, the display 620 may display operational inputs, such as the route buttons 201-204, used for controlling the user device 600. The display 620 may additionally display a GUI with multiple screens, as illustrated in FIGS. 2A-2C, that allows a user to control various aspects of the user device 600 and/or other devices. Furthermore, the display 620 may display characters and images (e.g., the overviews of FIGS. 2B and 2C) received by the user device 600, stored in the memory 650, or accessed from an external device on a network, such as a camera or a router. For example, the user device 600 may access the Internet and display text and/or images transmitted from a Web server.

The touch panel 630 may include a physical touch panel display screen and a touch panel driver. The touch panel 630 may include one or more touch sensors for detecting an input operation on an operation surface of the touch panel display screen, and can also detect a touch shape and a touch area. As used herein, the phrase "touch operation" refers to an input operation performed by touching the operation surface of the touch panel display with an instruction object, such as a finger, a thumb, or a stylus-type instrument. If a stylus or the like is used in a touch operation, the stylus may include a conductive material at least at its tip so that the sensors of the touch panel 630 can detect when the stylus approaches or touches the operation surface of the touch panel display (similarly to the case in which a finger is used).

The touch panel 630 can be disposed adjacent to the display 620 (e.g., laminated) or formed integrally with it. For simplicity, the present disclosure assumes that the touch panel 630 is formed integrally with the display 620; therefore, examples discussed herein may describe touch operations as being performed on the surface of the display 620 rather than on the touch panel 630. The skilled artisan will appreciate that this is not limiting.

For simplicity, the present disclosure assumes that the touch panel 630 is a capacitance-type touch panel; however, it should be appreciated that the present disclosure can easily be applied to other types of touch panel technology (e.g., resistance-type touch panels) with alternative structures. In certain aspects, the touch panel 630 may include transparent electrode touch sensors arranged in an X-Y direction on a transparent sensor glass.

The touch panel driver may be included in the touch panel 630 to control processing related to the touch panel 630, such as scanning control. For example, the touch panel driver may scan each sensor in an electrostatic-capacitance transparent electrode pattern in the X and Y directions and detect the electrostatic capacitance value of each sensor to determine when a touch operation is performed. The touch panel driver may output a coordinate and a corresponding electrostatic capacitance value for each sensor, or a sensor identifier that can be mapped to a coordinate on the touch panel display screen. The touch panel driver and the touch sensors may also detect when an instruction object, such as a finger, is within a predetermined distance of the operation surface of the touch panel display screen; that is, the instruction object does not necessarily have to touch the screen directly for the touch sensors to detect it and for the processing described herein to be performed. In certain embodiments, the touch panel 630 may detect the position of a user's fingers around an edge of the display panel (e.g., gripping a protective case that surrounds the display/touch panel). The touch panel driver may transmit signals, e.g., in response to the detection of a touch operation, in response to a query from another element, based on timed data exchange, and so on.

The protective case may surround the touch panel 630 and the display 620 and may contain other elements of the user device 600. In certain embodiments, the sensors of the touch panel 630 may detect the position of the user's fingers on the protective case (but not directly on the display 620), and the controller 610 can perform the display control processing described herein based on the detected position of the fingers gripping the case. For example, an interface element may be moved (e.g., to a location closer to one or more of the fingers) depending on the finger position.

“Further, in some embodiments, the controller 610 may be configured to determine which hand is holding the user device 600 based on the detected finger positions. For example, the sensors of the touch panel 630 might detect multiple fingers on the left side of the user device 600 (e.g., on an edge of the display 620 or on the protective case) and only one finger on the right side. In this case, the controller 610 may determine that the user is holding the device in his or her right hand, because the detected grip pattern corresponds to the pattern expected when the device is held only in the right hand.
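
The grip-detection heuristic described here could look roughly like the following; the representation of the sensor output (finger-contact counts per edge) and the function name are assumptions for illustration.

```python
def holding_hand(left_edge_contacts: int, right_edge_contacts: int) -> str:
    """Guess which hand grips the user device 600 from edge contacts.
    Several fingers wrapping one edge and a single thumb on the other
    matches the expected one-handed grip pattern."""
    if left_edge_contacts >= 2 and right_edge_contacts <= 1:
        return "right hand"   # fingers wrap around the left edge
    if right_edge_contacts >= 2 and left_edge_contacts <= 1:
        return "left hand"
    return "unknown"

print(holding_hand(left_edge_contacts=3, right_edge_contacts=1))  # right hand
```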

“The operation key 640 may include one or more buttons (e.g., the route buttons 201-204) or similar external control elements (e.g., the button inputs 103 of FIG. 1), which may generate an operation signal based on a detected user input. In addition to the outputs from the touch panel 630, these operation signals may be supplied to the controller 610 for performing related processing and control. In certain aspects of the present disclosure, the processing and/or functions associated with external buttons and the like may be performed by the controller 610 in response to an input operation on the touch panel display screen rather than on the external buttons. In this way, external buttons on the user device 600 may be eliminated in favor of touch operations, thereby improving water-tightness.

“The antenna 606 may transmit/receive electromagnetic waves to/from external apparatuses, and the short-distance wireless communication processing circuitry 607 may control the wireless communication performed between those external apparatuses and the user device 600. Bluetooth, IEEE 802.11, and near-field communication (NFC) are non-limiting examples of wireless communication protocols that may be used for inter-device communication via the short-distance wireless communication processing circuitry 607.

“The user device 600 may include a motion sensor 608 that detects features of motion (i.e., one or more movements) of the user device 600. The motion sensor 608 may include an accelerometer to detect acceleration, a gyroscope to detect angular velocity, a geomagnetic sensor to detect direction, a geolocation sensor to detect location, and so on. The motion sensor 608 can generate a detection signal that includes data representing the detected motion. For example, the motion sensor 608 can detect a number of distinct movements within a motion (e.g., from the start to the stop of a series of movements, or within a predetermined time interval), a number of physical shocks to the user device 600 (e.g., jarring or hitting), a speed and/or acceleration of the motion (instantaneous and/or temporal), and other motion features. The detected motion features can be included in the generated detection signal and transmitted to the controller 610, which can perform further processing based on the data in the detection signal. The motion sensor 608 can work in conjunction with the Global Positioning System (GPS) circuitry 660.

“The user device 600 may include a camera circuitry 609, which includes a lens and a shutter for capturing photographs of the surroundings of the user device 600. In an embodiment, the camera circuitry 609 captures the surroundings on the side of the user device 600 opposite the user. The captured photographs can be displayed on the display panel 620 and saved by a memory circuitry, which may reside within the camera circuitry 609 or be part of the memory 650. The camera circuitry 609 can be a separate feature attached to the user device 600 or a built-in camera feature. Further, the camera circuitry 609 can be used to detect features of motion, i.e., one or more movements, of the user device 600.

A software application running on the user device 600 can request data processing from the server 20 via a wireless network. The server 20 includes a storage controller that manages a database on a hard disk and a query manager that executes SQL (structured query language) statements against the data on the disk or in the database. The query manager implements various processing functions, such as query syntax analysis, optimization, and execution plan generation, as well as a simple network communication function that sends signals to and receives signals from the controller.

“In the above description, any processes, descriptions, or blocks in the flowcharts should be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of the exemplary embodiments, in which functions may be executed out of the order shown or discussed, depending upon the functionality involved, as would be understood by those skilled in the art.

While certain embodiments have been described, these embodiments are presented by way of example only and are not intended to limit the scope of the disclosure. Indeed, the novel methods, apparatuses, and systems described herein can be embodied in a variety of other forms; furthermore, various omissions, substitutions, and changes in the form of the methods, apparatuses, and systems described herein may be made without departing from the spirit of the disclosure. The accompanying claims and their equivalents are intended to cover any such forms or modifications as would fall within the scope and spirit of the disclosure. For example, this technology may be structured for cloud computing, whereby a single function is shared and processed collaboratively among a plurality of apparatuses via a network.
