Artificial Intelligence – Xue Mei, Naoki Nagasaka, Kuan-Hui Lee, Danil V. Prokhorov, Toyota Motor Corp

Abstract for “Systems and methods to identify rear signals using machine learning”

“Systems, methods, and other embodiments described in this document relate to identifying rear indicator signals of a nearby vehicle. One embodiment of a method includes, in response to detecting the nearby vehicle, capturing signal images of a rear section of the nearby vehicle. The method also includes computing a brake state for brake lights of the nearby vehicle that indicates whether the brake lights are currently active by analyzing the signal images according to a brake classifier. The method further includes computing a turn state for rear turn signals of the nearby vehicle that indicates which of the turn signals are currently active by analyzing regions of interest of the signal images according to a turn classifier. The brake classifier is comprised of a convolutional neural network (CNN) and a long short-term memory recurrent neural network (LSTM-RNN). The method also includes providing electronic outputs identifying the brake state and the turn state.

Background for “Systems and methods to identify rear signals using machine learning”

Autonomous vehicles, as well as various safety and advanced driver-assistance systems, depend on sensors to analyze data and perform specific functions such as navigating the environment. The sensors acquire data about the surroundings, which the vehicle interprets to determine how to proceed or take other actions. In other words, a vehicle system interprets the sensor data to perceive the environment, including the actions, locations, and trajectories of objects such as other vehicles. In addition, the ability to identify the rear indicator signals of nearby vehicles can help with anticipating the trajectories of those vehicles and, more generally, the dynamic aspects of the environment.

However, identifying the rear indicator signals of a vehicle can present a number of difficulties. For example, accurately determining the locations of the signals is itself a challenge: because different vehicles may have different rear light configurations, it can be difficult to determine both where the signals are and what state they are in. Additional aspects, such as vehicle motion, blinking patterns, brake light patterns, and varying lighting conditions, add to the difficulty. Consequently, while identifying rear indicator signals is helpful when operating the noted systems, many factors can lead to inaccurate results.

“Examples of systems and methods for identifying rear indicator signals of a nearby vehicle are described in this document. A signal identification system, for example, monitors nearby vehicles and uses a series of camera images to identify rear indicator signals. The signal identification system can be embedded in a host vehicle so that it can monitor nearby vehicles. Upon detection of a nearby vehicle, the camera(s) capture a series of images (e.g., 16 images) of the rear section of that vehicle, which can be used to identify the current state of the turn signals and brake signals. A brake classifier, in one embodiment, accepts the raw images as an electronic input and analyzes them using a combination of deep learning routines. The brake classifier first identifies spatial features in the images using a convolutional neural network (CNN), which processes the images and outputs the spatial features. The spatial features are then fed into a long short-term memory recurrent neural network (LSTM-RNN), which iteratively processes the images to determine whether the brake lights are activated.

“In addition, the signal identification system uses a turn classifier to determine the turn state. The turn classifier functions in a similar way as the brake classifier; however, the system transforms the images before they are fed into the turn classifier to highlight areas of interest. In one embodiment, the images are processed to highlight specific regions (i.e., the turn signals) and thereby improve identification. For example, a motion compensation algorithm can be applied to the images to create flow images that account for relative motion between the vehicles. The flow images are then compared to identify differences (e.g., areas with changing pixel intensities). Regions of interest extracted from the resulting difference images are then provided to the turn classifier.

“The turn classifier processes the regions of interest using a convolutional neural network to identify spatial features within them. The regions are then processed by a separate long short-term memory recurrent neural network (LSTM-RNN), which iteratively processes them to identify temporal information corresponding to the dynamic flashing state of the turn signals. Using this structure, the signal identification system identifies the rear indicator signals, overcoming the above difficulties and improving identification by accounting for temporal variations and changes caused by motion and variable luminance.

“In one embodiment, a signal identification system for identifying rear indicators of a nearby vehicle is disclosed. The signal identification system includes one or more processors and a memory communicably coupled to the one or more processors. The memory stores a monitoring module including instructions that, when executed by the one or more processors, cause the one or more processors to capture signal images of a rear section of the nearby vehicle in response to detecting the nearby vehicle. The memory also stores an indicator module including instructions that, when executed by the one or more processors, cause the one or more processors to: (i) compute a brake state for brake lights of the nearby vehicle that indicates whether the brake lights are currently active by analyzing the signal images according to a brake classifier; and (ii) compute a turn state for rear turn signals of the nearby vehicle that indicates which of the turn signals are currently active by analyzing regions of interest of the signal images according to a turn classifier. The brake classifier is comprised of a convolutional neural network (CNN) and a long short-term memory recurrent neural network (LSTM-RNN). The indicator module further includes instructions to provide electronic outputs identifying the brake state and the turn state.

“In one embodiment, a non-transitory computer-readable medium for identifying rear indicators of a nearby vehicle is disclosed. The medium stores instructions that, when executed by one or more processors, cause the one or more processors to perform the disclosed functions. The instructions include instructions to compute a brake state for brake lights of the nearby vehicle that indicates whether the brake lights are currently active by analyzing signal images according to a brake classifier, where the signal images are captured of a rear section of the nearby vehicle. The instructions also include instructions to compute a turn state for rear turn signals of the nearby vehicle that indicates which of the turn signals are currently active by analyzing regions of interest of the signal images according to a turn classifier. The brake classifier is comprised of a convolutional neural network (CNN) and a long short-term memory recurrent neural network (LSTM-RNN). The instructions further include instructions to provide electronic outputs identifying the brake state and the turn state.

“In one embodiment, a method for identifying rear indicators of a nearby vehicle is disclosed. In response to detecting the nearby vehicle, the method includes capturing signal images of a rear section of the nearby vehicle. The method also includes computing a brake state for brake lights of the nearby vehicle that indicates whether the brake lights are currently active by analyzing the signal images according to a brake classifier. The method further includes computing a turn state for rear turn signals of the nearby vehicle that indicates which of the turn signals are currently active by analyzing regions of interest of the signal images according to a turn classifier. The brake classifier is comprised of a convolutional neural network (CNN) and a long short-term memory recurrent neural network (LSTM-RNN). The method also includes providing electronic outputs identifying the brake state and the turn state.

“Systems, methods, and other embodiments for identifying rear signal indicators of a nearby vehicle are disclosed in this document. As previously noted, identifying turn signals and brake signals of vehicles can be difficult. Many factors complicate the identification, including variations in lighting conditions, relative motion, color thresholds, and differences in vehicle configurations. In some cases, manually defined feature sets are inaccurate and can be misinterpreted when they encounter the noted variations in luminance or other circumstances. Accordingly, accurately identifying rear indicator signals is not a simple task.

In one embodiment, the signal identification system monitors nearby vehicles and uses a series of camera images to identify rear indicator signals. The signal identification system is embedded in a host vehicle and monitors for nearby vehicles using one or more cameras integrated with the host vehicle. Upon detecting a nearby vehicle, the cameras capture a sequence of images (e.g., 16 images) of the rear section of the nearby vehicle, which can be used to determine the current state of the turn signals and brake signals. The sequence of images is used so that temporal information about the indicators can be captured and analyzed along with the current indicator state. Because the turn signals blink and the brake lights can change state, the images are taken over a period of time to capture the dynamic flashing state of the turn signals.

“In either case, the images are analyzed to independently determine the turn state and the brake state. In one embodiment, a brake classifier accepts the raw images as an electronic input and analyzes the images using deep learning routines. The brake classifier first determines spatial features in the images using a convolutional neural network and outputs these spatial features. The spatial features are then fed into a long short-term memory recurrent neural network (LSTM-RNN), which iteratively processes the images with the defined spatial features to determine whether the brake lights are activated.

A turn classifier, which functions in a similar way as the brake classifier, determines the turn state. However, the images are first transformed to highlight areas of interest before being fed into the turn classifier. In one embodiment, the images are transformed to highlight specific regions (e.g., the turn signals) and thereby improve identification. For example, a motion compensation algorithm can be applied to the images to create flow images that account for relative motion between the vehicles. The flow images are then compared to identify differences (e.g., areas with changing pixel intensities). Regions of interest extracted from the resulting difference images are then provided to the turn classifier.

“The turn classifier processes the regions of interest using a convolutional neural network to further identify spatial features within them. The regions are then processed by a separate long short-term memory recurrent neural network (LSTM-RNN), which iteratively processes them to identify temporal information corresponding to the dynamic flashing state of the turn signals. Using this structure, the signal identification system identifies the rear indicator signals, overcoming the noted difficulties and improving identification by accounting for temporal variations and changes caused by motion and variable luminance.

Referring to FIG. 1, an example of a vehicle 100 is illustrated. As used herein, a "vehicle" is any form of motorized transport. In one or more embodiments, the vehicle 100 is an automobile. Although the arrangements are described herein with respect to automobiles, it will be understood that other embodiments are possible. The vehicle 100 may be any motorized transport that, for example, benefits from identifying the turn state and brake state of nearby vehicles as discussed herein.

“The vehicle 100 also includes various elements. In various embodiments, it may not be necessary for the vehicle 100 to have all of the elements shown in FIG. 1. The vehicle 100 can have any combination of the elements shown in FIG. 1, can have additional elements beyond those shown, or can be implemented without one or more of the elements shown in FIG. 1. While the elements are shown as being located within the vehicle 100 in FIG. 1, it will be understood that one or more of these elements can be located external to the vehicle 100. Further, the elements may be physically separated by large distances.

Some of the possible elements of the vehicle 100 are shown in FIG. 1 and will be described along with the subsequent figures. However, for the sake of brevity, a description of many of the elements in FIG. 1 will be provided after the discussion of FIGS. 2-8. It will also be appreciated that, for simplicity and clarity, reference numerals have been repeated among the figures where appropriate to indicate corresponding or analogous elements. In addition, the discussion outlines numerous specific details to provide a thorough understanding of the embodiments described herein; those skilled in the art, however, will understand that the embodiments may be practiced using various combinations of these elements.

“In either case, the vehicle 100 includes a signal identification system 170 that is implemented to detect nearby vehicles and to identify the rear indicator signals of those vehicles. The noted functions and methods will become more apparent with a further discussion of the figures.

“With reference to FIG. 2, one embodiment of the signal identification system 170 of FIG. 1 is further illustrated. The signal identification system 170 is shown as including the processor 110 from the vehicle 100 of FIG. 1. Accordingly, the processor 110 may be a part of the signal identification system 170, the signal identification system 170 may include a separate processor from the processor 110 of the vehicle 100, or the signal identification system 170 may access the processor 110 through a data bus or another communication path. In one embodiment, the signal identification system 170 includes a memory 210 that stores a monitoring module 220 and an indicator module 230. The memory 210 is a random-access memory (RAM), read-only memory (ROM), flash memory, or other suitable memory for storing the modules 220 and 230. The modules 220 and 230 are, for example, computer-readable instructions that, when executed by the processor 110, cause the processor 110 to perform the various functions disclosed herein.

Accordingly, the monitoring module 220 generally includes instructions that function to control the processor 110 to acquire sensor data from, for instance, one or more sensors of the sensor system 120. In one embodiment, the sensor data includes images from a camera 126 that has a field of view directed in front of the vehicle 100 where the rear of a nearby vehicle is likely to be observed. In further aspects, the monitoring module 220 controls multiple cameras 126 of the sensor system 120 located at different points on the vehicle 100 in order to acquire additional views of the surrounding environment. In addition to the camera 126, the monitoring module 220 can also control a LIDAR sensor 124 and a radar sensor 123, which can be used to determine whether a nearby vehicle is present.

“In either case, the monitoring module 220 monitors an electronic stream of sensor data from the camera 126 and/or other sensors for the presence of a nearby vehicle. That is, the monitoring module 220 collects the sensor data and analyzes it using vehicle recognition techniques to determine whether a nearby vehicle is present. In various embodiments, the monitoring module 220 can use image recognition, LIDAR object detection, radar recognition, or a combination of these techniques to identify the presence of a nearby vehicle. It should be understood that the different forms of vehicle/object recognition have different properties and can be applied in different circumstances according to the particular implementation. For example, different distance thresholds may determine when objects are identified using the different recognition techniques. In one embodiment, image recognition is used to identify vehicles within a specified distance (e.g., 100 feet) so that additional feature identification, such as identification of the turn and brake lights, can be carried out once the presence of a nearby vehicle is determined.

“Once the nearby vehicle is detected, or as part of detecting the nearby vehicle, the monitoring module 220 captures signal images of the nearby vehicle. The monitoring module 220 generally captures the signal images over a defined time period. In one embodiment, the defined time period is 0.5 seconds; however, it may be shorter or longer depending on aspects of the particular implementation. For example, a lower bound for the defined time period can be chosen to capture at least one cycle of a turn signal, while an upper bound can be set by the amount of available memory. A frame rate of the camera 126 and/or other factors may also affect the number of images captured over the time period. In general, the monitoring module 220 acquires sixteen images in the series of signal images over a time period of 0.5 seconds.

“Additionally, the signal images depict a rear section (i.e., an aft portion) of the nearby vehicle such that the left and right turn signals as well as the brake lights are visible in the signal images. While the primary implementation described herein concerns the rear section of a vehicle and its rear indicator signals, further aspects of the disclosed systems can be applied to identify other indicators, such as front turn signals, side turn signals (e.g., side-view mirror turn signals), and so on. In the event that the monitoring module 220 captures signal images with a partial view of the nearby vehicle, such as images that cover only part of the rear section or that occlude some of the rear indicator signals, the monitoring module 220 either controls the camera 126 to capture replacement images or proceeds with processing the occluded images, in which case the indicator module 230 may provide at least a partial identification.

“Generally, the rear indicator signals of the nearby vehicle include brake lights that indicate whether the vehicle is braking and left/right turn signals that indicate whether an operator of the nearby vehicle has activated a left turn indicator, a right turn indicator, or hazard lights. Referring to FIG. 3, examples of different braking and turn states are illustrated for example vehicles. The signal identification system 170 can identify eight possible states that combine the braking and turn states, shown as examples (a)-(h) in FIG. 3. The different states are identified by a three-letter label. The first letter indicates whether the brake lights are currently active, with 'O' indicating off and 'B' indicating that the brake lights are on. The second letter corresponds to the left turn signal, with 'L' indicating that the left turn signal is active and 'O' indicating that it is inactive. The third letter corresponds to the right turn signal, with 'R' indicating that the right turn signal is active and 'O' indicating that it is inactive.

“Accordingly, example (a), labeled 'OOO', indicates that all of the indicator signals are inactive. Example (b), labeled 'BOO', indicates that the brake lights are active while both turn signals are inactive. Example (c), labeled 'OLO', indicates that only the left turn signal is active. Example (d), labeled 'BLO', indicates that both the brake lights and the left turn signal are active. Example (e), labeled 'OOR', indicates that only the right turn signal is active. Example (f), labeled 'BOR', indicates that both the brake lights and the right turn signal are active. Example (g), labeled 'OLR', indicates that both turn signals are active and thus the hazards are on. Example (h), labeled 'BLR', indicates that the brake lights are active along with both turn signals (i.e., the hazards). In various aspects, the signal identification system 170 uses the labels of FIG. 3 to encode the output indicating which rear indicator signals are active.
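
To make the encoding concrete, the following is a minimal sketch (not the patent's implementation) of how the three-letter labels of FIG. 3 could be composed from the individual indicator states; the function name and structure are illustrative assumptions.

```python
def encode_rear_signal_state(brake_on: bool, left_on: bool, right_on: bool) -> str:
    """Compose the three-letter label of FIG. 3 (e.g., 'BLR', 'OOO')."""
    return (
        ("B" if brake_on else "O")    # first letter: brake lights
        + ("L" if left_on else "O")   # second letter: left turn signal
        + ("R" if right_on else "O")  # third letter: right turn signal
    )

# Example: brakes applied while the hazards are flashing -> 'BLR'
print(encode_rear_signal_state(True, True, True))
```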

“With further reference to FIG. 2, in response to the monitoring module 220 detecting the nearby vehicle and capturing the signal images, the indicator module 230 obtains the signal images from the monitoring module 220. That is, the indicator module 230 accepts the signal images from the monitoring module 220 as an electronic input and, in general, provides an electronic output that identifies the turn state and the brake state.

“As a further matter, before discussing additional aspects of the indicator module 230, it should be noted that, in one embodiment, the signal identification system 170 includes a database 240. The database 240 is, for example, an electronic data structure stored in the memory 210 or another electronic data store, and is configured with routines that can be executed by the processor 110 for analyzing stored data, providing stored data, organizing stored data, and so on. Thus, in one embodiment, the database 240 stores data used by the modules 220 and 230 in executing various functions. In one embodiment, the database 240 includes a brake classifier 250 and a turn classifier 260, and may also include the signal images and/or other information used by the modules 220 and 230.

“The classifiers 250 and 260 are, in general, computational models that characterize aspects of images of vehicles and rear signal indicators. The indicator module 230 uses the brake classifier 250 and the turn classifier 260 to analyze the signal images and thereby compute the brake state and the turn state. Moreover, while the classifiers 250 and 260 are shown as being stored in the database 240, it should be understood that in various implementations components of the classifiers 250 and 260 can be integrated with the indicator module 230. In either case, the classifiers 250 and 260 are complex functional components comprised of functional blocks and data that operate together to indicate probabilities of the different brake and turn states.

“Additionally, the brake classifier 250 and the turn classifier 260 both use a similar combination of networks to process the signal images. That is, the brake classifier 250 and the turn classifier 260 each include a convolutional neural network (CNN) and a long short-term memory recurrent neural network (LSTM-RNN). The CNN identifies and extracts spatial features through an iterative process of convolving the signal images, pooling results of the convolving, and then repeating the process on the pooled data from the previous iteration. The indicator module 230 implements the CNN in this way until, after several iterations (e.g., five iterations), a final fully connected layer outputs a feature map or other characterization of the spatial features of the images. The indicator module 230 then uses the spatial information from the CNN as an electronic input to the LSTM-RNN. The LSTM-RNN is a type of recurrent neural network that determines temporal relationships and other temporal information about the spatial features identified by the CNN. The LSTM-RNN also includes aspects that adjust for changes between images in order to identify the dynamic flashing state of the turn signals and to account for variations in luminance. The indicator module 230 uses the LSTM-RNN to predict the brake and turn states, and from this prediction produces an output that indicates the particular states as probabilities or statistical likelihoods.

“Additionally, as noted, the turn classifier 260 and the brake classifier 250 implement separate versions of the CNN and the LSTM-RNN. The differences relate to how each version accepts the signal images and how each is trained to recognize the respective signals. For example, the brake classifier 250 is trained on a large set of images of the rear sections of vehicles with the brake lights in different states. The images can also have different properties, such as different lighting conditions and color profiles. In general, the data set is labeled so that the brake classifier 250 can use backpropagation, or another training technique, to train the CNN and the LSTM-RNN of the brake classifier 250 to correctly identify braking states.

The turn classifier 260 is similarly trained on a large set of images of vehicles with the turn signals in different states (e.g., left turn, right turn, hazards), which are also labeled for backpropagation or another training method. However, the indicator module 230 further processes the signal images before they are fed into the turn classifier 260. For example, the indicator module 230 compensates for the possibility that the vehicle 100 and the nearby vehicle captured in the signal images are moving. That is, the indicator module 230 processes the signal images to align the nearby vehicle between successive signal images, compares the aligned images to identify differences, produces difference images, and extracts regions of interest from the difference images that correspond to the areas of the left and right turn signals. The indicator module 230 then feeds the regions of interest from the difference images to the turn classifier 260, as sketched below. This allows the turn classifier 260 to focus on the turn signals while avoiding reflections and other variations or aberrations in the signal images that could otherwise distract from identifying the turn state.
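
As an overview of the processing flow just described, the following is a minimal, hypothetical sketch of how the two classifiers might be wired together; the helper names (`align_frames`, `extract_turn_rois`, `brake_classifier`, `turn_classifier`) are placeholders standing in for components described in the text, not APIs from the patent.

```python
import numpy as np

def identify_rear_signals(frames, brake_classifier, turn_classifier,
                          align_frames, extract_turn_rois):
    """Run both classifiers over a short sequence of rear-view frames.

    frames: list of HxWx3 uint8 images of the nearby vehicle's rear section.
    The classifiers and helpers are assumed to be provided elsewhere.
    """
    # Brake state: the brake classifier consumes the raw image sequence directly.
    brake_prob = brake_classifier(np.stack(frames))            # P(brake lights on)

    # Turn state: compensate motion, difference successive frames, crop ROIs.
    aligned = align_frames(frames)                              # e.g., SIFT-flow-style warp
    diffs = [np.abs(aligned[i + 1].astype(np.int16) - aligned[i].astype(np.int16))
             for i in range(len(aligned) - 1)]
    left_rois, right_rois = extract_turn_rois(diffs)            # per-frame crops
    left_prob, right_prob = turn_classifier(left_rois, right_rois)

    return {"brake": brake_prob, "left": left_prob, "right": right_prob}
```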

“Additional aspects of the neural networks will be discussed in detail in relation to the subsequent figures. It should be noted that the indicator module 230 generally performs the identification of the brake state and the turn state in parallel; thus, while the brake state is discussed first, the order is not a dependency and is not of particular importance. In either case, the indicator module 230 controls the brake classifier 250 and the turn classifier 260 to output predictions of the brake state and the turn state, respectively. The indicator module 230 can further provide the states as soft states that indicate a probability of each signal being active, or can make a hard decision as to whether the signals are active, for example, according to the labels of FIG. 3.

In either case, the indicator module 230 provides an output that indicates the turn state and the brake state. For example, the indicator module 230 may provide the output to the autonomous driving module 160 and/or to one or more of the vehicle systems 140 so that the vehicle 100 can be controlled accordingly. In further aspects, the indicator module 230 can generate indications for a driver based on the turn state and/or brake state to notify the driver about actions of the nearby vehicle. For example, the indicator module 230 can render graphics on a heads-up display, an in-dash display, a rear-view mirror display, or another display within the vehicle 100. Alternatively, or additionally, the indicator module 230 can provide audible alerts about the turn state and/or brake state.

As previously mentioned, the indicator module 230 can provide the turn state and/or brake state to the autonomous driving module 160, which uses the states to inform autonomous operation and advanced driver-assistance components about objects/obstacles and their likely trajectories.

FIG. 4 illustrates a flowchart of a method 400 that is associated with identifying a turn state and a brake state of a nearby vehicle as a function of the rear indicator signals. The method 400 will be discussed from the perspective of the signal identification system 170 of FIGS. 1 and 2. While the method 400 is discussed in combination with the signal identification system 170, it should be understood that the method 400 is not limited to being implemented within that system.

“At 410, the monitoring module 220 monitors for vehicles near the vehicle 100. In one embodiment, the monitoring module 220 applies image recognition techniques to video images captured by the camera 126. In general, the monitoring module 220 determines whether a nearby vehicle is within a defined range of the vehicle 100 and whether the nearby vehicle has a suitable orientation (e.g., facing away from the vehicle 100) so that the data can be further analyzed according to the method 400. The monitoring module 220 can also monitor additional sensors, such as the radar sensor 123 and the LIDAR sensor 124, to detect nearby vehicles and/or to supplement detection by the camera 126. In either case, the monitoring module 220 monitors the environment surrounding the vehicle 100 to identify nearby vehicles so that the rear indicator signals can subsequently be identified. In a general configuration, the vehicle 100 includes the camera 126 with a field of view directed in front of the vehicle 100; however, the camera 126 can have a different field of view, or additional cameras can be included to provide 360-degree coverage around the vehicle 100.

“Moreover, although a single nearby vehicle is discussed, it should be understood that the signal identification system 170 can simultaneously monitor, identify, and determine rear indicator states for multiple nearby vehicles.

“At 420, the monitoring module 220 captures signal images of the nearby vehicle detected at 410. In one embodiment, the monitoring module 220 captures the signal images as a series of images taken over a defined time period. In this way, the monitoring module 220 acquires temporal information about the nearby vehicle that allows changes in the turn signals and brake signals to be detected. For example, the monitoring module 220 captures a sequence of images from the camera 126 so that a full blink cycle of the turn signals is recorded, and so that activation of the brake lights can be verified rather than observed only momentarily. In one embodiment, the monitoring module 220 captures the images over a time period that corresponds to a standard blinking pattern of vehicle turn signals (e.g., at least 0.5 seconds). The frame rate of the camera 126 influences the number of images in the sequence of signal images. For example, the monitoring module 220 can be configured to capture sixteen images when the frame rate is 30 frames per second and the time period is approximately 0.5 seconds; if the frame rate is lower or higher, the number of images can be decreased or increased to capture the complete cycle of the turn signal's dynamic flashing. In general, the signal images are captured at 30 frames per second over a time period corresponding to at least one cycle (e.g., on and off) of the turn signal.
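
As a simple illustration of the relationship between frame rate, blink period, and sequence length, the sketch below reproduces the arithmetic behind the example values in the text; the exact counting convention (bracketing the period with one extra frame) is an assumption for illustration.

```python
def frames_to_span(frame_rate_hz: float, period_s: float = 0.5) -> int:
    """Frames needed so the first and last frames bracket at least `period_s` seconds."""
    return int(frame_rate_hz * period_s) + 1

# 30 fps over a 0.5 s turn-signal blink cycle -> 16 frames, matching the example above.
print(frames_to_span(30.0, 0.5))
```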

“Continuing with the method 400, as noted previously, the brake classifier 250 and the turn classifier 260 share a similar structure but are tailored to, and trained for, the particular signal to be identified. In FIG. 4, block 430 corresponds to processing by the brake classifier 250, while block 470 corresponds to processing by the turn classifier 260. Blocks 440, 450, and 460 correspond to pre-processing of the signal images before they are provided as an electronic input to the turn classifier 260.

“At 430, the indicator module 230 analyzes the signal images using the brake classifier 250, which applies two distinct neural networks to process the signal images. First, the indicator module 230 extracts spatial features of the nearby vehicle from the signal images using a convolutional neural network that is trained to recognize brake lights and distinguish between different features. In one embodiment, the indicator module 230 convolves and pools the signal images into successive layers of spatial features over multiple iterations using the CNN of the brake classifier 250, thereby identifying the spatial features and generating a corresponding electronic output.

“The indicator module 230 then uses the spatial features as an input to a long short-term memory recurrent neural network (LSTM-RNN). In one embodiment, the spatial features are provided along with the signal images so that aspects of the signal images are effectively labeled; that is, the indicated spatial features identify the locations and types of features within the signal images. The indicator module 230 implements the LSTM-RNN of the brake classifier 250 to determine, or learn, temporal dependencies among the signal images that are indicative of the braking state of the nearby vehicle. In other words, the LSTM-RNN analyzes the signal images while accounting for temporal relationships among the spatial features as the sequence progresses. In this way, the indicator module 230 both determines the spatial features (e.g., brake lights) within the signal images and analyzes those spatial features across the signal images for changes or general characteristics indicating whether the brake lights are active.
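
For readers who want to see the CNN-plus-LSTM arrangement concretely, below is a minimal PyTorch-style sketch of a per-frame CNN feeding a sequence-level LSTM with a sigmoid brake output. The layer sizes, the stack of five convolution-and-pooling stages, and the single fully connected feature layer (akin to "fc6") are illustrative assumptions chosen to mirror the description, not the patented network.

```python
import torch
import torch.nn as nn

class BrakeClassifier(nn.Module):
    """Per-frame CNN features -> LSTM over the frame sequence -> P(brake lights on)."""

    def __init__(self, feat_dim: int = 256, hidden_dim: int = 128):
        super().__init__()
        blocks, in_ch = [], 3
        for out_ch in (16, 32, 64, 128, 128):      # five convolve + pool iterations
            blocks += [nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
                       nn.MaxPool2d(2)]
            in_ch = out_ch
        self.cnn = nn.Sequential(*blocks, nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(128, feat_dim))      # "fc6"-like feature layer
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)   # CNN runs on each frame
        seq, _ = self.lstm(feats)                                # temporal dependencies
        return torch.sigmoid(self.head(seq[:, -1]))              # prediction at final step

# Usage: probability for a batch containing one 16-frame, 224x224 sequence.
model = BrakeClassifier()
prob = model(torch.randn(1, 16, 3, 224, 224))
```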

“FIG. 5 is a diagram illustrating an example structure 500 of the brake classifier 250 and the turn classifier 260 as described herein. The example structure 500 of the classifiers 250 and 260 includes a convolutional neural network (CNN) 510 and a long short-term memory recurrent neural network (LSTM-RNN) 520, also referred to as the LSTM 520. The CNN 510 accepts a series of images as input, which is convolved and pooled over five iterations 530-560 of the CNN 510. It should be noted that an individual iteration (e.g., iteration 550) may include multiple convolutions rather than a single convolution.

In general, the CNN 510 convolves the image, or the results of a previous iteration, by passing a filter or set of filters across the input to produce a filtered result. The filters serve to identify different aspects of an image, such as color patterns and shapes. A pooling layer then performs a nonlinear down-sampling, such as max pooling, by dividing the input into non-overlapping regions and characterizing each region according to, for example, the predominant aspects of the filtering results of the convolving layer. In this way, the CNN 510 successively reduces the size of the input while further isolating the spatial features. As shown in FIG. 5, a fully connected layer (fc6) forms the output of the CNN 510 and the input to the LSTM 520. That is, the CNN 510 outputs a characterization of the spatial features of the signal images for the brake classifier 250, or of the regions of interest for the turn classifier 260. The CNN 510 also executes on each image separately.

“The LSTM 520 is comprised of an LSTM functional block 570, which includes several nonlinear activation gates and further functional elements. In general, the LSTM 520 is used to determine temporal relationships (i.e., temporal information) between images in the series of signal images. Using the LSTM 520 allows long-term dependencies across the series of signal images to be maintained during the analysis without losing information about, for example, the dynamic flashing state of the turn signals.

FIG. 6 illustrates one embodiment of the LSTM functional block 570. As shown in FIG. 6, the LSTM block 570 accepts inputs xt and ht-1, where xt is the spatial feature output by layer fc6 of the CNN 510 at time t, and ht-1 is the hidden unit from the previous time step. The LSTM block 570 estimates the hidden unit ht at each successive time step (i.e., each iteration of the LSTM block 570), which is passed to the next iteration as the input ht-1. The LSTM block 570 also receives stored data from a memory cell Ct-1 that contains information from the previous iteration. At each iteration, the memory cell is updated with newly computed information and passed on to the next iteration. As further shown in FIG. 6, the LSTM block 570 includes different gates. In one embodiment, the LSTM block 570 includes a forget gate ft that determines what to discard from xt and ht-1. The forget gate is, for example, a sigmoid (σ) function that outputs values between 0 and 1 and is applied as an elementwise product with the previous memory cell state Ct-1 to determine which information to discard or keep.

“Additionally, an input gate it and a hyperbolic tangent (tanh) layer gt control what information to remember from xt and ht-1, and the resulting values are added to produce the memory cell Ct for the next iteration. In this way, the LSTM block 570 updates the memory cell Ct and determines the hidden state ht at every iteration; for example, an output gate weights the memory state Ct passed through tanh to compute ht. The output is then provided to a fully connected layer fc8, as illustrated in FIG. 5, so that the output from each iteration can be used for computing a class probability 580, as also shown in FIG. 5. Because sufficient temporal information is stored within the LSTM block 570, the final output at block 580 accounts for the temporal dependencies in the input sequence. This allows the example classifier structure 500 to account for both spatial and temporal dependencies when determining the rear signal indicator states. Accordingly, the brake classifier 250 analyzes the signal images in this manner, using a braking CNN and a braking LSTM-RNN to identify the brake light state.
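
For reference, the standard LSTM update that the above description paraphrases can be written as follows, with W and b denoting learned weights and biases; this is the conventional formulation of the gates, and the patent's figure may differ in minor details.

```latex
\begin{aligned}
f_t &= \sigma\!\left(W_f [h_{t-1}, x_t] + b_f\right) && \text{forget gate} \\
i_t &= \sigma\!\left(W_i [h_{t-1}, x_t] + b_i\right) && \text{input gate} \\
g_t &= \tanh\!\left(W_g [h_{t-1}, x_t] + b_g\right) && \text{candidate memory} \\
C_t &= f_t \odot C_{t-1} + i_t \odot g_t && \text{memory cell update} \\
o_t &= \sigma\!\left(W_o [h_{t-1}, x_t] + b_o\right) && \text{output gate} \\
h_t &= o_t \odot \tanh(C_t) && \text{hidden state}
\end{aligned}
```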

“With continued reference to FIG. 4, at 440 the indicator module 230 compensates for motion between the signal images. Because the signal images may be captured while the nearby vehicle and/or the vehicle 100 is moving, the position of the nearby vehicle within each signal image can differ, and the difference images generated at block 450 could be distorted by misalignment of the nearby vehicle between images in different orientations and positions. The indicator module 230 therefore adjusts for the movement by processing the signal images to produce flow images, which are transformed to align the nearby vehicle between successive signal images. In one embodiment, the indicator module 230 applies a scale-invariant feature transform (SIFT) flow algorithm to transform the signal images into the flow images at 440.

“At 450, the indicator module 230 generates difference images from the flow images. In one embodiment, the indicator module 230 compares successive flow images and generates the difference images as a subtraction of the successive flow images. Thus, the difference images indicate areas of changed pixels between successive flow images. The difference images are generated to highlight the areas of change so that the turn classifier 260 can focus on those areas.
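
A minimal sketch of this compensate-then-difference step is shown below. It uses OpenCV's ECC alignment as a simple stand-in for the SIFT-flow warping described in the text (SIFT flow itself is not a standard OpenCV routine), so the alignment method, parameter choices, and function name are assumptions for illustration only.

```python
import cv2
import numpy as np

def difference_image(prev_bgr: np.ndarray, curr_bgr: np.ndarray) -> np.ndarray:
    """Warp curr onto prev (motion compensation), then take an absolute difference."""
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)

    # Estimate a small affine motion between frames (stand-in for SIFT flow).
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-5)
    _, warp = cv2.findTransformECC(prev, curr, warp, cv2.MOTION_AFFINE, criteria, None, 5)

    # "Flow image": the current frame aligned to the previous frame's pose.
    aligned = cv2.warpAffine(curr, warp, (prev.shape[1], prev.shape[0]),
                             flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

    # Difference image: changed-pixel regions (e.g., a blinking turn signal) stand out.
    return cv2.absdiff(prev, aligned)
```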

“Briefly, FIG. 7 illustrates how the indicator module 230 generates the difference images. FIG. 7 shows an example 700 of compensating for motion, as described at 440, and then generating the difference images from the resulting information, as described at 450. Image (a) is a previous image from the signal images, while image (b) is the current image for which the indicator module 230 generates the difference image. The indicator module 230 first processes image (b) to align the nearby vehicle with the pose and position of the vehicle in image (a). Image (c) depicts how image (b) is shifted to compensate, and image (d) shows the resulting flow image, also referred to as the warped image. Image (e) is the difference image produced by the indicator module 230 from an absolute comparison between image (d) and image (a). The difference image (e) illustrates how a right turn signal, which flashes to an on state between images (a) and (b), is highlighted in the difference image.

“At 460, the indicator module 230 extracts regions of interest from the difference images. In one embodiment, the indicator module 230 extracts regions from the difference images that correspond to the left and right turn signals. Referring to FIG. 8, a difference image 800 corresponding to image (e) of FIG. 7 is further illustrated. As shown in FIG. 8, the indicator module 230 overlays a grid on the difference image 800 and uses the grid pattern to identify regions (i.e., localized sub-portions of the image 800) that correspond to the left and right turn signals; in FIG. 8 the identified regions are regions 810 and 820. The indicator module 230 repeats this process for the difference images so as to produce separate sets of regions of interest corresponding to the left and right turn signals. The regions of interest then form the electronic input to the turn classifier 260, in place of the full signal images used by the brake classifier 250.
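
The following is one hypothetical way to implement the grid-based selection on a single difference image: split it into cells and pick the most active cell on each half as the left/right turn-signal region. The grid size and the sum-of-intensity criterion are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def extract_turn_regions(diff: np.ndarray, rows: int = 4, cols: int = 8):
    """Return (left_roi, right_roi) crops of the most active grid cell on each half."""
    h, w = diff.shape[:2]
    ch, cw = h // rows, w // cols

    def best_cell(col_range):
        best, best_sum = None, -1.0
        for r in range(rows):
            for c in col_range:
                cell = diff[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
                s = float(cell.sum())          # activity = summed changed-pixel intensity
                if s > best_sum:
                    best, best_sum = cell, s
        return best

    left_roi = best_cell(range(cols // 2))      # left half of the rear section
    right_roi = best_cell(range(cols // 2, cols))
    return left_roi, right_roi
```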

“At 470, the indicator module 230 analyzes the regions of interest derived from the signal images using the turn classifier 260. In one embodiment, the indicator module 230 analyzes the regions of interest (ROIs) using the turn classifier 260 by applying two distinct neural networks, as discussed in relation to FIG. 5. That is, the indicator module 230 first uses the turn CNN of the turn classifier 260 to generate spatial features of the ROIs as an electronic output.

The indicator module 230 then uses the ROIs and the spatial features as an input to a turn long short-term memory recurrent neural network (LSTM-RNN). In one embodiment, the spatial features are provided along with the ROIs so that aspects of the ROIs are effectively labeled; that is, the indicated spatial features identify the locations and types of features within the ROIs. The indicator module 230 implements the turn LSTM-RNN of the turn classifier 260 to determine, or learn, temporal dependencies among the ROIs that are indicative of the turn state of the nearby vehicle. In this way, the indicator module 230 both determines the spatial features (e.g., turn lights) within the ROIs and analyzes those spatial features across the ROIs for changes or general characteristics indicating which turn lights are active.

“At 480, the indicator module 230 provides electronic outputs identifying the turn state and the brake state. In one embodiment, the indicator module 230 electronically communicates the turn state and the brake state to one or more of the vehicle systems 140 and/or modules (e.g., the autonomous driving module 160). The indicator module 230 can also display the state information to the driver to provide notice about the nearby vehicle.

“For example, the indicator module 230 uses the turn state and brake state to determine how to adjust operating parameters of one or more of the vehicle systems 140. If the indicator module 230 determines that the brake lights are active and the nearby vehicle is within a certain range of the vehicle 100, an automatic collision-avoidance system can be activated. If the indicator module 230 determines that the left turn signal is active, the autonomous driving module 160 can plan a path around the nearby vehicle. In general, the output of the indicator module 230 indicating the state of the rear indicator signals can be used to inform many different systems of the vehicle 100.

“FIG. 1 will now be described in detail as an example environment within which the systems and methods disclosed herein may operate. In some instances, the vehicle 100 is configured to switch selectively between an autonomous mode, one or more semi-autonomous operational modes, and/or a manual mode. Such switching can be implemented in any suitable manner, now known or later developed. 'Manual mode' means that all of, or a majority of, the navigation and/or maneuvering of the vehicle is performed according to inputs received from a user (e.g., a human driver). In one or more arrangements, the vehicle 100 can be a conventional vehicle that is configured to operate in only a manual mode.

“In one or more embodiments, the vehicle 100 is an autonomous vehicle. As used herein, 'autonomous vehicle' refers to a vehicle that operates in an autonomous mode. 'Autonomous mode' refers to navigating and/or maneuvering the vehicle 100 along a travel route using one or more computing systems to control the vehicle 100 with minimal or no input from a human driver. In one or more embodiments, the vehicle 100 is highly automated or completely automated. In one embodiment, the vehicle 100 is configured with one or more semi-autonomous operational modes in which one or more computing systems perform a portion of the navigation and/or maneuvering of the vehicle along a travel route, and a vehicle operator (i.e., driver) provides inputs to the vehicle to perform a portion of the navigation and/or maneuvering of the vehicle 100 along the travel route.

“The vehicle 100 can include one or more processors 110. In one or more arrangements, the processor(s) 110 can be a main processor of the vehicle 100; for instance, the processor(s) 110 can be an electronic control unit (ECU). The vehicle 100 can include one or more data stores 115 for storing one or more types of data. The data store 115 can include volatile and/or non-volatile memory. Examples of suitable data stores 115 include RAM (Random Access Memory), flash memory, ROM (Read-Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, magnetic disks, optical disks, hard drives, any other suitable storage medium, or any combination thereof. The data store 115 can be a component of the processor(s) 110, or the data store 115 can be operatively connected to the processor(s) 110 for use thereby. The term 'operatively connected,' as used throughout this description, can include direct or indirect connections, including connections without direct physical contact.

“In one or more arrangements, the one or more data stores 115 can include map data 116. The map data 116 can include maps of one or more geographic areas. In some instances, the map data 116 can include information or data on roads, traffic control devices, road markings, and the like in the one or more geographic areas. The map data 116 can be in any suitable form. In some instances, the map data 116 can include aerial views of an area. In some instances, the map data 116 can include ground views of an area, including 360-degree ground views. The map data 116 can include measurements, dimensions, distances, and/or information for one or more items included in the map data 116 and/or relative to other items included in the map data 116. The map data 116 can include a digital map with information about road geometry. The map data 116 can be high quality and/or highly detailed.

“In one or more arrangements, the map data 116 can include one or more terrain maps 117. The terrain map(s) 117 can include information about the ground, terrain, roads, surfaces, and/or other features of one or more geographic areas. The terrain map(s) 117 can also include elevation data in the one or more geographic areas. The map data 116 can be high quality and/or highly detailed. The terrain map(s) 117 can define one or more ground surfaces, which can include paved roads, unpaved roads, and other surfaces.

“In one or more arrangements, the map data 116 can include one or more static obstacle maps 118. The static obstacle map(s) 118 can include information about one or more static obstacles located within one or more geographic areas. A 'static obstacle' is a physical object whose position and size do not change, or do not substantially change, over a period of time. Examples of static obstacles include trees, buildings, and medians. The static obstacles can be objects that extend above ground level. The one or more static obstacles included in the static obstacle map(s) 118 can have location data, size data, dimension data, material data, and/or other data associated with them. The static obstacle map(s) 118 can include measurements, dimensions, distances, and/or information for one or more static obstacles. The static obstacle map(s) 118 can be high quality and/or highly detailed, and can be updated to reflect changes within a mapped area.

“The one or more data stores 115 can include sensor data 119. In this context, 'sensor data' means any information about the sensors that the vehicle 100 is equipped with, including the capabilities of and other information about such sensors. As will be explained below, the vehicle 100 can include the sensor system 120. The sensor data 119 can relate to one or more sensors of the sensor system 120. As an example, in one or more arrangements, the sensor data 119 can include information about one or more LIDAR sensors 124 of the sensor system 120.

“In some instances, at least a portion of the map data 116 and/or the sensor data 119 can be located in one or more data stores 115 located onboard the vehicle 100. Alternatively, or in addition, at least a portion of the map data 116 and/or the sensor data 119 can be located in one or more data stores 115 that are located remotely from the vehicle 100.

“As noted above, the vehicle 100 can include the sensor system 120. The sensor system 120 can include one or more sensors. 'Sensor' means any device, component, and/or system that can detect and/or sense something. The one or more sensors can be configured to detect and/or sense in real-time. As used herein, the term 'real-time' means a level of processing responsiveness that a user or system senses as sufficiently immediate for a particular process or determination to be made, or that enables the processor to keep pace with some external process.

“In arrangements in which the sensor system 120 includes a plurality of sensors, the sensors can work independently from each other. Alternatively, two or more of the sensors can work in combination with each other; in such a case, the two or more sensors can form a sensor network. The sensor system 120 and/or the one or more sensors can be operatively connected to the processor(s) 110, the data store(s) 115, and/or another element of the vehicle 100 (including any of the elements shown in FIG. 1).

“The sensor system 120 can include any suitable type of sensor. Various examples of different types of sensors are described herein; however, it will be understood that the embodiments are not limited to the particular sensors described. The sensor system 120 can include one or more vehicle sensors 121. The vehicle sensor(s) 121 can detect, determine, and/or sense information about the vehicle 100 itself. In one or more arrangements, the vehicle sensor(s) 121 can be configured to detect and/or sense position and orientation changes of the vehicle 100, such as, for example, based on inertial acceleration. In one or more arrangements, the vehicle sensor(s) 121 can include one or more accelerometers, one or more gyroscopes, and/or an inertial measurement unit (IMU). The vehicle sensor(s) 121 can be configured to detect and/or sense one or more characteristics of the vehicle 100; for example, the vehicle sensor(s) 121 can include a speedometer to determine a current speed of the vehicle 100.

“Alternatively, or in addition, the sensor system 120 can include one or more environment sensors 122 configured to acquire and/or sense driving environment data. 'Driving environment data' includes data or information about the external environment in which an autonomous vehicle is located, or one or more portions thereof. For example, the one or more environment sensors 122 can be configured to detect, quantify, and/or sense obstacles in at least a portion of the external environment of the vehicle 100 and/or information/data about such obstacles. Such obstacles may be stationary objects and/or dynamic objects. The one or more environment sensors 122 can also be configured to detect, measure, quantify, and/or sense other things in the external environment of the vehicle 100, such as, for example, traffic signs, traffic signals, lane markers, crosswalks, curbs proximate to the vehicle 100, and off-road objects.

Various examples of sensors of the sensor system 120 are described herein. The example sensors may be part of the one or more environment sensors 122 and/or the one or more vehicle sensors 121. However, it will be understood that the embodiments are not limited to the particular sensors described.

“As an example, the sensor system 120 can include one or more radar sensors 123, one or more LIDAR sensors 124, and/or one or more cameras 126. In one or more arrangements, the one or more cameras 126 can be high dynamic range (HDR) cameras or infrared (IR) cameras.

“The vehicle 100 can include an input system 130. An 'input system' includes any device, component, system, element, or arrangement, or groups thereof, that enables information/data to be entered into a machine. The input system 130 can receive an input from a vehicle passenger (e.g., a driver or a passenger). The vehicle 100 can also include an output system 135. An 'output system' includes any device, component, or arrangement, or groups thereof, that enables information/data to be presented to a vehicle passenger (e.g., a driver or a passenger).

“The vehicle 100 can include one or more vehicle systems 140. Various examples of the one or more vehicle systems 140 are shown in FIG. 1; however, the vehicle 100 can include more, fewer, or different vehicle systems. It should be appreciated that although particular vehicle systems are separately identified, any or all of the systems, or portions thereof, may be combined or segregated via hardware and/or software within the vehicle 100. The vehicle 100 can include a propulsion system 141, a braking system 142, a steering system 143, a throttle system 144, a transmission system 145, a signaling system 146, and/or a navigation system 147. Each of these systems can include one or more devices, components, and/or combinations thereof, now known or later developed.

“The navigation system 147 can include one or more devices, applications, and/or combinations thereof, now known or later developed, configured to determine the geographic location of the vehicle 100 and/or to determine a travel route for the vehicle 100. The navigation system 147 can include one or more mapping applications to determine the travel route for the vehicle 100. The navigation system 147 can include a global positioning system, a local positioning system, or a geolocation system.

“The processor(s) 110, the signal identification system 170, and/or the autonomous driving module(s) 160 can be operatively connected to communicate with the various vehicle systems 140 and/or individual components thereof. For example, returning to FIG. 1, the processor(s) 110 and/or the autonomous driving module(s) 160 can be in communication to send and/or receive information from the various vehicle systems 140 in order to control the movement, speed, maneuvering, heading, direction, and so on of the vehicle 100. The processor(s) 110, the signal identification system 170, and/or the autonomous driving module(s) 160 may control some or all of these vehicle systems 140 and, thus, may be partially or fully autonomous.

“The processor(s) 110, the signal identification system 170, and/or the autonomous driving module(s) 160 may be operable to control the navigation and/or maneuvering of the vehicle 100 by controlling one or more of the vehicle systems 140 and/or components thereof. For instance, when operating in an autonomous mode, the processor(s) 110, the signal identification system 170, and/or the autonomous driving module(s) 160 can control the direction and/or speed of the vehicle 100. The processor(s) 110, the signal identification system 170, and/or the autonomous driving module(s) 160 can cause the vehicle 100 to accelerate (e.g., by increasing the supply of fuel provided to the engine), decelerate (e.g., by applying brakes and/or by decreasing the supply of fuel to the engine), and/or change direction (e.g., by turning the front wheels). In one embodiment, the signal identification system 170 collects data from the processor 110 and/or the autonomous driving module 160 about control signals that cause the vehicle to accelerate, decelerate, and perform other maneuvers, and/or about why the autonomous driving module 160 induced the maneuvers. As used herein, 'cause' or 'causing' means to make, force, compel, direct, command, instruct, and/or enable an event or action to occur, or at least be in a state where such event or action may occur, either in a direct or indirect manner.

The vehicle 100 can include one or more actuators 150. The actuators 150 can be any element or combination of elements operable to modify, adjust, and/or alter one or more of the vehicle systems 140, or components thereof, in response to receiving signals or other inputs from the processor(s) 110 and/or the autonomous driving module(s) 160. Any suitable actuator can be used; for instance, the one or more actuators 150 can include motors, pneumatic actuators, hydraulic pistons, relays, solenoids, and/or piezoelectric actuators, just to name a few possibilities.

“The vehicle 100 can include one or more modules, at least some of which are described herein. The modules can be implemented as computer-readable program code that, when executed by the processor 110, implements one or more of the various processes described herein. One or more of the modules can be a component of the processor(s) 110, or one or more of the modules can be executed on and/or distributed among other processing systems to which the processor(s) 110 is operatively connected. The modules can include instructions (e.g., program logic) executable by one or more of the processor(s) 110. Alternatively, or in addition, one or more of the data stores 115 may contain such instructions.

“In one or more arrangements, one or more of the modules described herein can include artificial or computational intelligence elements, such as fuzzy logic, neural networks, or other machine learning algorithms. Further, in one or more arrangements, one or more of the modules can be distributed among a plurality of the modules described herein. In one or more arrangements, two or more of the modules described herein can be combined into a single module.

“The vehicle 100 can include one or more autonomous driving modules 160. The autonomous driving module(s) 160 can be configured to receive data from the sensor system 120 and/or any other type of system capable of capturing information relating to the vehicle 100 and/or the external environment of the vehicle 100. In one or more arrangements, the autonomous driving module(s) 160 can use such data to generate one or more driving scene models. The autonomous driving module(s) 160 can determine the position and velocity of the vehicle 100. The autonomous driving module(s) 160 can determine the location of obstacles or other environmental features, including traffic signs, trees, shrubs, neighboring vehicles, pedestrians, etc.

“The autonomous driving module(s) 160 can be configured to receive and/or determine location information for obstacles within the external environment of the vehicle 100 for use by the processor(s) 110 and/or one or more of the modules described herein to estimate the position and orientation of the vehicle 100, to estimate the vehicle position in global coordinates based on signals from a plurality of satellites, or to use any other data and/or signals that could be employed to determine the current state of the vehicle 100 or to determine the position of the vehicle 100 with respect to its environment, whether for use in creating a map or in determining the position of the vehicle 100 relative to map data.

“The autonomous driving module(s) 160, either independently or in combination with the signal identification system 170, can be configured to determine travel path(s), current autonomous driving maneuvers for the vehicle 100, future autonomous driving maneuvers, and/or modifications to current autonomous driving maneuvers based on data acquired by the sensor system 120, driving scene models, and/or data from any other suitable source. “Driving maneuver” means one or more actions that affect the movement of a vehicle. Examples of driving maneuvers include accelerating, decelerating, braking, turning, moving in a lateral direction of the vehicle 100, and changing travel lanes, just to name a few possibilities. The autonomous driving module(s) 160 can be configured to implement determined driving maneuvers and can cause, directly or indirectly, such autonomous driving maneuvers to be implemented. As used herein, “cause” or “causing” means to make, command, instruct, and/or enable an event or action to occur, or at least be in a state where such event or action may occur, either directly or indirectly. The autonomous driving module(s) 160 can also be configured to execute various vehicle functions and/or to transmit data to, receive data from, interact with, and/or control the vehicle 100 or one or more systems thereof (e.g., one or more of the vehicle systems 140).

“Detailed embodiments are disclosed herein. However, it is to be understood that the disclosed embodiments are intended only as examples. Specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the aspects herein in virtually any appropriately detailed structure. Further, the terms and phrases used herein are not intended to be limiting but rather to provide an understandable description of possible implementations. Various embodiments are shown in FIGS. 1-8, but the embodiments are not limited to the illustrated structure or application.

“The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code that comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order shown in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

“The systems, components, and/or processes described above can be realized in hardware or in a combination of hardware and program code. Any kind of processing system or other apparatus adapted for carrying out the methods described herein is suitable. A typical combination of hardware and program code is a processing system with program code that, when loaded and executed, controls the processing system such that it carries out the methods described herein. The systems, components, and/or processes also can be embedded in a computer-readable storage device, such as a computer program product or other data program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform the methods and processes described herein. These elements also can be embedded in an application product that comprises all the features enabling the implementation of the methods described herein and that, when loaded in a processing system, is able to carry out these methods.

“Furthermore, arrangements described herein may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code stored thereon. Any combination of one or more computer-readable media may be utilized. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. The phrase “computer-readable storage medium” means a non-transitory storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the computer-readable storage medium include a portable computer diskette, a hard disk drive (HDD), a solid-state drive (SSD), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. A computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.

“Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, radio frequency (RF), or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present arrangements may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).

“The terms “a” and “an,” as used herein, are defined as one or more than one. The term “plurality,” as used herein, is defined as two or more than two. The term “another,” as used herein, is defined as at least a second or more. The terms “including” and/or “having,” as used herein, are defined as comprising (i.e., open language). The phrase “at least one of . . . and . . . ,” as used herein, refers to and encompasses any and all possible combinations of one or more of the associated listed items. As an example, the phrase “at least one of A, B, and C” includes A only, B only, C only, or any combination thereof (e.g., AB, AC, BC, or ABC).

“Aspects herein can be embodied in other forms without departing from the spirit or essential attributes thereof. Accordingly, reference should be made to the following claims, rather than to the foregoing specification, as indicating the scope of the invention.

Summary for “Systems and methods to identify rear signals using machine learning”



“Examples of systems and methods for identifying rear signal indicators from a nearby vehicle are described in this document. A signal identification system, for example, monitors nearby vehicles and uses a series camera images to identify rear indicators signals. To monitor nearby vehicles, the signal identification system can be embedded in a vehicle. The camera(s), upon detection of a nearby vehicle, captures a series (16 images) of the rear section of that vehicle. This can be used to identify the current state of turn signals or brake signals. A brake classifier, in one embodiment, accepts raw images as electronic input and then analyzes them using a combination deep learning routines. The brake classifier uses a convolutional neural net to first identify spatial features in the images. It then processes the images and outputs the spatial features. The spatial features are then fed into a long-term memory-recurrent neural network, LSTM-RNN. This iteratively processes images in order to determine whether the brake lights have been activated.

“In addition, the signal identification system uses a turn classifier to determine the turn state, and the turn classifier functions in a similar manner to the brake classifier. However, the system transforms the images before feeding them into the turn classifier in order to highlight areas of interest. That is, in one embodiment, the images are processed to emphasize particular regions (i.e., the turn signals) and thereby improve identification. For example, a motion compensation algorithm can be applied to the images to produce flow images that account for relative motion between the vehicles. The flow images are then compared to identify differences (e.g., areas with changing pixel intensities) between successive images. Regions of interest within the resulting difference images are then provided to the turn classifier.

“The turn classifier processes the regions of interest from the images using a convolutional neural network to further identify spatial features within them. The regions of interest are then processed by a separate long short-term memory recurrent neural network (LSTM-RNN), which iteratively processes them to identify temporal information corresponding to the dynamic flashing state of the turn signals. In this way, the signal identification system uses the noted structure to identify the rear indicator signals while overcoming the previously mentioned difficulties and improving identification by accounting for temporal variations and for changes caused by motion and variable luminance.

“In one embodiment, a signal identification system for identifying rear indicators of a nearby vehicle is disclosed. The signal identification system includes one or more processors and a memory communicably coupled to the one or more processors. The memory stores a monitoring module including instructions that, when executed by the one or more processors, cause the one or more processors to, in response to detecting the nearby vehicle, capture signal images of at least a portion of the nearby vehicle. The signal identification system also includes an indicator module including instructions that, when executed by the one or more processors, cause the one or more processors to (i) compute a braking state for brake lights of the nearby vehicle that indicates whether the brake lights are currently active by analyzing the signal images according to a brake classifier, and (ii) compute a turn state for rear turn signals of the nearby vehicle that indicates which of the turn signals are currently active by analyzing regions of interest from the signal images according to a turn classifier. The brake classifier is comprised of a convolutional neural network and a long short-term memory recurrent neural network (LSTM-RNN). The indicator module also includes instructions to provide electronic outputs identifying the braking state and the turn state.

“In one embodiment, a non-transitory computer-readable medium for identifying rear indicators of a nearby vehicle is disclosed. The non-transitory computer-readable medium stores instructions that, when executed by one or more processors, cause the one or more processors to perform the disclosed functions. The instructions include instructions to compute a braking state for brake lights of the nearby vehicle that indicates whether the brake lights are currently active by analyzing signal images according to a brake classifier. The signal images are captured of a rear section of the nearby vehicle. The instructions include instructions to compute a turn state for rear turn signals of the nearby vehicle that indicates which of the turn signals are currently active by analyzing regions of interest from the signal images according to a turn classifier. The brake classifier is comprised of a convolutional neural network and a long short-term memory recurrent neural network (LSTM-RNN). The instructions further include instructions to provide electronic outputs identifying the braking state and the turn state.

“In one embodiment, a method of identifying rear indicators of a nearby vehicle is disclosed. The method includes, in response to detecting the nearby vehicle, capturing signal images of at least a portion of the nearby vehicle. The method includes computing a braking state for brake lights of the nearby vehicle that indicates whether the brake lights are currently active by analyzing the signal images according to a brake classifier. The method also includes computing a turn state for rear turn signals of the nearby vehicle that indicates which of the turn signals are currently active by analyzing regions of interest from the signal images according to a turn classifier. The brake classifier is comprised of a convolutional neural network and a long short-term memory recurrent neural network (LSTM-RNN). The method further includes providing electronic outputs identifying the braking state and the turn state.

“Systems, methods, and other embodiments for identifying rear signal indicators of a vehicle are disclosed herein. As mentioned previously, identifying turn signals and brake signals of vehicles can be a complex task. That is, variations in lighting conditions, relative motion, color thresholds, differences between vehicle configurations, and other factors can complicate identifying the signals. In some cases, manually defined feature sets are inaccurate and can be misinterpreted when, for example, the noted variations in luminance or other circumstances are encountered. Consequently, accurately identifying the rear indicator signals presents difficulties.

Therefore, in one embodiment, a signal identification system monitors nearby vehicles and uses a series of camera images to identify the rear indicator signals. For example, the signal identification system is embedded within a vehicle and monitors for nearby vehicles. In one aspect, the signal identification system monitors for nearby vehicles using cameras integrated with the host vehicle. Upon detecting a nearby vehicle, the cameras capture a sequence of images (e.g., sixteen at a time) of a rear section of the nearby vehicle, which can be used to determine a current state of the turn signals and/or the brake signals. The sequence of images is used so that the analysis of the current indicator state captures temporal information about the indicators. That is, because the turn signals and, in some cases, the brake lights flash, the images are captured over a period of time to embody a dynamic flashing state of the turn signals.

“In either case, the images are separately analyzed to determine the braking state and the turn state. In one embodiment, a brake classifier accepts the raw images as an electronic input and then analyzes the images using a combination of deep learning routines. The brake classifier first determines spatial features of the images using a convolutional neural network that processes the images and outputs the spatial features. The spatial features are then fed into a long short-term memory recurrent neural network (LSTM-RNN) that iteratively processes the images with the defined spatial features to determine whether the brake lights are active.

Additionally, a turn classifier, which functions in a similar manner to the brake classifier, determines the turn state. However, the images are first transformed to highlight areas of interest before being fed into the turn classifier. That is, in one embodiment, the images are transformed to emphasize particular regions (e.g., the turn signals) and thereby improve identification. For example, a motion compensation algorithm can be applied to the images to produce flow images that account for relative motion between the vehicles. The flow images are then compared to identify differences (e.g., areas with changing pixel intensities) between successive images. Regions of interest within the resulting difference images are then provided to the turn classifier.

“The turn classifier processes the regions of interest from the images using a convolutional neural network to further identify spatial features within them. The regions of interest are then processed by a separate long short-term memory recurrent neural network (LSTM-RNN), which iteratively processes them to identify temporal information corresponding to the dynamic flashing state of the turn signals. In this way, the signal identification system uses the noted structure to identify the rear indicator signals while overcoming the previously mentioned difficulties and improving identification by accounting for temporal variations and for changes caused by motion and variable luminance.

Referring to FIG. 1, an example of a vehicle 100 is illustrated. As used herein, a “vehicle” is any form of motorized transport. In one or more implementations, the vehicle 100 is an automobile. While arrangements will be described herein with respect to automobiles, it will be understood that the embodiments are not limited to automobiles. In some implementations, the vehicle 100 may be any other form of motorized transport that, for example, can benefit from identification of the turn state and the brake indicators of nearby vehicles as discussed herein.

“The vehicle 100 also includes various elements. It will be understood that, in various embodiments, it may not be necessary for the vehicle 100 to have all of the elements shown in FIG. 1. The vehicle 100 can have any combination of the various elements shown in FIG. 1. Further, the vehicle 100 can have additional elements beyond those shown in FIG. 1. In some arrangements, the vehicle 100 may be implemented without one or more of the elements shown in FIG. 1. While the various elements are shown as being located within the vehicle 100 in FIG. 1, it will be understood that one or more of these elements can be located external to the vehicle 100. Further, the elements shown may be physically separated by large distances.

Some of the possible elements of the vehicle 100 are shown in FIG. 1 and will be described along with subsequent figures. However, a description of many of the elements in FIG. 1 will be provided after the discussion of FIGS. 2-8 for purposes of brevity of this description. Additionally, it will be appreciated that, for simplicity and clarity of illustration, reference numerals have been repeated among the different figures where appropriate to indicate corresponding or analogous elements. In addition, the discussion outlines numerous specific details to provide a thorough understanding of the embodiments described herein. Those of skill in the art, however, will understand that the embodiments described herein may be practiced using various combinations of these elements.

“In either case, the vehicle 100 includes a signal identification system 170 that is implemented to perform methods and other functions as disclosed herein relating to detecting nearby vehicles and identifying rear indicators of those nearby vehicles. The noted functions and methods will become more apparent with a further discussion of the figures.

“With reference to FIG. 2, one embodiment of the signal identification system 170 of FIG. 1 is further illustrated. The signal identification system 170 is shown as including a processor 110 from the vehicle 100 of FIG. 1. Accordingly, the processor 110 may be a part of the signal identification system 170, the signal identification system 170 may include a separate processor from the processor 110 of the vehicle 100, or the signal identification system 170 may access the processor 110 through a data bus or another communication path. In one embodiment, the signal identification system 170 includes a memory 210 that stores a monitoring module 220 and an indicator module 230. The memory 210 is a random-access memory (RAM), read-only memory (ROM), a hard disk drive, a flash memory, or another suitable memory for storing the modules 220 and 230. The modules 220 and 230 are, for example, computer-readable instructions that, when executed by the processor 110, cause the processor 110 to perform the various functions disclosed herein.

Accordingly, the monitoring module 220 generally includes instructions that function to control the processor 110 to acquire sensor data from, for instance, one or more sensors of the sensor system 120. In one embodiment, the sensor data includes images from a camera 126 that show an area in front of the vehicle 100 in which a rear portion of a nearby vehicle is likely to be detected. In further aspects, the monitoring module 220 controls multiple cameras 126 of the sensor system 120 located at different positions on the vehicle 100 in order to provide additional views of the surrounding environment. Additionally, the monitoring module 220 can control a LIDAR sensor 124 and a radar sensor 123 in addition to the camera 126 in order to determine whether a nearby vehicle is present.

“In either case, the monitoring module 220 monitors an electronic stream of sensor data from the camera 126 and/or other sensors for the presence of the nearby vehicle. That is, the monitoring module 220 acquires the sensor data and analyzes the sensor data using vehicle recognition techniques to determine whether the nearby vehicle is present. In various embodiments, the monitoring module 220 can use image recognition, LIDAR object detection, radar object recognition, or a combination of these techniques to identify the presence of the nearby vehicle. It should be understood that the different forms of vehicle/object recognition have different properties and can be employed in different circumstances according to aspects of a particular implementation. For example, different distance thresholds may govern when objects can be identified using the different recognition techniques. In one embodiment, image recognition is used to identify vehicles within a defined distance (e.g., 100 feet) at which determining the presence of the nearby vehicle also permits additional feature identification, such as identifying the turn and brake lights.

“Once the nearby vehicle is detected, or as part of detecting the nearby vehicle, the monitoring module 220 captures signal images of the nearby vehicle. In general, the monitoring module 220 captures the signal images over a defined period of time. In one embodiment, the defined period of time is 0.5 seconds; however, the period may be shorter or longer depending on particular aspects of the implementation. For example, a lower bound for the defined period of time may be selected so as to capture at least one cycle of a turn signal, while an upper bound may be set according to the amount of available memory. Moreover, a frame rate of the camera 126 and/or other factors may also influence how many images are captured over the defined period of time. In general, the monitoring module 220 acquires sixteen images in the series of signal images over a period of 0.5 seconds.

“Additionally, the signal images show a rear section (i.e., an aft portion) of the nearby vehicle such that the left and right turn signals as well as the brake lights are visible within the signal images. While the primary implementation discussed herein is directed to the rear section of the nearby vehicle and the rear indicator signals, in further aspects the disclosed systems and methods can be applied to identifying other indicators, such as front turn signals, side turn signals (e.g., side-view mirror turn signals), and so on. Moreover, in the event that the monitoring module 220 captures signal images with a view of the nearby vehicle that, for example, only partially covers the rear section or occludes some of the rear indicator signals, the monitoring module 220 either controls the camera 126 to capture replacement images or proceeds with processing the occluded images, in which case the indicator module 230 can provide at least a partial identification.

“In general, the rear indicator signals of the nearby vehicle include brake lights that indicate when the nearby vehicle is braking and left/right turn signals that indicate when an operator of the nearby vehicle has activated a left or right turn indicator or the hazard lights. Referring to FIG. 3, examples of different braking and turn states are illustrated for an example vehicle. In particular, the signal identification system 170 can identify eight possible states that are combinations of the braking and turn states, as shown in examples (a)-(h) of FIG. 3. The different states are labeled using three distinct letters. A first letter indicates whether the brake lights are currently active, with “O” indicating off and “B” indicating that the brake lights are on. A second letter corresponds to the left turn signal, with “L” indicating that the left turn signal is active and “O” indicating that the left turn signal is inactive. A third letter corresponds to the right turn signal, with “R” indicating that the right turn signal is active and “O” indicating that the right turn signal is inactive.

“Accordingly, the example (a), labeled “OOO,” indicates that all of the indicator signals are inactive. Example (b), labeled “BOO,” indicates that the brake lights are active while both turn signals are inactive. Example (c), labeled “OLO,” indicates that only the left turn signal is currently active. Example (d), labeled “BLO,” indicates that both the brake lights and the left turn signal are active. Example (e), labeled “OOR,” indicates that only the right turn signal is active. Example (f), labeled “BOR,” indicates that both the brake lights and the right turn signal are active. Example (g), labeled “OLR,” indicates that both turn signals are active and, thus, that the hazard lights are on. Example (h), labeled “BLR,” indicates that the brake lights and both turn signals (i.e., the hazard lights) are active. In various aspects, the signal identification system 170 uses the labels of FIG. 3 to encode the output that indicates which of the rear indicator signals are active.
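
For illustration only (this sketch and its helper name are not part of the patent disclosure), the three-letter encoding of FIG. 3 can be expressed as a simple mapping from the three per-signal states to a label:

```python
# Hypothetical helper illustrating the three-letter state labels of FIG. 3.
# First letter: brake lights (B = on, O = off); second letter: left turn
# signal (L = on, O = off); third letter: right turn signal (R = on, O = off).

def encode_state(brake_on: bool, left_on: bool, right_on: bool) -> str:
    return ("B" if brake_on else "O") + \
           ("L" if left_on else "O") + \
           ("R" if right_on else "O")

# The eight possible combinations correspond to examples (a)-(h) of FIG. 3.
assert encode_state(False, False, False) == "OOO"   # (a) all indicators off
assert encode_state(True,  False, False) == "BOO"   # (b) brake lights only
assert encode_state(False, True,  True)  == "OLR"   # (g) hazards, no braking
assert encode_state(True,  True,  True)  == "BLR"   # (h) braking with hazards
```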

“With further reference to FIG. 2, the indicator module 230, in one embodiment, includes instructions that function to control the processor 110 to compute the braking state and the turn state of the nearby vehicle in response to the monitoring module 220 detecting the nearby vehicle and capturing the signal images. That is, the indicator module 230 accepts the signal images from the monitoring module 220 as an electronic input and provides an electronic output that identifies the turn state and the braking state.

“As a further matter, before discussing additional aspects of the indicator module 230, in one embodiment the signal identification system 170 includes a database 240. The database 240 is, for example, an electronic data structure stored in the memory 210 or another electronic data store, and is configured with routines that can be executed by the processor 110 for analyzing stored data, providing stored data, organizing stored data, and so on. Thus, in one embodiment, the database 240 stores data used by the modules 220 and 230 in executing various functions. In one embodiment, the database 240 includes a brake classifier 250 and a turn classifier 260. The database 240 may also include the signal images and/or other information used by the modules 220 and 230.

“The classifiers 250 and 260 are computational models that characterize aspects relating to images of vehicles and rear signal indicators. Accordingly, the indicator module 230 uses the brake classifier 250 to analyze the signal images and determine the braking state, and uses the turn classifier 260 to determine the turn state. Moreover, while the classifiers 250 and 260 are illustrated as being stored within the database 240, it should be understood that in various embodiments aspects of the classifiers 250 and 260 can be integrated with the indicator module 230. In either case, the classifiers 250 and 260 are generally complex functional components comprised of functional blocks and learned data that operate together to indicate probabilities of the different braking and turn states.

“Moreover, the brake classifier 250 and the turn classifier 260 each use a similar combination of machine learning components to process the signal images. That is, the brake classifier 250 and the turn classifier 260 each include a convolutional neural network (CNN) and a long short-term memory recurrent neural network (LSTM-RNN). The CNN generally functions to identify and extract spatial features of the signal images through an iterative process of convolving the signal images, pooling results of the convolving, and then repeating the process on the pooled data from the previous iteration. The indicator module 230 implements the CNN in this manner until, after several iterations (e.g., five iterations), a final fully connected layer outputs a feature map or other characterization of the spatial features of the images. The indicator module 230 then uses the spatial information from the CNN as an electronic input to the LSTM-RNN. The LSTM-RNN is a type of recurrent neural network that determines temporal relationships and other temporal information about the spatial features identified by the CNN. The LSTM-RNN also includes aspects that account for changes between the images in order to identify the dynamic flashing state of the turn signals and to account for variations in luminance. Thus, the indicator module 230 uses the LSTM-RNN to predict the braking state and the turn state, and from this prediction produces an output indicating the particular states as probabilities or statistical likelihoods.
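
The per-frame CNN followed by an LSTM over the image sequence can be sketched roughly as follows. This is a minimal, hypothetical PyTorch illustration only; the class name, layer sizes, five convolve-and-pool stages, and 16-frame, 224x224 input are assumptions drawn from the general description rather than the patent's actual network:

```python
import torch
import torch.nn as nn

class SignalClassifier(nn.Module):
    """Per-frame CNN features followed by an LSTM over the image sequence,
    ending in class scores for the signal state (all sizes illustrative)."""

    def __init__(self, num_states: int = 2, feat_dim: int = 256):
        super().__init__()
        # Five convolve-and-pool stages, mirroring the iterative
        # convolution/pooling described for the CNN.
        channels = [3, 16, 32, 64, 128, 128]
        stages = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            stages += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                       nn.ReLU(),
                       nn.MaxPool2d(2)]
        self.cnn = nn.Sequential(*stages)
        self.fc6 = nn.Linear(128 * 7 * 7, feat_dim)  # fully connected feature layer
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.fc8 = nn.Linear(feat_dim, num_states)   # class scores per sequence

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, 16, 3, 224, 224) -- one sequence of signal images
        b, t, c, h, w = images.shape
        feats = self.cnn(images.reshape(b * t, c, h, w))      # each frame separately
        feats = self.fc6(feats.flatten(1)).reshape(b, t, -1)  # spatial features per frame
        hidden, _ = self.lstm(feats)                          # temporal dependencies
        return self.fc8(hidden[:, -1])                        # scores from the final step

# Example: probability that the brake lights are off/on for one 16-frame clip.
clip = torch.rand(1, 16, 3, 224, 224)
probs = SignalClassifier(num_states=2)(clip).softmax(dim=-1)
```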

“Additionally, as noted, the turn classifier 260 and the brake classifier 250 separately implement versions of the CNN and the LSTM-RNN. The differences generally relate to particular aspects of how each version accepts the signal images and how each is trained to recognize the respective signals. For example, the brake classifier 250 is trained using a large set of training images of rear sections of vehicles with brake lights in different states. The training images can also have different characteristics, such as different lighting conditions and color profiles. In general, the data set is labeled so that the brake classifier 250 can use backpropagation or another training technique to train the CNN and the LSTM-RNN of the brake classifier 250 to correctly identify braking states.
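
As a rough illustration of supervised training on such a labeled set, a conventional loop might look like the following. The optimizer, learning rate, and label convention are placeholders (the patent does not specify them), and the sketch reuses the hypothetical SignalClassifier from above:

```python
import torch
import torch.nn as nn

# Hypothetical training step for the brake classifier: 16-frame sequences
# labeled 0 (brake lights off) or 1 (brake lights on).
model = SignalClassifier(num_states=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(image_sequences: torch.Tensor, labels: torch.Tensor) -> float:
    # image_sequences: (batch, 16, 3, 224, 224); labels: (batch,) of class ids
    optimizer.zero_grad()
    scores = model(image_sequences)   # class scores for each labeled sequence
    loss = loss_fn(scores, labels)    # compare against the ground-truth state
    loss.backward()                   # backpropagate through both LSTM and CNN
    optimizer.step()
    return loss.item()
```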

Similarly, the turn classifier 260 is trained using a large set of training images that includes vehicles with turn signals in different states (e.g., left turn, right turn, hazards), which are likewise labeled to facilitate backpropagation or another training method. However, the indicator module 230 also pre-processes the signal images before they are fed into the turn classifier 260. For example, the indicator module 230 compensates for the fact that the vehicle 100 and the nearby vehicle captured in the signal images may be moving. Thus, the indicator module 230 can process the signal images to align the nearby vehicle between successive signal images, compare the aligned images to identify differences, produce difference images, and extract regions of interest from the difference images that correspond to the areas of the left and right turn signals. The indicator module 230 can then feed the regions of interest from the difference images to the turn classifier 260. This allows the turn classifier 260 to focus on the turn signals and avoid variations (e.g., reflections) in the signal images or other aberrations that could otherwise distract from identifying the turn state.

“Additional aspects of the neural networks will be discussed in further detail in relation to subsequent figures. However, it should be noted that, in one embodiment, the indicator module 230 performs the identification of the braking state and the turn state in parallel. Thus, while the braking state is discussed first, the ordering does not imply any dependency or particular significance. In either case, the indicator module 230 executes both the brake classifier 250 and the turn classifier 260, which output predictions of the braking state and the turn state, respectively. In one embodiment, the indicator module 230 can further process the provided states to determine soft states; that is, the indicator module 230 can provide information indicating a probability that each of the signals is active. Alternatively, the indicator module 230 can provide a hard decision as to whether each of the signals is active, for example according to the labels discussed in relation to FIG. 3.
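
Purely as an illustration of the distinction between soft and hard outputs (the function name and the 0.5 threshold are hypothetical choices, not taken from the patent), the two classifier outputs might be combined as follows:

```python
# Hypothetical post-processing of the classifier outputs into either "soft"
# per-signal probabilities or a hard three-letter label per FIG. 3.
def fuse_states(p_brake_on: float, p_left_on: float, p_right_on: float,
                threshold: float = 0.5):
    soft_state = {"brake": p_brake_on, "left": p_left_on, "right": p_right_on}
    hard_state = ("B" if p_brake_on >= threshold else "O") + \
                 ("L" if p_left_on  >= threshold else "O") + \
                 ("R" if p_right_on >= threshold else "O")
    return soft_state, hard_state

soft, hard = fuse_states(0.93, 0.07, 0.81)  # e.g., braking with a right turn signal
# soft -> per-signal probabilities; hard -> "BOR"
```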

In either case, the indicator module 230 provides an output identifying the turn state and the braking state. For example, the indicator module 230 may provide the output to the autonomous driving module 160 and/or to one or more of the vehicle systems 140 so that the vehicle 100 can be controlled in various ways according to the identified states. In further embodiments, the indicator module 230 generates indicators for a driver as a function of the turn state and/or the braking state in order to notify the driver about actions of the nearby vehicle. For example, the indicator module 230 can render graphics on a heads-up display, an in-dash display, a rear-view mirror display, or another display within the vehicle 100. Alternatively, or additionally, the indicator module 230 can provide audible alerts about the turn state and/or the braking state.

As previously mentioned, the indicator module 230 can also provide the turn state and/or the braking state to the autonomous driving module 160. In one embodiment, the autonomous driving module 160 uses the turn state and/or the braking state of the nearby vehicle to inform autonomous operation and advanced driver-assistance components about objects/obstacles and likely trajectories within the surrounding environment.

Additional aspects of identifying rear indicator signals will be discussed in relation to FIG. 4. FIG. 4 illustrates a flowchart of a method 400 associated with identifying a turn state and a braking state of a nearby vehicle as a function of the rear indicator signals. Method 400 will be discussed from the perspective of the signal identification system 170 of FIGS. 1 and 2. While method 400 is discussed in combination with the signal identification system 170, it should be understood that the method 400 is not limited to being implemented within the signal identification system 170, which is merely one example of a system that may implement the method 400.

“At 410, the monitoring module 220 monitors for vehicles near the vehicle 100. In one embodiment, the monitoring module 220 uses image recognition techniques to monitor video images captured by the camera 126. In general, the monitoring module 220 determines whether a nearby vehicle is present within a defined range of the vehicle 100. The monitoring module 220 can also determine whether the nearby vehicle has a suitable orientation (e.g., facing away from the vehicle 100) so that the images can be further analyzed according to the remainder of the method 400. Additionally, the monitoring module 220 can monitor additional sensors, such as the radar 123 and the LIDAR 124, to detect nearby vehicles and/or to supplement detection using the camera 126. In general, the monitoring module 220 monitors the environment around the vehicle 100 to identify when a nearby vehicle is present so that the rear indicator signals of the nearby vehicle can subsequently be identified. Moreover, a general configuration of the vehicle 100 includes the camera 126 with a field of view directed in front of the vehicle 100; however, in further aspects, the camera 126 can have a different field of view, or additional cameras can be included to provide a 360-degree view around the vehicle 100.

“Moreover, while detection of a single nearby vehicle is discussed, it should be appreciated that the signal identification system 170 can monitor for, identify, and determine the rear indicator states of multiple nearby vehicles simultaneously.

“At 420, the monitoring module 220 captures signal images of the nearby vehicle detected at 410. In one embodiment, the monitoring module 220 captures the signal images as a series of images acquired over a period of time. In this way, the monitoring module 220 acquires temporal information about the nearby vehicle that permits detecting changes in the turn signals and/or the brake signals. That is, the monitoring module 220 can capture a sequence of images from the camera 126 in order to record a flashing cycle of the turn signals. Similarly, the monitoring module 220 can acquire the sequence of images to verify that the brake lights are actually activated and not, for example, merely briefly illuminated. Accordingly, in one embodiment, the monitoring module 220 captures the images over a period of time that corresponds with a standard blinking pattern of vehicle turn signals (e.g., >=0.5 seconds). Additionally, the frame rate of the camera 126 can influence how many images are included in the sequence of signal images. In one embodiment, the monitoring module 220 is configured to capture sixteen images when the frame rate is 30 frames per second and the period of time is 0.5 seconds. If the frame rate is lower or higher, the number of images can be decreased or increased so that the complete cycle of the dynamic flashing of the turn signal is still captured. In general, the signal images are captured at 30 frames per second over a period of time that corresponds to at least one full cycle (e.g., on and off) of the turn signal.
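
The relationship between frame rate, capture window, and image count described above amounts to simple arithmetic; a hypothetical helper (the function name and the inclusive-endpoint convention are illustrative assumptions) is:

```python
import math

# Hypothetical helper: number of frames to buffer so the capture window spans
# at least one full turn-signal cycle (the example above uses 0.5 seconds).
def frames_for_window(frame_rate_hz: float, window_s: float = 0.5) -> int:
    # Count both endpoints of the window, e.g. 15 frame intervals -> 16 frames.
    return math.ceil(frame_rate_hz * window_s) + 1

frames_for_window(30.0)   # -> 16 frames at 30 frames per second
frames_for_window(15.0)   # -> 9 frames if the camera runs at half the rate
```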

“Continuing with method 400, as noted previously, the brake classifier 250 and the turn classifier 260 share a similar structure but are each tailored to, and trained for, the particular signal to be identified. Thus, within FIG. 4, block 430 corresponds to processing by the brake classifier 250, while block 470 corresponds to processing by the turn classifier 260. Blocks 440, 450, and 460 correspond to pre-processing of the signal images before the signal images are electronically processed according to the turn classifier 260.

“At 430, the indicator module 230 analyzes the signal images using the brake classifier 250. In one embodiment, the indicator module 230 analyzes the signal images according to the brake classifier 250 by applying two separate neural networks to process the signal images. For example, the indicator module 230 first extracts spatial features of the nearby vehicle from the signal images using a convolutional neural network that is trained to recognize brake lights and to distinguish between various features of the images. In one embodiment, the indicator module 230 convolves the signal images into layers of spatial features and pools the results over multiple iterations. In this way, the CNN of the brake classifier 250 is used to identify the spatial features and to generate a corresponding electronic output.

“The indicator module 230 then uses the spatial features as an input to a long short-term memory recurrent neural network (LSTM-RNN). In one embodiment, the spatial features are provided along with the signal images so that aspects of the signal images can be labeled; that is, the indicated spatial features identify the locations and types of features within the signal images. The indicator module 230 implements the LSTM-RNN of the brake classifier 250 to determine, or learn, temporal dependencies among the signal images that are indicative of the braking state of the nearby vehicle. In other words, the LSTM-RNN aspect provides for analyzing the signal images while accounting for temporal relationships among the spatial features as they progress through the series of images. Thus, the indicator module 230 can determine the spatial features (e.g., brake lights) within the signal images and also analyze the spatial features across the signal images for changes or general characteristics that indicate whether the brake lights are active.

“Briefly turning to FIG. 5, a diagram of an example structure 500 of the brake classifier 250 and the turn classifier 260 as discussed herein is illustrated. The example structure 500 of the classifiers 250 and 260 includes a convolutional neural network (CNN) 510 and a long short-term memory (LSTM) recurrent neural network (RNN) 520, also referred to as the LSTM 520. The CNN 510 accepts an input in the form of a series of images. The input is convolved and pooled over separate iterations 530, 540, 550, and 560 of the CNN 510. It should be noted that iteration 550 includes multiple convolutions rather than a single convolution.

In general, the CNN 510 convolves the input image, or the pooled results of a previous iteration, by applying a filter or set of filters across the input to produce a filtered result. The filters are used to identify different aspects of the image, such as color patterns, shapes, and so on. A pooling layer then performs a nonlinear down-sampling, such as max pooling. That is, the pooling layer divides the input into non-overlapping regions and characterizes each region according to, for example, a predominant aspect of the filtered result from the convolving layer. In this way, the CNN 510 reduces the size of the input while further resolving the spatial features. As shown in FIG. 5, a fully connected layer (fc6) forms the output of the CNN 510 and the input to the LSTM 520. Thus, the CNN 510 outputs a characterization of the spatial features of the signal images in the case of the brake classifier 250, and a characterization of the regions of interest in the case of the turn classifier 260. Additionally, the CNN 510 processes each image of the series separately.
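
To make the pooling step concrete, the following small sketch (names and sizes chosen for illustration only) shows 2x2 max pooling: each non-overlapping 2x2 region of a feature map is summarized by its largest response, halving the spatial resolution:

```python
import numpy as np

# Illustrative 2x2 max pooling over non-overlapping regions of a feature map.
def max_pool_2x2(feature_map: np.ndarray) -> np.ndarray:
    h, w = feature_map.shape
    blocks = feature_map[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))   # keep the strongest response per region

fm = np.array([[1, 3, 2, 0],
               [4, 2, 1, 5],
               [0, 1, 9, 2],
               [3, 2, 4, 8]], dtype=float)
max_pool_2x2(fm)   # -> [[4., 5.], [3., 9.]]
```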

“The LSTM 520 is comprised of an LSTM functional block 570 that includes several nonlinear activation gates and further functional elements. In general, the LSTM 520 functions to determine temporal relationships (i.e., temporal information) between images in the series of signal images. Using the LSTM 520 allows long-term dependencies across the signal images to be maintained during the analysis without losing information relevant to, for example, the dynamic flashing state of the turn signals.

As shown in FIG. 6, the LSTM functional block 570 accepts inputs xt and ht-1. The input xt is the spatial feature output by fc6 of the CNN 510 at time t, while the input ht-1 is a hidden unit from a previous time step. Additionally, the LSTM block 570 estimates the hidden unit ht at each successive time step (e.g., each iteration of the LSTM block 570), which is passed to the next iteration as the input ht-1. The LSTM block 570 also accepts stored data from a memory cell Ct-1 that embodies information from a previous iteration. At each iteration, the memory cell is updated with newly computed information and passed on to the next iteration. As further shown in FIG. 6, the LSTM block 570 includes different gates. In one embodiment, the LSTM block 570 includes a forget gate ft that determines what information to discard from xt and ht-1. The forget gate is implemented as a sigmoid (σ) function that, for example, outputs values between 0 and 1 and is applied as an element-wise product with the previous memory cell state Ct-1 to determine which information to discard or retain.

“Additionally, an input gate it and a hyperbolic tangent (tan h) layer gt control what information from xt and ht-1 to remember, and those values are added to produce the memory cell state Ct for the next iteration. Thus, at each iteration, the LSTM block 570 updates the memory cell Ct and determines the hidden state ht. An output gate computes the hidden state ht by weighting the memory state Ct passed through tan h. The output is then provided to a fully connected layer fc8, as illustrated in FIG. 5. In this way, the output of each iteration is used to compute a class probability at 580, as shown in FIG. 5. Because sufficient temporal information is stored within the LSTM block 570, a final output at block 580 accounts for the temporal dependencies of the input sequence. This permits the example classifier structure 500 to account for both spatial and temporal dependencies when determining the rear signal indicator states. Accordingly, the brake classifier 250 analyzes the signal images in this manner, using a braking CNN and a braking LSTM-RNN to identify the state of the brake lights.
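
Written out explicitly, the gating just described matches the standard LSTM update (a conventional textbook formulation, not equations reproduced from the patent), with σ the logistic sigmoid, ⊙ the element-wise product, x_t the fc6 feature at time step t, and h_{t-1}, C_{t-1} the previous hidden state and memory cell:

```latex
\begin{aligned}
f_t &= \sigma\!\left(W_f x_t + U_f h_{t-1} + b_f\right) &&\text{(forget gate)}\\
i_t &= \sigma\!\left(W_i x_t + U_i h_{t-1} + b_i\right) &&\text{(input gate)}\\
g_t &= \tanh\!\left(W_g x_t + U_g h_{t-1} + b_g\right) &&\text{(candidate update)}\\
C_t &= f_t \odot C_{t-1} + i_t \odot g_t &&\text{(memory cell update)}\\
o_t &= \sigma\!\left(W_o x_t + U_o h_{t-1} + b_o\right) &&\text{(output gate)}\\
h_t &= o_t \odot \tanh\!\left(C_t\right) &&\text{(hidden state)}
\end{aligned}
```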

“Returning to FIG. 4, at 440 the indicator module 230 compensates for motion between the signal images. That is, because the signal images may be captured while the nearby vehicle and/or the vehicle 100 are moving, the position of the nearby vehicle within each of the signal images can differ. Consequently, the difference images generated at block 450 could be distorted by misalignment of the nearby vehicle between images in different orientations and positions. Thus, the indicator module 230 accounts for the movement by processing the signal images to produce flow images, which are transformed versions of the signal images in which the nearby vehicle is aligned between successive images. In one embodiment, the indicator module 230 uses a scale-invariant feature transform (SIFT) flow algorithm at 440 to transform the signal images into the flow images.

“At 450, the indicator module 230 generates difference images from the flow images. In one embodiment, the indicator module 230 compares successive flow images and generates the difference images as a subtraction of the successive flow images. Thus, the difference images indicate areas of changed pixels between successive flow images. In general, the difference images are produced to highlight areas of change between successive images so that the turn classifier 260 can focus on those areas.

“As a further explanation of how the indicator module 230 generates the difference images, FIG. 7 illustrates an example 700 of compensating for motion as discussed at 440 and then generating a difference image from that information as discussed at 450. Image (a) is a previous image from the signal images, while image (b) is the current image for which the indicator module 230 is generating the difference image. The indicator module 230 first processes image (b) to align the nearby vehicle with the position and pose of the nearby vehicle in image (a). Image (d) illustrates the resulting flow image, also referred to as the warped image, while image (c) illustrates how image (b) was shifted to provide the compensation. Image (e) is the difference image produced when the indicator module 230 performs an absolute comparison between image (d) and image (a). As shown, the difference image (e) highlights the right turn signal, because the right turn signal flashes from an off state in image (a) to an on state in image (b).
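
A rough sketch of the align-then-difference idea follows. Note that this is an illustration only: the patent describes dense SIFT-flow warping, whereas this sketch substitutes a sparse-feature homography (ORB matches plus RANSAC) as a simplified stand-in, and the function name is hypothetical:

```python
import cv2
import numpy as np

# Simplified stand-in for motion compensation and differencing: align the
# current frame to the previous frame, then take their absolute difference so
# that regions whose intensity changed (e.g., a turn signal switching on)
# stand out, as in images (a)-(e) of FIG. 7.
def difference_image(prev_bgr: np.ndarray, curr_bgr: np.ndarray) -> np.ndarray:
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)

    # Match keypoints between the two frames and estimate an aligning warp.
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(prev, None)
    kp2, des2 = orb.detectAndCompute(curr, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des2, des1)
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)

    # Warp the current frame onto the previous one, then difference them.
    warped = cv2.warpPerspective(curr, H, (prev.shape[1], prev.shape[0]))
    return cv2.absdiff(prev, warped)
```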

“At 460, the indicator module 230 extracts regions of interest from the difference images. In one embodiment, the indicator module 230 extracts regions from the difference images that correspond to the left and right turn signals. Briefly referring to FIG. 8, a difference image 800 that corresponds to the difference image (e) of FIG. 7 is further illustrated. FIG. 8 illustrates how the indicator module 230 overlays a grid pattern onto the difference image 800. The indicator module 230 uses the grid pattern to identify regions (i.e., localized sub-portions of the difference image 800) that correspond to the left and right turn signals. As shown in FIG. 8, the indicator module 230 has identified regions 810 and 820. The indicator module 230 repeats this process for each of the difference images, thereby producing separate sets of regions of interest that correspond to the left and right turn signals. The regions of interest then form the electronic inputs to the turn classifier 260, in place of the full signal images used by the brake classifier 250.
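
One plausible way to realize the grid-based selection is sketched below. The grid size and the "most active cell per half" heuristic are illustrative assumptions (the patent does not specify how the cells are chosen), and the function name is hypothetical:

```python
import numpy as np

# Hypothetical ROI selection: overlay a coarse grid on a difference image and
# keep the most active cell in the left half and in the right half, roughly
# where the left and right turn signals appear on the rear section
# (cf. regions 810 and 820 of FIG. 8).
def extract_turn_rois(diff: np.ndarray, rows: int = 4, cols: int = 4):
    h, w = diff.shape
    ch, cw = h // rows, w // cols
    best = {"left": (0.0, None), "right": (0.0, None)}
    for r in range(rows):
        for c in range(cols):
            cell = diff[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            side = "left" if c < cols // 2 else "right"
            activity = float(cell.sum())          # total changed-pixel energy
            if activity > best[side][0]:
                best[side] = (activity, cell)
    return best["left"][1], best["right"][1]      # left and right regions of interest
```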

“At 470, the indicator module 230 analyzes the regions of interest derived from the signal images using the turn classifier 260. In one embodiment, the indicator module 230 analyzes the regions of interest (ROIs) according to the turn classifier 260 by applying two separate neural networks, as discussed in relation to FIG. 5. For example, the indicator module 230 uses a turn CNN of the turn classifier 260 to extract spatial features of the ROIs and to generate the spatial features as an electronic output.

The indicator module 230 then uses the ROIs and the spatial features as an input to a turn long short-term memory recurrent neural network (LSTM-RNN). In one embodiment, the spatial features are provided along with the ROIs so that aspects of the ROIs can be labeled; that is, the indicated spatial features identify the locations and types of features within the ROIs. The indicator module 230 implements the turn LSTM-RNN of the turn classifier 260 to determine, or learn, temporal dependencies among the ROIs that are indicative of the turn state of the nearby vehicle. Thus, the indicator module 230 can determine the spatial features (e.g., turn lights) within the ROIs and also analyze the spatial features across the ROIs for changes or general characteristics that indicate whether the different turn lights are active.

“At 480, the indicator module 230 provides electronic outputs identifying the turn state and the braking state. In one embodiment, the indicator module 230 electronically communicates the turn state and the braking state to one or more of the vehicle systems 140 and/or to the autonomous driving module 160. In further embodiments, the indicator module 230 also displays the state information to the driver in order to provide a notification about the nearby vehicle.

“For example, the indicator module 230 uses the turn state and the braking state to determine how to adjust operating parameters of one or more of the vehicle systems 140. Thus, if the indicator module 230 determines that the brake lights of the nearby vehicle are active, an automatic collision avoidance system can be engaged when the nearby vehicle is within a certain range of the vehicle 100. In another example, if the indicator module 230 detects that the left turn signal of the nearby vehicle is active, the autonomous driving module 160 can plan a route around the nearby vehicle. In general, the output of the indicator module 230 indicating the state of the rear indicator signals can be used to inform many different systems of the vehicle 100 about how to operate.

“FIG. 1 will now be discussed in full detail as an example environment within which the systems and methods disclosed herein may operate. In some instances, the vehicle 100 is configured to switch selectively between an autonomous mode, one or more semi-autonomous operational modes, and/or a manual mode. Such switching can be implemented in a suitable manner, now known or later developed. “Manual mode” means that all of, or a majority of, the navigation and/or maneuvering of the vehicle is performed according to inputs received from a user (e.g., a human driver). In one or more arrangements, the vehicle 100 can be a conventional vehicle that is configured to operate in only a manual mode.

“In one or more embodiments, the vehicle 100 is an autonomous vehicle. As used herein, “autonomous vehicle” refers to a vehicle that operates in an autonomous mode. “Autonomous mode” refers to navigating and/or maneuvering the vehicle 100 along a travel route using one or more computing systems to control the vehicle 100 with minimal or no input from a human driver. In one or more embodiments, the vehicle 100 is highly automated or completely automated. In one embodiment, the vehicle 100 is configured with one or more semi-autonomous operational modes in which one or more computing systems perform a portion of the navigation and/or maneuvering of the vehicle along a travel route, and a vehicle operator (i.e., driver) provides inputs to the vehicle to perform a portion of the navigation and/or maneuvering of the vehicle 100 along the travel route.

“The vehicle 100 can include one or more processors 110. In one or more arrangements, the processor(s) 110 can be a main processor of the vehicle 100. For instance, the processor(s) 110 can be an electronic control unit (ECU). The vehicle 100 can include one or more data stores 115 for storing one or more types of data. The data store 115 can include volatile and/or non-volatile memory. Examples of suitable data stores 115 include RAM (Random Access Memory), flash memory, ROM (Read-Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The data store 115 can be a component of the processor(s) 110, or the data store 115 can be operatively connected to the processor(s) 110 for use thereby. The term “operatively connected,” as used throughout this description, can include direct or indirect connections, including connections without direct physical contact.

“In one or more arrangements, the one or more data stores 115 can include map data 116. The map data 116 can include maps of one or more geographic areas. In some instances, the map data 116 can include information or data on roads, traffic control devices, road markings, structures, features, and/or landmarks in the one or more geographic areas. The map data 116 can be in any suitable form. In some instances, the map data 116 can include aerial views of an area. In some instances, the map data 116 can include ground views of an area, including 360-degree ground views. The map data 116 can include measurements, dimensions, distances, and/or information for one or more items included in the map data 116 and/or relative to other items included in the map data 116. The map data 116 can include a digital map with information about road geometry. The map data 116 can be high quality and/or highly detailed.

“In one or more arrangements, the map data 116 can include one or more terrain maps 117. The terrain map(s) 117 can include information about the ground, terrain, roads, surfaces, and/or other features of one or more geographic areas. The terrain map(s) 117 can also include elevation data in the one or more geographic areas. The terrain map(s) 117 can be high quality and/or highly detailed. The terrain map(s) 117 can define one or more ground surfaces, which can include paved roads and unpaved roads.

“In one or more arrangements, the map data 116 can include one or more static obstacle maps 118. The static obstacle map(s) 118 can include information about one or more static obstacles located within one or more geographic areas. A “static obstacle” is a physical object whose position does not change, or does not substantially change, over a period of time and/or whose size does not change, or does not substantially change, over a period of time. Examples of static obstacles include trees, buildings, and medians. The static obstacles can be objects that extend above ground level. The one or more static obstacles included in the static obstacle map(s) 118 can have location data, size data, dimension data, material data, and/or other data associated with them. The static obstacle map(s) 118 can include measurements, dimensions, distances, and/or information for one or more static obstacles. The static obstacle map(s) 118 can be high quality and/or highly detailed. The static obstacle map(s) 118 can be updated to reflect changes within a mapped area.

“The one or more data stores 115 can include sensor data 119. In this context, “sensor data” means any information about the sensors that the vehicle 100 is equipped with, including the capabilities of and other information about such sensors. As will be explained below, the vehicle 100 can include the sensor system 120. The sensor data 119 can relate to one or more sensors of the sensor system 120. As an example, in one or more arrangements, the sensor data 119 can include information on one or more LIDAR sensors 124 of the sensor system 120.

“In some instances, at least a portion of the map data 116 and/or the sensor data 119 can be located in one or more data stores 115 located onboard the vehicle 100. Alternatively, or in addition, at least a portion of the map data 116 and/or the sensor data 119 can be located in one or more data stores 115 that are located remotely from the vehicle 100.

“As noted above, the vehicle 100 can include the sensor system 120. The sensor system 120 can include one or more sensors. “Sensor” means any device, component, and/or system that can detect and/or sense something. The one or more sensors can be configured to detect and/or sense in real-time. As used herein, the term “real-time” means a level of processing responsiveness that a user or system senses as sufficiently immediate for a particular process or determination to be made, or that enables the processor to keep up with some external process.

“In arrangements in which the sensor system 120 includes a plurality of sensors, the sensors can work independently from each other. Alternatively, two or more of the sensors can work in combination with each other. In such a case, the two or more sensors can form a sensor network. The sensor system 120 and/or the one or more sensors can be operatively connected to the processor(s) 110, the data store(s) 115, and/or another element of the vehicle 100 (including any of the elements shown in FIG. 1).

“The sensor system 120 can include any suitable type of sensor. Various examples of different types of sensors are described herein; however, it will be understood that the embodiments are not limited to the particular sensors described. The sensor system 120 can include one or more vehicle sensors 121. The vehicle sensor(s) 121 can detect, determine, and/or sense information about the vehicle 100 itself. In one or more arrangements, the vehicle sensor(s) 121 can be configured to detect and/or sense position and orientation changes of the vehicle 100, such as, for example, based on inertial acceleration. In one or more arrangements, the vehicle sensor(s) 121 can include one or more accelerometers, one or more gyroscopes, and/or an inertial measurement unit (IMU). The vehicle sensor(s) 121 can be configured to detect and/or sense one or more characteristics of the vehicle 100. In one or more arrangements, the vehicle sensor(s) 121 can include a speedometer to determine a current speed of the vehicle 100.

“Alternatively, or in addition, the sensor system 120 can include one or more environment sensors 122 configured to acquire and/or sense driving environment data. “Driving environment data” includes data or information about the external environment in which an autonomous vehicle is located, or one or more portions thereof. For example, the one or more environment sensors 122 can be configured to detect, quantify, and/or sense obstacles in at least a portion of the external environment of the vehicle 100 and/or information/data about such obstacles. Such obstacles may be stationary objects and/or dynamic objects. The one or more environment sensors 122 can be configured to detect, measure, quantify, and/or sense other things in the external environment of the vehicle 100, such as, for example, lane markers, signs, traffic lights, traffic signs, lane lines, crosswalks, curbs proximate to the vehicle 100, off-road objects, etc.

Various examples of sensors of the sensor system 120 are described herein. The example sensors may be part of the one or more environment sensors 122 and/or the one or more vehicle sensors 121. However, it will be understood that the embodiments are not limited to the particular sensors described.

“As an example, in one or more arrangements, the sensor system 120 can include one or more radar sensors 123, one or more LIDAR sensors 124, and/or one or more cameras 126. In one or more arrangements, the one or more cameras 126 can be high dynamic range (HDR) cameras or infrared (IR) cameras.

“The vehicle 100 can include an input system 130. An “input system” includes any device, component, system, element, or arrangement, or groups thereof, that enable information/data to be entered into a machine. The input system 130 can receive an input from a vehicle passenger (e.g., a driver or a passenger). The vehicle 100 can also include an output system 135. An “output system” includes any device, component, or arrangement, or groups thereof, that enable information/data to be presented to a vehicle passenger (e.g., a person, a vehicle passenger, etc.).”

“The vehicle 100 can include one or more vehicle systems 140. Various examples of the one or more vehicle systems 140 are shown in FIG. 1. However, the vehicle 100 can include more, fewer, or different vehicle systems. Although particular vehicle systems are defined separately, each or any of the systems, or portions thereof, may be otherwise combined or segregated via hardware and/or software within the vehicle 100. The vehicle 100 can include a propulsion system 141, a braking system 142, a steering system 143, a throttle system 144, a transmission system 145, a signaling system 146, and/or a navigation system 147. Each of these systems can include one or more devices, components, and/or combinations thereof, now known or later developed.

“The navigation system 147 can include one or more devices, applications, and/or combinations thereof, now known or later developed, configured to determine the geographic location of the vehicle 100 and/or to determine a travel route for the vehicle 100. The navigation system 147 can include one or more mapping applications to determine a travel route for the vehicle 100. The navigation system 147 can include a global positioning system, a local positioning system, or a geolocation system.

“The processor(s) 110, the signal identification system 170, and/or the autonomous driving module(s) 160 can be operatively connected to communicate with the various vehicle systems 140 and/or individual components thereof. For example, returning to FIG. 1, the processor(s) 110 and/or the autonomous driving module(s) 160 can be in communication to send and/or receive information from the various vehicle systems 140 to control the movement, speed, maneuvering, heading, direction, etc. of the vehicle 100. The processor(s) 110, the signal identification system 170, and/or the autonomous driving module(s) 160 may control some or all of these vehicle systems 140 and, thus, may be partially or fully autonomous.

“The processor(s) 110, the signal identification system 170, and/or the autonomous driving module(s) 160 may be operable to control the navigation and/or maneuvering of the vehicle 100 by controlling one or more of the vehicle systems 140 and/or components thereof. For instance, when operating in an autonomous mode, the processor(s) 110, the signal identification system 170, and/or the autonomous driving module(s) 160 can control the direction and/or speed of the vehicle 100. The processor(s) 110, the signal identification system 170, and/or the autonomous driving module(s) 160 can cause the vehicle 100 to accelerate (e.g., by increasing the supply of fuel provided to the engine), decelerate (e.g., by decreasing the supply of fuel to the engine and/or by applying brakes), and/or change direction (e.g., by turning the front wheels). In one embodiment, the signal identification system 170 collects data from the processor(s) 110 and/or the autonomous driving module(s) 160 about control signals that cause the vehicle 100 to accelerate, decelerate, and perform other maneuvers, and/or about why the autonomous driving module(s) 160 induced those maneuvers. As used herein, “cause” or “causing” means to make, force, compel, direct, command, instruct, and/or enable an event or action to occur, or at least be in a state where such event or action may occur, either directly or indirectly.
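
The control flow described above, in which a processor or driving module issues signals that make the vehicle accelerate, decelerate, or change direction, might look roughly like the following sketch. The `ControlCommand` structure, the `decide_command` function, and the simple proportional rule are illustrative assumptions, not the patent's actual interface or control law.

```python
from dataclasses import dataclass

@dataclass
class ControlCommand:
    """Hypothetical control signal sent toward the vehicle systems 140."""
    throttle: float  # 0.0..1.0, fraction of fuel/torque request
    brake: float     # 0.0..1.0, fraction of braking effort
    steering: float  # radians; positive turns the front wheels left

def decide_command(target_speed_mps: float, current_speed_mps: float,
                   heading_error_rad: float) -> ControlCommand:
    """Tiny proportional controller standing in for the decision logic:
    speed error drives throttle/brake, heading error drives steering."""
    speed_error = target_speed_mps - current_speed_mps
    throttle = max(0.0, min(1.0, 0.1 * speed_error))
    brake = max(0.0, min(1.0, -0.2 * speed_error))
    steering = max(-0.5, min(0.5, 0.8 * heading_error_rad))
    return ControlCommand(throttle, brake, steering)

# Example: the vehicle is above its target speed, so braking is requested.
print(decide_command(target_speed_mps=10.0, current_speed_mps=13.0,
                     heading_error_rad=0.05))
```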

The vehicle 100 can include one or more actuators 150. The actuators 150 can be any element or combination of elements operable to modify, adjust, and/or alter one or more of the vehicle systems 140, or components thereof, responsive to receiving signals or other inputs from the processor(s) 110 and/or the autonomous driving module(s) 160. Any suitable actuator can be used. For instance, the one or more actuators 150 can include motors, pneumatic actuators, hydraulic pistons, relays, solenoids, and/or piezoelectric actuators, just to name a few possibilities.

“The vehicle 100 can include one or more modules, at least some of which are described herein. The modules can be implemented as computer-readable program code that, when executed by a processor 110, implements one or more of the various processes described herein. One or more of the modules can be a component of the processor(s) 110, or one or more of the modules can be executed on and/or distributed among other processing systems to which the processor(s) 110 is operatively connected. The modules can include instructions (e.g., program logic) executable by one or more processor(s) 110. Alternatively, or in addition, one or more of the data stores 115 may contain such instructions.”

“In one or more arrangements, one or more of the modules described herein can include artificial or computational intelligence elements, e.g., neural networks, fuzzy logic, or other machine learning algorithms. Further, in one or more arrangements, one or more of the modules can be distributed among a plurality of the modules described herein. In one or more arrangements, two or more of the modules described herein can be combined into a single module.

“The vehicle 100 can include one or more autonomous driving modules 160. The autonomous driving module(s) 160 can be configured to receive data from the sensor system 120 and/or any other type of system capable of capturing information relating to the vehicle 100 and/or the external environment of the vehicle 100. In one or more arrangements, the autonomous driving module(s) 160 can use such data to generate one or more driving scene models. The autonomous driving module(s) 160 can determine the position and velocity of the vehicle 100, and can determine the location of obstacles or other environmental features including traffic signs, trees, shrubs, neighboring vehicles, pedestrians, etc.
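
As a loose sketch of what generating a driving scene model from sensor data could involve, the following combines an ego-state estimate with per-frame object detections into a single structure. Every name below is a hypothetical stand-in, not the patent's data model.

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    """A detected object in the external environment, such as a neighboring
    vehicle, pedestrian, or traffic sign."""
    object_id: int
    category: str       # "vehicle", "pedestrian", "sign", ...
    position_m: tuple    # (x, y) relative to the ego vehicle
    velocity_mps: tuple  # (vx, vy) estimate

@dataclass
class DrivingSceneModel:
    """Hypothetical scene model assembled from sensor inputs."""
    ego_position_m: tuple
    ego_velocity_mps: tuple
    objects: list = field(default_factory=list)

def build_scene(ego_pose, ego_vel, detections) -> DrivingSceneModel:
    """Fuse an ego-state estimate with the current set of detections."""
    return DrivingSceneModel(ego_position_m=ego_pose,
                             ego_velocity_mps=ego_vel,
                             objects=list(detections))

scene = build_scene((0.0, 0.0), (12.0, 0.0),
                    [TrackedObject(1, "vehicle", (18.0, 0.5), (9.0, 0.0))])
print(f"{len(scene.objects)} object(s) in scene")
```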

“The autonomous driving module(s) 160 can be configured to receive and/or determine location information for obstacles within the external environment of the vehicle 100 for use by the processor(s) 110 and/or one or more of the modules described herein. That information can be used to estimate the position and orientation of the vehicle 100, to estimate vehicle position in global coordinates based on signals from a plurality of satellites, or together with any other data and/or signals that could be used to determine the current state of the vehicle 100 or its position with respect to its environment, either for creating a map or for determining the position of the vehicle 100 in respect to map data.

“The autonomous driving module(s) 160, either independently or in combination with the signal identification system 170, can be configured to determine travel path(s), current autonomous driving maneuvers for the vehicle 100, future autonomous driving maneuvers, and/or modifications to current autonomous driving maneuvers based on data acquired by the sensor system 120, driving scene models, and/or data from any other suitable source. A “driving maneuver” is one or more actions that affect the movement of a vehicle. Examples of driving maneuvers include accelerating, decelerating, braking, turning, moving in a lateral direction of the vehicle 100, and changing travel lanes, just to name a few. The autonomous driving module(s) 160 can be configured to implement determined driving maneuvers, and can cause, directly or indirectly, such autonomous driving maneuvers to be implemented. As used herein, “cause” or “causing” means to make, command, instruct, and/or enable an event or action to occur, or at least be in a state where such event or action may occur, either directly or indirectly. The autonomous driving module(s) 160 can also be configured to perform various functions, such as transmitting data to, receiving data from, interfacing with, and/or controlling the vehicle 100 or one or more of its systems (e.g., one or more of the vehicle systems 140). A sketch of how a signal-identification result could influence such a maneuver decision follows.
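
The following toy sketch illustrates how brake and turn-signal states reported for a nearby vehicle might feed a maneuver choice. The decision rule, thresholds, and names are invented for illustration and are not the patent's algorithm.

```python
from enum import Enum

class TurnState(Enum):
    OFF = "off"
    LEFT = "left"
    RIGHT = "right"

def choose_maneuver(lead_vehicle_braking: bool,
                    lead_vehicle_turn: TurnState,
                    gap_m: float) -> str:
    """Toy decision rule: slow down for a braking lead vehicle that is close,
    and add following distance when its turn signal suggests a lane change."""
    if lead_vehicle_braking and gap_m < 30.0:
        return "decelerate"
    if lead_vehicle_turn is not TurnState.OFF and gap_m < 50.0:
        return "increase_following_distance"
    return "maintain_speed"

print(choose_maneuver(lead_vehicle_braking=True,
                      lead_vehicle_turn=TurnState.OFF, gap_m=22.0))
```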

“Detailed embodiments are disclosed herein. However, it is to be understood that the disclosed embodiments are intended only as examples. Specific structural and functional details herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art how to employ the aspects herein in virtually any appropriately detailed structure. Further, the terms and phrases used herein are not intended to be limiting but rather to provide an understandable description of possible implementations. Various embodiments are shown in FIGS. 1-8, but the embodiments are not limited to the illustrated structure or application.

“The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code that comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order shown in the figures. For example, two blocks shown in succession may be executed substantially concurrently, or in the reverse order, depending upon the functionality involved.

“The systems, components, and/or processes described above can be realized in hardware or a combination of hardware and program code. Any kind of processing system or other apparatus adapted for carrying out the methods described herein is suitable. A typical combination of hardware and program code is a processing system with program code that, when loaded and executed, controls the processing system such that it carries out the methods described herein. These systems, components, and/or processes can also be embedded in a computer-readable storage device, such as a computer program product or other data program storage device, readable by a machine and tangibly embodying a program of instructions executable by the machine to perform the methods and processes described herein. These elements can also be embedded in an application product that comprises all the features enabling the implementation of the methods described herein and that, when loaded in a processing system, is able to carry out these methods.

“Furthermore, arrangements described herein may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code stored thereon. Any combination of one or more computer-readable media may be utilized. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. The phrase “computer-readable storage medium” means a non-transitory storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer-readable storage medium include a portable computer diskette, a hard disk drive (HDD), a solid-state drive (SSD), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium is any tangible medium that can contain or store a program for use by, or in connection with, an instruction execution system, apparatus, or device.

“Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireline, optical fiber, cable, radio frequency, or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present arrangements may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the “C” programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer. The remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).

“The terms “a” and “an”, as used herein, are defined as one or more than one. The term “plurality”, as used herein, is defined as two or more than two. The term “another”, as used herein, is defined as at least a second or more. The terms “including” and/or “having”, as used herein, are defined as comprising (i.e., open language). The phrase “at least one of . . . and . . . ”, as used herein, refers to and encompasses any and all possible combinations of one or more of the associated listed items. As an example, the phrase “at least one of A, B, and C” includes A only, B only, C only, or any combination thereof (e.g., AB, AC, or BC).

“Aspects herein can be embodied in other forms without departing from the spirit or essential attributes thereof. Accordingly, reference should be made to the following claims, rather than to the foregoing specification, as indicating the scope hereof.”

Click here to view the patent on Google Patents.

How to Search for Patents

A patent search is the first step toward getting your patent. You can do a Google patent search or a USPTO search. “Patent pending” is the term used for a product covered by a pending patent application; you can search the Public PAIR system to find the application. After the patent office approves your application, you can do a patent number lookup to locate the issued patent, and your product is then patented. You can also use the USPTO search engine, as described below, and you can get help from a patent lawyer. Patents in the United States are granted by the United States Patent and Trademark Office (USPTO), which also reviews trademark applications.

Are you interested in similar patents? These are the steps to follow:

1. Brainstorm terms to describe your invention, based on its purpose, composition, or use.

Write down a brief but precise description of the invention. Avoid generic terms such as “device,” “process,” or “system,” and consider synonyms for the terms you chose initially. Then take note of important technical terms and keywords.

Use the questions below to help you identify keywords or concepts.

  • What is the purpose of the invention? Is it a utilitarian device or an ornamental design?
  • Is the invention a way to make something or perform a function, or is it a product?
  • What is the invention made of? What is its physical composition?
  • What is the invention used for?
  • What technical terms and keywords describe the nature of the invention? A technical dictionary can help you find the right terms.

2. Use these terms to search for relevant Cooperative Patent Classifications with the Classification Text Search Tool. If you cannot find the right classification for your invention, scan through the classification's class schemes (class schedules) and try again. If you do not get any results from the Classification Text Search, consider substituting synonyms for the words you used to describe your invention.

3. Check the CPC Classification Definition to confirm the relevance of the CPC classification you found. If the selected classification title has a blue box with a “D” to its left, the hyperlink leads to a CPC classification definition. CPC classification definitions will help you determine the applicable classification's scope so that you can choose the most relevant one. These definitions may also include search tips and other suggestions that could be helpful for further research.

4. Retrieve patent documents with the CPC classification you found from the Patents Full-Text and Image Database. By focusing on the abstracts and representative drawings, you can narrow down your search to the most relevant patent publications.

5. Review this selection of patent publications in depth for any similarities to your invention, paying close attention to the specification and claims. Additional patents cited by the applicant and the patent examiner can point you to further documents worth reviewing.

6. Retrieve published patent applications that match the CPC classification you chose in Step 3. Use the same search strategy as in Step 4, reviewing the abstracts and representative drawings on each page to narrow your results to the most relevant patent applications. Then examine the remaining published patent applications carefully, paying special attention to the claims and the other drawings.

7. You can find additional US patent publications by keyword searching in the AppFT and PatFT databases, by classification searching of non-US patents as described below, and by using web search engines to search non-patent literature disclosures about inventions. Here are some examples:

  • Add keywords to your search. Keyword searches may turn up documents that were missed or poorly categorized during the classification search in Step 2; US patent examiners, for example, often supplement their classification searches with keyword searches. Favor technical engineering terminology over everyday words.
  • Search for foreign patents using the CPC classification, re-running the search with international patent office search engines such as Espacenet, the European Patent Office's worldwide database of over 130 million patent publications. Other national patent office databases can also be searched.
  • Search non-patent literature. Inventions can be disclosed in many non-patent publications, so it is recommended that you search journals, books, websites, technical catalogs, conference proceedings, and other print and electronic publications.

To review your search, you can hire a registered patent attorney to assist. A preliminary search will help you prepare to discuss your invention and related inventions with a patent attorney, and the attorney will not need to spend as much time or money on patenting basics.

Download patent guide file – Click here