Communications – Ralf Graefe, Florian Geissler, Rainer Makowitz, Intel Corp

Abstract for “Sensor network enhancement mechanisms”

“Systems, methods, and computer-readable media for wireless sensor networks (WSNs), including vehicle-based WSNs, are provided. One or more fixed sensors are included in a road side unit (RSU), each covering a different sector of a defined coverage area. The RSU uses the sensors to collect sensor data representative of objects within the coverage area. It tracks objects (e.g., vehicles) within the coverage area and determines areas that are not covered adequately by the sensors (e.g., “perception gaps”). When the RSU finds an object in or near a perception gap, it requests sensor data from the object’s onboard sensors. The RSU collects the sensor data from the object and then uses that data to supplement its own knowledge (i.e., filling the perception gap). Other embodiments may be disclosed and/or claimed.”

Background for “Sensor network enhancement mechanisms”

“The background description provided herein is intended to generally present the context of the disclosure. Unless otherwise indicated herein, the material described in this section is not prior art to the claims in this application and is not admitted to be prior art by inclusion in this section.

Computer-assisted or (semi-)autonomous driving (CA/AD) vehicles can include various technologies for perception, such as camera feeds or other sensory information. The European Telecommunications Standards Institute (ETSI) publishes an Intelligent Transport Systems (ITS) standard, which includes telematics and various types of communications between vehicles (e.g., vehicle-to-vehicle (V2V)), between vehicles and fixed locations (e.g., vehicle-to-infrastructure (V2I)), between vehicles and networks (e.g., vehicle-to-network (V2N)), between vehicles and handheld devices (e.g., vehicle-to-pedestrian (V2P)), and the like. Dedicated short-range communications (DSRC) and/or cellular vehicle-to-everything (C-V2X) protocols provide communications between vehicles and the roadside infrastructure. Cooperative ITS (C-ITS) could support fully autonomous driving, including wireless short-range communications (ITS-G5) dedicated to automotive ITS and road transportation and traffic telematics. C-ITS could provide connectivity between road participants as well as infrastructure.

“Disclosed embodiments relate to sensor networks, in particular sensor networks for vehicular applications. Many vehicle service providers (e.g., mapping, navigation, and traffic management) are available, as are communication service providers (e.g., C-V2X and DSRC). Sensor data is used to provide accurate, up-to-date service. The Safespot project, for example, has a Local Dynamic Map, which provides real-time information about vehicles and traffic on roads. These services can receive sensor data from both fixed sensor arrays and vehicle-mounted/embedded sensors. Sensor data can become unavailable at times (e.g., due to “occlusions”), which could negatively impact the ability of service providers to create maps or provide other services. The data that the infrastructure stores must be accurate, complete, and current in order for the infrastructure to be reliable.

“In various embodiments, the sensor accuracy of infrastructure-based systems is enhanced by information from the clients (e.g., vehicles) that are being served. Different from current V2X solutions that require constant signaling between infrastructure equipment and user equipment, in various embodiments clients transmit information to the infrastructure equipment only when requested. The embodiments thereby reduce the communication overhead between clients and infrastructure equipment. Embodiments also include multicast and broadcast communication by the infrastructure equipment, which helps to minimize signaling overhead and wireless spectrum congestion.

“In the disclosed embodiments, infrastructure equipment, such as a roadside unit, is communicatively coupled to a sensor array. The sensor array may include one or more sensors that are mounted on the infrastructure equipment, and/or one or several fixed sensors located at various locations within a defined coverage area. The sensor array is used by the infrastructure equipment to collect sensor data representative of objects within the coverage area. The infrastructure equipment can also identify regions that are not covered adequately by the sensor array (e.g., “sensor coverage gaps” or “occlusions”) by identifying gaps in the currently available sensor data (e.g., “perception gaps”). The infrastructure equipment tracks objects, such as vehicles, within the coverage area. The infrastructure equipment requests sensor data from an object’s onboard sensors if it finds that the object is in a perception gap area (or is about to enter one). This sensor data is collected by the infrastructure equipment, which then uses it to supplement its existing knowledge (i.e., filling the perception gaps). Other embodiments can be described and/or claimed.

“I. VEHICLE-TO-EVERYTHING EMBODIMENTS”

“Referring now to FIG. 1, which shows an example environment 60 in which different embodiments of this disclosure can be practiced. Environment 60 is a system that includes sensors, compute units, and wireless communication technology. The infrastructure equipment 61a, 61b is communicatively connected to the sensor arrays 62a, 62b, respectively. Each sensor array 62a, 62b includes one or more sensors that are positioned along a section of the physical coverage area 63, where a “sector” is a section of the physical coverage area that is covered by a single sensor. The sensor arrays 62a, 62b detect one or several objects 64a, 64b that travel within the respective sections of the physical coverage area 63. Wireless communication technology may be used to connect the objects 64a, 64b with the infrastructure equipment 61a, 61b and with each other. The sensor array 62a includes one or more sensors that provide object identification information to the infrastructure equipment 61a, while the sensor array 62b includes one or several sensors that provide object recognition information to the infrastructure equipment 61b (e.g., via radar, ultrasonic, or camera sensors). The infrastructure equipment 61a, 61b may also exchange information regarding the vehicles 64a, 64b they are tracking, and may support collaborative decision-making.

“In this example, the objects 64a, 64b are vehicles (referred to as “vehicles 64a, 64b”) that are traveling on a road within the coverage area 63 (referred to as “road 63”). For illustrative purposes, the following description is provided for deployment scenarios including vehicles in a two-dimensional (2D) freeway/highway/roadway environment wherein the vehicles are automobiles. The embodiments described herein can also be applied to other types of vehicles, such as trucks, buses, motorboats, and motorcycles, as well as to other motorized devices capable of transporting people and goods. The embodiments described herein can further be applied to three-dimensional (3D) deployment scenarios in which some or all of the vehicles are flying objects, such as drones, aircraft, unmanned aerial vehicles (UAVs), and/or other similar motorized devices.

“The vehicles 64a, 64b may be any type of motorized vehicle used for transporting people or goods. Each of these vehicles is equipped with an engine, transmission, axles, wheels, and control systems for driving, parking, passenger comfort, safety, and so on. As used herein, the terms “motor,” “motorized,” etc. refer to devices that convert one form of energy into mechanical energy, and include internal combustion engines, compression combustion engines, and electric motors. The vehicles 64a, 64b of FIG. 1 may represent motor vehicles of different makes, models, trims, etc. The wireless communication technology used by the vehicles 64a, 64b may include V2X communication technology, such as Third Generation Partnership Project (3GPP) cellular V2X (C-V2X) technology, which allows the vehicles 64a, 64b to communicate directly with each other as well as with the infrastructure equipment 61a, 61b. Positioning circuitry is used by the vehicles 64a, 64b to (coarsely) determine their geolocations, which allows the vehicles 64a, 64b to synchronize with the infrastructure equipment 61a, 61b in a secure and reliable manner.

“The infrastructure equipment 61a, 61b can provide environmental sensing services, and in this case, the infrastructure equipment 61a, 61b could provide environmental sensing services for the vehicles 64. These environmental sensing services may be used to map dynamic environments, such as road 63, in real time. Real-time mapping of dynamic environments is used for high-reliability decision-making, such as when the vehicles 64 are CA/AD vehicles. Intelligent Transport Systems (ITS) may use the real-time mapping to create a local dynamic map (LDM), which structures all data necessary for vehicle operation and also gives information about highly dynamic objects, such as the vehicles 64 on the road 63. LDM input can be provided either by user equipment (UEs) equipped with sensors, such as the vehicles 64, or by the fixed sensor arrays 62a, 62b located along the road 63. No matter what source the sensor data comes from, the environment model created using the sensor data must be as accurate and complete as possible to provide reliable real-time mapping services.

The current methods for providing real-time mapping services are based primarily on a complex set of sensors in each of the UEs, as well as a non-deterministic set of V2X protocols, to enhance understanding of the area of interest. For semi-autonomous or autonomous driving, environmental sensing is achieved by combining various types of sensor data, including radar, light detection and ranging (LiDAR), and visual (e.g., image and/or video) data. Differential GNSS is used to improve localization based on GNSS systems, where correction data is provided by fixed stations with known geopositions. These data fusion methods are complex and require large storage resources and high power consumption.

“Some service providers (or application developers) rely only on in-vehicle sensing capabilities to provide real-time mapping services. Real-time mapping that relies only on in-vehicle sensors and computing systems can add significant weight, cost, and energy consumption to each vehicle 64. Moreover, a single vehicle 64 may have a limited view of the coverage area 63, in contrast to environmental sensing systems that use the infrastructure equipment 61a, 61b, discussed infra. As a result, real-time mapping is not always available for autonomous or semi-autonomous driving applications.

Some mapping service providers try to combine the sensing capabilities of multiple vehicles 64 by having the vehicles 64 exchange in-vehicle sensor data. V2X technology, for example, provides lower-level network protocols to allow direct communication between vehicles 64 (e.g., DSRC links, sidelink communications over the PC5 interface of C-V2X systems) and with the infrastructure equipment 61a, 61b, without specifying higher-level application logic. Multicast or broadcast protocols are also available in cellular communication systems (e.g., evolved Multimedia Broadcast and Multicast Service (eMBMS)) to allow one-to-many communication. But broadcast/multicast protocols and V2X do not have acknowledgement mechanisms, which means that it is impossible to guarantee the timeliness or completeness of the messages received. Real-time mapping services that rely on V2X or broadcast/multicast technologies to share sensor data among vehicles 64 therefore cannot meet the accuracy and completeness requirements of most autonomous and semi-autonomous driving applications.

The ETSI Intelligent Transport Systems (ITS) technology has disadvantages similar to those of V2X and other broadcast/multicast technologies. ITS is a system that supports the transportation of goods and people with information and communication technologies, and is used to safely and efficiently use transport infrastructure and transport means (e.g., cars, trains, planes, ships, and other transport vehicles). ITS infrastructure supports traffic-related events via Decentralized Environmental Notification Messages (DENMs) and Cooperative Awareness Messages (CAMs). CAMs are messages that are exchanged within the ITS network among ITS stations (ITS-Ss) to create and maintain mutual awareness and to support cooperative performance of vehicles 64. DENMs contain information about road hazards or unusual traffic conditions, such as the type and location of the road danger and/or any abnormal conditions. ITS also contains a Local Dynamic Map (LDM), which is a data store located within an ITS-S that contains information relevant to the operation and safety of ITS applications. The LDM is a repository for information from facilities (e.g., Cooperative Awareness (CA) and Decentralized Environmental Notification (DEN) services) as well as for applications that need information on moving objects (e.g., nearby vehicles) or stationary objects (e.g., traffic signs). High-frequency data/information regarding the location, speed, and direction of each vehicle 64 is included in both the DEN and CA services. However, ETSI ITS works on a best-effort basis and cannot guarantee that all messages will be received on time, and not all vehicles 64 are equipped with ITS-based V2X communication technology to send these messages. Because CAMs/DENMs are sent from vehicles 64 of different makes and models, it is impossible to guarantee the completeness and timeliness of their receipt, and the source, time, and location information of CAMs/DENMs can be ambiguous.
ITS currently does not have a coordinating authority to ensure the accuracy, reliability, and timeliness of information. Real-time mapping services that rely on ITS technologies to share sensor data among vehicles 64 are therefore not able to meet the requirements of most autonomous and semi-autonomous driving applications.

Some service providers use only infrastructure equipment 61a, 61b and fixed sensor arrays 62a, 62b to provide real-time mapping services. Infrastructure-only systems that provide real-time mapping services cannot meet the accuracy and completeness requirements of most autonomous or semi-autonomous driving applications. This is because infrastructure-only solutions are susceptible to occlusions in their sensed environment, such as objects placed in the line of sight of a sensor in a sensor array 62a, 62b. This is especially true given the practical limitations on the deployment of individual sensing elements at the area of concern.

“According to different embodiments, real-time mapping services can be provided by infrastructure equipment 61a, 61b, which monitors the objects 64a, 64b using the individual sensors within the sensor arrays 62a, 62b. Each infrastructure equipment 61a, 61b includes a map processing subsystem (e.g., map processing subsystem 309 of FIG. 3), which uses sensor data to determine the position, speed, and other properties of the moving objects 64a, 64b within the coverage area, and generates a dynamic map of the coverage area 63 in real time. The infrastructure equipment 61a, 61b is communicatively connected to the sensor arrays 62a, 62b, which can detect the objects 64a, 64b within the coverage area 63. The map processing subsystem (e.g., map processing subsystem 309 of FIG. 3) includes an object detector (e.g., object detector 305 of FIG. 3) to perform various logic operations to detect objects 64 within the coverage area 63 based on the sensor data. The map processing subsystem also includes a data fuser (e.g., data fuser 352 of FIG. 3) to perform various logic operations to fuse the collected sensor data together. Any suitable technique can be used to fuse the sensor data (e.g., Kalman filters, Gaussian mixture models, etc.). Time synchronization may also be used to fuse the sensor data using information about the location, speed, and size of each object 64 as identified by the object detector (e.g., object detector 305 of FIG. 3). The map processing subsystem (e.g., map processing subsystem 309 of FIG. 3) can further perform various logic operations to generate, using any suitable technology, an overall map covering the coverage area 63. Data about the moving objects 64 can be extracted and combined into one map that includes all moving objects 64 within the coverage area 63.
This map includes all objects 64 that are detected by the sensors communicatively coupled with the infrastructure equipment 61, and may be represented by an overall map of the area 63. In some embodiments, an object detector (e.g., object detector 305 of FIG. 3) may use the relative movement of the objects 64 and the sensors of a sensor array 62 to detect sensor blind spots, which may be due to the changing viewing angles of the objects 64 as they pass by stationary sensors. Some embodiments combine different types of sensors, sensor positions, and sensing directions to provide as much coverage as practical. The coverage area 63 may include stationary sensors that can detect the moving objects 64, ensuring that there are no blind spots under normal traffic conditions. This is possible because most constraints on vehicle-mounted sensors (e.g., weight constraints, space constraints, power consumption constraints, etc.) do not apply to sensors within the stationary sensor arrays 62 located at or near the coverage area 63. Even so, these proactive measures may not be enough to eliminate all occlusions from the coverage area 63, for example, objects placed in the line of sight of sensors in a sensor array 62.
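The fusion step described above can be sketched in code. The following is a minimal illustrative sketch in Python, not the disclosed implementation: the greedy proximity merge, the `Detection` shape, and the `merge_radius` threshold are all assumptions standing in for whatever fusion technique (e.g., Kalman filtering) an actual embodiment would use.

```python
# Hypothetical sketch: detections of the same object reported by overlapping
# sensors of a sensor array are grouped by spatial proximity and merged into
# one entry of the overall map of the coverage area. All names and thresholds
# are illustrative assumptions, not details from the disclosure.
from dataclasses import dataclass


@dataclass
class Detection:
    sensor_id: str
    x: float       # position in metres within the coverage area
    y: float
    speed: float   # metres per second


def fuse_detections(detections, merge_radius=2.0):
    """Greedily merge detections closer than merge_radius into one object."""
    fused = []
    for det in detections:
        for obj in fused:
            if (obj["x"] - det.x) ** 2 + (obj["y"] - det.y) ** 2 <= merge_radius ** 2:
                # Running average of position/speed over contributing sensors.
                n = obj["n"]
                obj["x"] = (obj["x"] * n + det.x) / (n + 1)
                obj["y"] = (obj["y"] * n + det.y) / (n + 1)
                obj["speed"] = (obj["speed"] * n + det.speed) / (n + 1)
                obj["n"] = n + 1
                break
        else:
            fused.append({"x": det.x, "y": det.y, "speed": det.speed, "n": 1})
    return fused
```

For example, two detections of the same vehicle one metre apart from sensors with overlapping sectors would collapse into a single averaged map entry, while a distant vehicle stays a separate entry.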

“In some embodiments, the infrastructure equipment 61a, 61b also includes wireless communication circuitry (not illustrated in FIG. 1), which is used for obtaining sensor data from the individual objects 64a, 64b and for providing real-time mapping data to the objects 64a, 64b. In particular, properties of the objects 64a, 64b under observation are made available as a concise and time-synchronized map, which the objects 64a, 64b can use to aid their trajectory planning or for other applications/services.”

“According to different embodiments, the map processing circuitry detects gaps within the coverage area 63 (referred to as “perception gaps”) based on the sensor data. The map processing circuitry also analyzes acknowledgements from selected objects 64a, 64b within the coverage area 63. The map processing circuitry can augment and verify the sensor data from the fixed sensors of the sensor arrays 62a, 62b by requesting position data and sensor information from selected moving objects 64a, 64b within the observed area 63. Before requests for sensor or position data are sent, the objects 64a, 64b are identified by tracking them using the sensors of the sensor arrays 62a, 62b, so that the responses from the objects 64a, 64b can be mapped to geolocations within the coverage area 63. This allows the infrastructure equipment 61a, 61b to request sensor data or position information from localized objects 64a, 64b, which helps to reduce spectrum crowding while keeping signaling overhead to a minimum. These and other aspects of the embodiments of the present disclosure are further described in the following.
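The gap detection and object selection just described can be sketched as follows. This is an illustrative Python sketch only, assuming a simple grid representation of the coverage area; the function names, the cell encoding as `(x, y)` tuples, and the one-cell neighbourhood radius are all hypothetical.

```python
def find_perception_gaps(covered_cells, all_cells):
    """Cells of the coverage-area grid with no current sensor coverage."""
    return sorted(set(all_cells) - set(covered_cells))


def objects_to_query(objects, gaps, near=1):
    """Select tracked objects in, or adjacent to, a perception gap cell.

    objects maps an object ID to its current grid cell; only objects whose
    neighbourhood touches a gap are asked for onboard sensor data, which
    keeps signaling overhead and spectrum use to a minimum.
    """
    gap_set = set(gaps)
    selected = []
    for obj_id, (cx, cy) in objects.items():
        neighbourhood = {(cx + dx, cy + dy)
                         for dx in range(-near, near + 1)
                         for dy in range(-near, near + 1)}
        if neighbourhood & gap_set:
            selected.append(obj_id)
    return selected
```

On a 3x3 grid with one uncovered corner cell, only a vehicle in a cell adjacent to that corner would be selected for a sensor-data request.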

Referring now to FIG. 2, which illustrates an overview of an environment 200 in which the sensor network technology of the present disclosure can be incorporated and used. The illustrated embodiment includes a number of vehicles 64 (including the vehicles 64a and 64b of FIG. 1), infrastructure equipment 61a, 61b, a MEC host 257, a (R)AN node 256, a network 258, and one or more servers 260.

“The environment 200 could be considered a wireless sensor network (WSN), in which the entities within the environment 200 might be considered “network nodes” or “nodes” that communicate in a multi-hop fashion among themselves. The term “hop” may refer to a single node or intermediary device through which data packets traverse a path between a source device and a destination device. Intermediate nodes (i.e., nodes that are located between a source device and a destination device along a path) forward packets to a next node in the path, and in some cases, may modify or repackage the packet contents so that data from a source node can be combined/aggregated/compressed on the way to its final destination. As shown by FIG. 2, the architecture of environment 200 is a decentralized V2X network that includes vehicles 64 with one or more network interfaces, while the infrastructure equipment 61a and 61b act as roadside units (RSUs). As used herein, the terms “vehicle-to-everything” and “V2X” are interchangeable and may refer to any communication involving a vehicle as a source or destination of a message, and may also encompass or be equivalent to vehicle-to-vehicle communications (V2V), vehicle-to-infrastructure communications (V2I), vehicle-to-network communications (V2N), vehicle-to-pedestrian communications (V2P), enhanced V2X communications (eV2X), or the like. V2X applications can make use of “cooperative awareness” to provide more intelligent services for end-users. The vehicles 64, radio access nodes, pedestrian UEs, and other devices may gather information about their environment (e.g., from nearby sensors or vehicles) and process that data to create and share more intelligent services, such as autonomous driving and cooperative collision warning. These V2X cooperative awareness mechanisms are similar to the ITS services discussed previously.
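The multi-hop forwarding with in-path aggregation described above can be illustrated with a small sketch. This is a toy Python model under assumed names (`relay`, a `next_hop` routing table, per-node `readings`), not anything specified by the disclosure.

```python
def relay(source, sink, next_hop, readings):
    """Walk a next_hop routing table from source toward the sink,
    aggregating the readings held at each intermediate node into the
    packet along the way, and counting the hops traversed."""
    node = source
    packet = list(readings.get(source, []))
    hops = 0
    while node != sink:
        node = next_hop[node]          # forward one hop along the path
        hops += 1
        if node != sink:
            # An intermediate node combines its own data into the packet.
            packet.extend(readings.get(node, []))
    return packet, hops
```

With a path A → B → C (C being the sink), the packet arriving at C carries the readings of A and B, having traversed two hops.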

The vehicles 64 shown by FIG. 2 may be identical or similar to the vehicles 64a, 64b previously discussed, and may be collectively referred to as “vehicle 64” or “vehicles 64.” Each of the vehicles 64 may contain a vehicular user equipment (vUE) system 201, one or more sensors 220, and one or more driving control units (DCUs) 220. The vUE system 201 is a computing device or system mounted on, embedded in, or otherwise integrated with a vehicle 64. The vUE system 201 includes a number of user or client subsystems or applications, such as an in-vehicle infotainment (IVI) system, in-car entertainment (ICE) devices, an Instrument Cluster (IC), a head-up display (HUD) system, onboard diagnostic (OBD) systems, dashtop mobile equipment (DME), mobile data terminals (MDTs), a navigation subsystem/application, a vehicle status subsystem/application, and/or the like. The term “user equipment” or “UE” may refer to any type of wireless or wired device, or any computing device that includes a communication interface, such as the communication technology 250. Furthermore, the vUE system 201 and/or the communication technology 250 can be called a “vehicle ITS-S,” or simply an “ITS-S,” when ITS technology is being used.

The DCUs 220 are hardware elements that control various subsystems of the vehicles 64. These elements include electronic/engine control units (ECUs), electronic/engine control modules (ECMs), embedded systems, microcontrollers, control modules, engine management systems (EMS), and the like. The sensors 220 can provide sensor data to the DCUs 220 and/or other vehicle subsystems to enable the DCUs 220 and/or the one or more other subsystems of the vehicles 64 to control their respective systems. The sensors 220 can sense magnetic, thermal, infrared, and/or radar signals, and may have other similar sensing capabilities.

“The communication technology 250 can connect, for instance communicatively couple, the vehicles 64 with one of several access networks (ANs) or radio access networks ((R)ANs). The (R)ANs may include one or more (R)AN nodes, such as the infrastructure equipment 61a, 61b and the (R)AN node 256 shown by FIG. 2, which enable connections to corresponding networks. As used herein, the terms “access node,” “access point,” and the like may refer to network elements or equipment that provide the radio baseband functions and/or wire-based functions for data and/or voice connectivity between a network and one or more users. As used herein, the term “network element” may refer to a physical computing device that is part of a wired or wireless communication network, and may also describe virtual machines. The AN nodes may also be referred to as base stations (BSs), next generation NodeBs (gNBs), RAN nodes, NodeBs, Road Side Units (RSUs), Transmission Reception Points (TRxPs), and so forth. The ANs can perform various radio network controller (RNC) functions, such as radio bearer management, radio resource management, data packet scheduling, mobility management, and uplink/downlink dynamic radio resource management. An example implementation of the ANs is shown by FIG. 2.”

As shown by FIG. 2, the infrastructure equipment 61a and 61b are roadside ITS-Ss or roadside units (RSUs), while the (R)AN node 256 represents a cellular base station. The term “Road Side Unit” or “RSU” may refer to any transportation infrastructure entity implemented in a gNB/eNB/TRxP/RAN node or a stationary (or relatively stationary) UE. The term “roadside ITS station” refers to an ITS sub-system in the context of roadside ITS equipment. The infrastructure equipment 61a, 61b is located at the roadside and provides transport-based services as well as network connectivity services to passing vehicles 64. Each infrastructure equipment 61a, 61b includes a computing system communicatively connected with individual sensors 262 via interface circuitry or communication circuitry. In ITS-based embodiments, the interface circuitry or communication circuitry of the infrastructure equipment 61a, 61b can be a roadside equipment gateway, which is a gateway to specific roadside equipment (e.g., the sensor arrays 62a, 62b, traffic lights, barriers, electronic signage, etc.). Through this gateway, the infrastructure equipment 61a, 61b can obtain sensor data as well as other data (e.g., traffic regulation data, electronic sign data, etc.). These embodiments may use a well-known communication standard for communication between the infrastructure equipment 61a, 61b and the roadside equipment (e.g., DIASER or the like). The infrastructure equipment 61a, 61b may also include internal data storage circuitry to store coverage area 63 map geometry and related data, traffic statistics, media, as well as applications/software to sense and control on-going vehicular and pedestrian traffic.”

“The interface circuitry connects the infrastructure equipment 61a, 61b with the individual sensors 262 within the sensor arrays 62a, 62b, where each sensor 262 covers a specific sector of the physical coverage area. The individual sensors 262 can include different sensing capabilities, such as image, video, radar, LiDAR, ambient light, sound, and others. In certain embodiments, consecutive infrastructure equipment 61a, 61b may be deployed so that the sectors of the physical coverage area 63 overlap, which may enable a continuous and substantially complete map of the coverage area to be generated. The interface circuitry collects sensor data from the individual sensors 262, which is representative of the sectors covered by the individual sensors 262 as well as the objects 64 moving within those sectors. The coverage area 63 used for monitoring/tracking activity may be limited by the sensing range and by observation obstacles, such as buildings, while other features, such as roads and geographical features, may limit or even prescribe where the objects 64 can move. The sensor data can indicate or represent, among other things, the location, direction, and speed or velocity of the objects 64. The computing system of the RSE 61 uses the sensor data to provide real-time mapping services, which may include computing or generating a dynamic map of the coverage area 63, including representations of the dynamic objects 64 and their movements. The individual objects 64 may receive the dynamic map or the data used to generate it.

“In some embodiments, the computing system of the infrastructure equipment 61a, 61b logically divides the observation area 63 into individual sectors, or grid cells. Two-dimensional (2D) cells can be used if the observation area 63 is a 2D area or a single plane (e.g., a roadway), while three-dimensional (3D) cubes can be used if the coverage area 63 contains multiple planes (e.g., overlapping highway intersections and bridges). In some embodiments, each grid cell has the same dimensions and is defined by absolute geolocation coordinates. The computing system of the infrastructure equipment 61a, 61b calculates a grid-based environment model, which is overlaid on top of the observed coverage area. The grid-based environment model allows the computing system of the infrastructure equipment 61a, 61b to target specific objects 64 in particular grid cells in order to request data from those objects 64.
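The mapping from an absolute geolocation to a grid cell of such a model can be sketched as follows. This is a simplified equirectangular approximation in Python; the origin coordinates, the 5 m cell size, and the function name `cell_for` are illustrative assumptions, and a real deployment would use a proper geodetic projection.

```python
import math

# Assumed parameters of the grid overlay (purely illustrative values).
GRID_ORIGIN = (48.1370, 11.5750)   # south-west corner of coverage area (lat, lon)
CELL_SIZE_M = 5.0                  # uniform cell edge length in metres
M_PER_DEG_LAT = 111_320.0          # metres per degree of latitude (approx.)


def cell_for(lat, lon):
    """Map an absolute geolocation to the (row, col) grid cell of the
    grid-based environment model overlaid on the coverage area."""
    lat0, lon0 = GRID_ORIGIN
    # Local flat-earth approximation: metres north/east of the grid origin.
    north_m = (lat - lat0) * M_PER_DEG_LAT
    east_m = (lon - lon0) * M_PER_DEG_LAT * math.cos(math.radians(lat0))
    return int(north_m // CELL_SIZE_M), int(east_m // CELL_SIZE_M)
```

A point at the grid origin lands in cell (0, 0); a point roughly twelve metres further north lands in row 2 of the same column.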

“The real-time mapping services of the embodiments detect occlusions in the observed/sensed environment (e.g., coverage area 63) and request sensor data from selected vehicles 64. In these embodiments, a unique identifier (ID) is assigned to each object 64 by the infrastructure equipment 61a, 61b during a handshake procedure (see, e.g., FIG. X2). The unique ID assigned during the initial handshake procedure is used to identify any object 64 at any time. If an object 64 is temporarily occluded, the infrastructure equipment 61a, 61b can perform the handshake procedure again. By knowing the unique ID, direction, speed, and location of each object 64, the infrastructure equipment 61a, 61b can request sensor information from specific objects 64.
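The bookkeeping behind this ID-based addressing can be sketched as a small registry. The following Python sketch is a hypothetical illustration: the class name, the `obj-N` ID format, and the stored state fields are assumptions, not the disclosed handshake protocol.

```python
import itertools


class ObjectRegistry:
    """Illustrative sketch of handshake bookkeeping: assign a unique ID to
    each newly observed object and keep its last tracked state, so that a
    later sensor-data request can be addressed to that ID even after the
    object was temporarily occluded."""

    def __init__(self):
        self._ids = itertools.count(1)   # monotonically increasing ID source
        self._state = {}

    def handshake(self):
        """Assign and return a fresh unique ID for a newly observed object."""
        obj_id = f"obj-{next(self._ids)}"
        self._state[obj_id] = None       # no tracked state yet
        return obj_id

    def track(self, obj_id, position, speed, heading):
        """Record the latest observed state for an already registered object."""
        self._state[obj_id] = {"position": position, "speed": speed,
                               "heading": heading}

    def request_target(self, obj_id):
        """Return the last known state used to address a sensor-data request."""
        return self._state.get(obj_id)
```

The infrastructure equipment would call `handshake()` once per newly detected object, update the state on every tracking cycle, and use `request_target()` to direct a sensor-data request to a specific, localized object.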

“The communication circuitry of the infrastructure equipment 61 may use the 5.9 GHz DSRC band to provide low-latency communications for high-speed events, such as traffic warnings and crash avoidance. The communication circuitry of the infrastructure equipment 61 can also provide a WiFi hotspot in the 2.4 GHz band and/or connectivity to one or several cellular networks for uplink and downlink communications. The computing system, along with some or all of the communication circuitry, may be housed in a weatherproof enclosure suitable for outdoor use, and may include a network controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller and/or a backhaul network. The communication circuitry of the infrastructure equipment 61 can be used to broadcast V2X messages to the objects 64 and to other objects, such as pedestrian UEs (not illustrated by FIG. 2). Broadcasting can be made possible by a suitable broadcast/multicast mechanism, such as evolved multimedia broadcast multicast services (eMBMS) for LTE. These embodiments may also include access to multiple functionalities, such as a local gateway (LGW), a V2X application server (V2X-AS) for LTE, a broadcast multicast service center (BM-SC), and a multimedia broadcast multicast service gateway (MBMS-GW). In some cases, the infrastructure equipment 61 can also include a traffic offloading function (TOF), which allows the LGW, V2X-AS, BM-SC, MBMS-GW, and/or other computational tasks to be transferred to a local MEC host 257.

“In the illustrative embodiment, the (R)AN node 256 is a cellular base station. The RAN node 256 could be a next-generation (NG) RAN node operating within an NR or 5G system (e.g., a next-generation NodeB (gNB)), an Evolved UMTS Terrestrial Radio Access Network (E-UTRAN) node, a legacy RAN node such as a UMTS Terrestrial Radio Access Network (UTRAN) or GERAN node, a WiMAX RAN node, or another cellular base station. The RAN node 256 can be implemented as one or more dedicated physical devices, such as a macrocell base station or a low-power (LP) base station providing femtocells, picocells, or the like. In other embodiments, the RAN node 256 may be implemented as one or more software entities running on server computers as part of a virtual network, which may be referred to as a cloud RAN (CRAN), a virtual baseband unit pool, and/or the like. In other embodiments, the RAN node 256 may represent individual gNB distributed units (DUs) connected via an F1 interface (not illustrated).

“Still referring to FIG. 2, the Multi-access Edge Computing (MEC) host 257 (also known as a “Mobile Edge Computing host”) is a virtual or physical computing system that hosts multiple MEC applications and provides MEC services to them. MEC allows content providers and application developers to access cloud computing capabilities and an IT service environment at the edge of the network. MEC is a network architecture that places cloud computing capabilities and computing services at the network edge, allowing applications to run and perform related processing tasks close to network subscribers (also known as “edge users”) and the like. This can reduce network congestion and may result in better application performance.

“Edge servers are the physical devices that operate or implement the MEC host 257, which may be implemented as one or more virtual machines (VMs) or the like. The edge servers can include or be a virtualization infrastructure that provides a virtualized computing environment for the MEC host 257, and the MEC applications can run as VMs on top of this virtualized infrastructure. FIG. 2 shows the MEC host 257 and the RAN node 256 co-located. This implementation can be referred to as a small cell cloud (SCC) when the RAN node 256 is a small cell base station (e.g., a picocell or femtocell) or a WiFi AP, or as a mobile micro cloud (MCC) when the RAN node 256 is a macrocell base station (e.g., an eNB or gNB). The MEC host 257 can be deployed in many arrangements other than the one shown in FIG. 2. In a first example, the MEC host 257 could be operated by or co-located with a radio network controller (RNC), which may be applicable to legacy network deployments such as 3G networks. In a second example, the MEC host 257 can be deployed at cell aggregation sites or multi-RAT aggregation points, which can be located either within an enterprise or in public coverage areas. In a third example, the MEC host 257 can be deployed at the edge of the cellular core network. These implementations can be used in follow-me clouds (FMC), in which cloud services running at distributed data centers follow the CA/AD vehicles 64 as they roam across the network.

MEC can be used in V2X situations for advanced driving assistance applications, including real-time situational awareness, see-through sensor sharing, and high-definition local mapping, including the dynamic real-time mapping services discussed herein. The MEC host 257 hosts MEC applications running various workloads, such as machine learning, augmented reality, virtual reality, artificial intelligence, and data analytics, and can also enforce privacy for data streams destined for the cloud. MEC applications can share data either directly or through an MEC V2X application programming interface (API).

According to various embodiments, the MEC host 257 can be used for real-time mapping application computation offloading, wherein the MEC host 257 executes computationally intensive tasks while the vUE systems 201 of the vehicles 64 perform less computationally intensive functions. The vehicles 64 may transmit traffic data and sensor data to the MEC host 257 over the communication technology 250. In some embodiments, the MEC host 257 may then aggregate these data and distribute them to the vehicles 64 via the RAN node 256 and the infrastructure equipment 61a, 61b. The MEC host 257 is able to take on compute-intensive tasks because it has greater performance capabilities than the vUE systems 201 of the vehicles 64. MEC can also be used to offload computation-hungry applications or portions thereof, intermediate data processing applications or portions thereof, and moderate data processing applications or portions thereof. Computation-hungry applications have high data processing and large data transfer requirements; examples include graphics/video processing/rendering, browsers, artificial/augmented reality, cloud-based gaming, three-dimensional (3D) gaming, and other applications. Applications with intermediate data processing requirements have large data processing and/or high data transfer requirements that are less stringent than those of computation-hungry applications; these include, for instance, sensor data cleansing (e.g., pre-processing and normalization), video analysis, and value-added services (e.g., translation, log analytics, and the like). Moderate data processing applications have lower data processing and/or data transfer requirements than intermediate data processing applications; an example is antivirus software. Examples of compute-intensive tasks in real-time mapping services include tasks for sensor data collection, sensor data fusion, and map generation.
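The three application classes above could be captured as a simple classification rule. The following is a minimal, hypothetical sketch, not part of the disclosure; the threshold values and category names are illustrative assumptions only.

```python
def classify_app(processing_load: float, transfer_load: float) -> str:
    """Classify an application for MEC offloading by its (normalized 0..1)
    data-processing and data-transfer demands. Thresholds are illustrative."""
    if processing_load >= 0.8 and transfer_load >= 0.8:
        return "computation-hungry"   # e.g., AR/VR, 3D gaming, video rendering
    if processing_load >= 0.5 or transfer_load >= 0.5:
        return "intermediate"         # e.g., sensor data cleansing, video analysis
    return "moderate"                 # e.g., antivirus software


# A computation-hungry workload would be offloaded to the MEC host 257
# in preference to running on a vUE system 201.
print(classify_app(0.9, 0.9))
```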

To perform computation offloading, a new instance of an application is created at the MEC host 257 in response to one or more requests from vehicles 64. The MEC host 257 may be selected by a MEC system (e.g., included in the server(s) 260) to start the instance of the application based on a set of requirements (e.g., latency, processing resources, storage resources, network resources, location, network capability, security conditions/capabilities, etc.) that the application must meet. The user requests are fulfilled by establishing connectivity between the vehicles 64 and the application instance at the MEC host 257 using the communication technology 250. When all users have disconnected from the particular instance of the application, the application instance is terminated.”
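The lifecycle described above can be sketched as follows. This is a hypothetical illustration; the class names, requirement fields, and selection policy are assumptions, not the claimed implementation.

```python
class MecHost:
    """A candidate host with a few of the requirement dimensions named above."""
    def __init__(self, name, latency_ms, cpu_cores, storage_gb):
        self.name, self.latency_ms = name, latency_ms
        self.cpu_cores, self.storage_gb = cpu_cores, storage_gb


def select_host(hosts, max_latency_ms, min_cpu, min_storage_gb):
    """Return the first host meeting the requirement set, or None."""
    for h in hosts:
        if (h.latency_ms <= max_latency_ms and h.cpu_cores >= min_cpu
                and h.storage_gb >= min_storage_gb):
            return h
    return None


class AppInstance:
    """An application instance terminated once the last user disconnects."""
    def __init__(self, host):
        self.host, self.users, self.running = host, set(), True

    def connect(self, user_id):
        self.users.add(user_id)

    def disconnect(self, user_id):
        self.users.discard(user_id)
        if not self.users:       # all users gone -> terminate the instance
            self.running = False
```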

“Still referring to FIG. 2, the network 258 comprises computers, network connections among the computers, and software routines that enable communication between the computers over the network connections. In this regard, the network 258 comprises one or more network elements that may include one or more processors, communications systems (e.g., including network interface controllers, one or more transmitters/receivers connected to one or more antennas, etc.), and computer-readable media. These network elements may include wireless access points (WAPs), home/business servers (with or without radio frequency communications circuitry), routers, switches, hubs, radio beacons, base stations, picocell base stations, and/or other similar devices. The connection to the network 258 can be made via either a wired or a wireless connection, using any of the communication protocols discussed infra. As used herein, a wired or wireless communication protocol may refer to a set of standardized rules or instructions implemented by a communication device/system to communicate with other devices, including instructions for packetizing/depacketizing data, modulating/demodulating signals, implementation of protocol stacks, and the like. More than one network can be involved in a communication session between the illustrated devices. The computers may need to execute routines that enable connection to the network 258, for example, following the OSI model of computer networking or its equivalent in a wireless (e.g., cellular) network. The network 258 can be used for relatively long-range communication, such as between the one or more servers 260 and the one or more vehicles 64. The network 258 could represent the Internet, one or more cellular networks, local area networks, or wide area networks, and may also include proprietary and/or enterprise networks, Transmission Control Protocol (TCP)/Internet Protocol (IP)-based networks, and combinations thereof. The network 258 could be associated with a network operator who owns or controls equipment necessary for providing network-related services, such as one or more base stations or access points, and one or more servers for routing digital data or telephone calls (for instance, a backbone or core network).

“Still referring to FIG. 2, the one or more server(s) 260 comprise one or more physical and/or virtualized systems for providing functionality or services to one or many clients (e.g., the vehicles 64) over a network (e.g., the network 258). The server(s) 260 could include various computer devices with a rack computing architecture, tower computing architecture, blade computing architecture, and/or the like. The server(s) 260 could be a cluster of servers or a server farm. The server(s) 260 could also be connected to, or associated with, one or more data storage devices (not illustrated). The server(s) 260 could also include an operating system (OS) that provides instructions for the general administration and operation of the individual server computers, and may include a computer-readable medium storing instructions that, when executed by the processors, allow the servers to perform their intended functions. Persons of ordinary skill in the art can readily implement suitable implementations of the OS and the general functionality of the servers.

“Generally, the server(s) 260 offer services or applications that use IP/network resources. For example, the server(s) 260 might provide traffic management services, cloud analytics, content streaming services, immersive gaming experiences, and microblogging and/or social networking services. The server(s) 260 can also provide services such as initiating and controlling software or firmware updates for individual components and applications of the vehicles 64. The server(s) 260 may also be configured to support communication services, such as Voice-over-Internet Protocol (VoIP) sessions, PTT sessions, and group communication sessions, for the vehicles 64 over the network 258. The server(s) 260 could also be configured to act as a central ITS-S that provides centralized ITS applications. In these embodiments, the central ITS-S may play the role of a traffic operator, road operator, and/or services provider, and might also need to be connected with other backend systems through a network such as the network 258. Specific instances of the central ITS-S might contain groupings of applications or facilities entities to meet deployment and performance requirements.

Referring now to FIG. 3, which illustrates a component view, according to various embodiments, of infrastructure equipment 61 that includes a Real-Time Mapping Service (RTMS) 300. Depending on the embodiment, the RTMS 300 can be included in the infrastructure equipment 61a or 61b (hereinafter “infrastructure equipment 61”), the RAN node 256, or any other suitable device or system. Other embodiments allow for some or all aspects of the RTMS 300 to be hosted by a cloud computing service, which can interact with the infrastructure equipment 61, deployed RTMS appliances, or gateways. The RTMS 300 comprises a main system controller 302, an object detector 305, a handshake subsystem 306, a messaging subsystem 307, a map processing subsystem 309, a mapping database (DB) 320, and an object DB 330. The map processing subsystem 309 contains a map segmenter 346, a data fuser 352, and a map generator 386. The infrastructure equipment 61 also includes a sensor interface subsystem 310, an inter-object communication subsystem 312, and a remote communication subsystem 314. Other embodiments of the RTMS 300 or the infrastructure equipment 61 could include subsystems additional to those shown in FIG. 3.”

“The main system controller 302 is designed to manage the RTMS 300, including scheduling tasks for execution, managing memory/storage resource allocations, routing inputs/outputs between entities, and the like. The main system controller 302 can schedule tasks using a suitable scheduling algorithm and/or implement a suitable message passing scheme to allocate resources. The main system controller 302 can operate an operating system (OS) to allocate computing, memory/storage, and networking/signaling resources. The main system controller 302 may be configured to facilitate intra-subsystem communication among the various subsystems of the RTMS 300 using appropriate drivers, libraries, application programming interfaces (APIs), middleware, software connectors, software glue, and the like. The main system controller 302 is also configured to control communication of application layer (or facilities layer) information with the objects 64, such as sending/receiving requests/instructions and data (e.g., ACKs, position information, and sensor data), including functionality for encoding/decoding such messages.”

“Continuing with FIG. 3, the object detector 305 is designed to detect, monitor, and track object(s) 64 within the coverage area 63 based on received sensor data. The object detector 305 can receive sensor data from the sensors 262 via the sensor interface subsystem 310. In some embodiments, the object detector 305 may also receive sensor data collected by other infrastructure equipment via the remote communication subsystem 314. The object detector 305 can also be configured to receive sensor data from the observed objects 64 via the inter-object communication subsystem 312. As mentioned previously, the definition of the coverage area 63 can vary from one embodiment to the next; it is dependent on the application and limited by the capabilities of the sensors 262 and by sensor data limitations. The object detector 305 tracks the objects 64 and determines vector information about the objects 64, such as travel direction, speed, velocity, acceleration, etc. The object detector 305 can use one or more object tracking and/or computer vision techniques to track the objects 64, such as a Kalman filter, a deep learning object detection technique (e.g., a fully convolutional neural network, a region proposal convolutional neural network (R-CNN), a single-shot multibox detector, a “you only look once” (YOLO) algorithm, etc.), and/or the like. Some of these techniques use identifiers (also known as “inherent IDs” or the like) to track objects 64 detected in video or other sensor data. In these embodiments, the object detector 305 may store these inherent IDs in the object DB 330, where they can be linked with the unique IDs that are assigned to the detected objects 64, as discussed infra.
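As one building block of such a tracker, a Kalman filter maintains a position/velocity estimate per tracked object and per axis. The following is a deliberately simplified, scalar-variance sketch (not the disclosed implementation); the noise constants are illustrative assumptions.

```python
class Kalman1D:
    """Simplified constant-velocity Kalman filter for one coordinate axis.
    A full tracker would run one such filter pair (x, y) per object 64."""

    def __init__(self, pos, vel, p=1.0, q=0.01, r=0.5):
        self.pos, self.vel = pos, vel   # state: position and velocity
        self.p = p                      # scalar estimate variance (simplified)
        self.q, self.r = q, r           # process / measurement noise (assumed)

    def predict(self, dt):
        """Propagate the state forward by dt seconds."""
        self.pos += self.vel * dt
        self.p += self.q
        return self.pos

    def update(self, measured_pos):
        """Blend a new sensor measurement into the estimate."""
        k = self.p / (self.p + self.r)          # Kalman gain
        self.pos += k * (measured_pos - self.pos)
        self.p *= (1.0 - k)
        return self.pos
```

A tracker built this way predicts each object's position between sensor frames and corrects the prediction whenever the sensors 262 deliver a new detection.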

The object detector 305 can use additional mechanisms to aid in the detection and monitoring of the objects 64. For example, the object detector 305 could detect and track objects 64 using known received signal strength indicator (RSSI) calculations on one or more signals from the observed objects 64, triangulation, and/or dead reckoning. As another example, the object detector 305 could use additional information to detect and track objects 64, such as path loss measurements, signal delay time, signal-to-noise ratio, signal-to-noise-plus-interference ratio, and directional signaling measurements.
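RSSI-based ranging of this kind is commonly done with a log-distance path-loss model. The sketch below is illustrative only; the reference power and path-loss exponent are assumptions that would have to be calibrated for the actual deployment.

```python
def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -40.0,
                     path_loss_exp: float = 2.0) -> float:
    """Estimate distance in meters from an RSSI reading using the
    log-distance path-loss model:  d = 10^((P_ref - RSSI) / (10 * n)).
    P_ref is the expected RSSI at 1 m; n is the path-loss exponent
    (2.0 approximates free space). Both values are assumptions."""
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))
```

With distance estimates from three or more known sensor positions, triangulation (or trilateration) can localize the signal source, as mentioned above.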

“Continuing with FIG. 3, the sensor interface subsystem 310 communicatively couples the infrastructure equipment 61 and the RTMS 300 with the sensor array 62, and facilitates communication with the sensors 262 and actuators 322 of the sensor array 62. The sensor interface subsystem 310 can receive data from the sensors 262 and actuators 322, and can transmit commands to the sensors 262 and actuators 322 to operate/control them; examples include commands to calibrate the sensors 262 and actuators 322. In some embodiments, the sensor interface subsystem 310 can be configured to enable inter-device communication according to one or more industry standards, including cellular, WiFi, Ethernet, short-range communications, personal area networks (PAN), Controller Area Networks (CAN), or any other suitable standard or combination thereof. FIG. 3 shows the sensor array 62, which includes the sensors 262 as well as the actuators 322. The sensor interface subsystem 310 includes various electrical/electronic elements to interconnect the infrastructure equipment 61 with the sensors 262 and actuators 322 of the sensor array 62, such as controllers, cables/wires, plugs and/or receptacles, etc. The sensor interface subsystem 310 may also include wireless communication circuitry that wirelessly communicates with the sensors 262 and/or actuators 322 of the sensor array 62. In ITS implementations, the sensor interface subsystem 310 may be a roadside ITS-S gateway or a roadway data gateway, which connects components of roadside systems, such as the sensors 262 and actuators 322 of the sensor array 62.

“The sensors 262 are devices that measure or detect state changes and/or motions within the coverage area 63 and provide sensor data representative of the detected/measured changes to the object detector 305 via the sensor interface subsystem 310 and the main system controller 302. The sensors 262 may include one or more motion capture devices, which are designed to detect a change in the position of an object 64 relative to its surroundings. The detection of motion, changes in motion, speed, and direction may be based on the reflection of visible light (or opacity), sound, ultraviolet light, microwaves, near-IR waves, and/or other suitable electromagnetic energy. Depending on the type of sensor 262 (e.g., radar, LiDAR, visible or ultraviolet light cameras, thermographic (IR) cameras, etc.), the sensors 262 may include electronic elements such as transmitters, waveguides, duplexers, receivers (e.g., radar signal receivers, photodetectors, or the like), scanners, beam splitters, signal processors/DSPs, MEMS devices, energy sources (e.g., illumination sources, laser projectors, or IR projectors), antenna arrays including individual antenna elements, and/or similar elements. Other types of sensors 262 may be used in other embodiments.

“Actuators 322 are devices responsible for moving and controlling a mechanism or system. The actuators 322 can be used in various ways to alter the operational state of the sensors 262, such as on/off, zoom, focus, etc., and to move, orient, and/or position the sensors 262 in a variety of ways. In some cases, the actuators 322 can be used to modify the operation of other roadside equipment, such as traffic lights, gates, and digital signage. The actuators 322 receive control signals from the RTMS 300 through the sensor interface subsystem 310 and convert a source of energy into an electrical or mechanical motion. The control signals can be relatively low-energy electric voltages or currents. The actuators 322 may be electromechanical relays and/or solid state relays that are configured to switch electronic devices on/off and/or control motors.

“Continuing with FIG. 3, the handshake subsystem 306 performs a handshake with detected objects 64 using the inter-object communication subsystem 312, which may involve the use of one or more communication protocols, as described in greater detail below. During the handshake, each detected object 64 is assigned a locally unique ID, and the detected objects 64 inform the infrastructure equipment 61 of their capabilities for wireless communication, self-localization, and environment sensing. An object 64 may take part in multiple handshakes as it travels within the coverage area 63. A handshake may be initiated between an object 64 and the infrastructure equipment 61 once the object 64 passes into view of the sensors 262; this could occur, for example, when an object 64 enters an intersection. The handshake subsystem 306 can repeat the handshake procedure with selected objects 64, for instance when these objects pass a predetermined number of sensor arrays 62a, 62b and/or infrastructure equipment 61a, 61b, to calibrate the sensors 262. If the object 64 is temporarily occluded and later detected again by the infrastructure equipment 61a, 61b, the handshake procedure may be repeated with the object 64.
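The handshake flow described above (assign an ID, record the reported capabilities) can be sketched as follows. This is a hypothetical illustration; the class, method, and field names are assumptions, not the disclosed protocol.

```python
import itertools


class HandshakeSubsystem:
    """Sketch of the handshake: on first detection, assign a locally
    unique ID and store the object's self-reported capabilities."""

    def __init__(self):
        self._next_id = itertools.count(1)
        self.object_db = {}     # unique ID -> capability record

    def handshake(self, reported_capabilities: dict) -> int:
        """Perform a handshake with a newly detected object 64 and
        return the unique ID assigned to it."""
        uid = next(self._next_id)
        self.object_db[uid] = dict(reported_capabilities)
        return uid

    def rehandshake(self, uid: int, reported_capabilities: dict) -> int:
        """Repeat the handshake (e.g., after occlusion), refreshing the
        stored capability record for the same object."""
        self.object_db[uid] = dict(reported_capabilities)
        return uid
```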

“In various embodiments, the infrastructure equipment 61 learns the capabilities of the tracked objects 64, which allows sensor data and position data to be requested only from the appropriate objects 64. These capabilities can include, but are not limited to, geo-positioning capabilities that indicate the type of positioning system, if any, implemented by the objects 64; wireless communication capabilities that indicate the type of communication circuitry used by the objects 64; and sensing capabilities that indicate the types, ranges, and precision of the sensors of the objects 64. The infrastructure equipment 61a, 61b can broadcast or multicast map information by combining location/position information reported by the objects 64 with the tracking of the objects 64 based on sensor data from the individual sensors 262. This helps to minimize latency in the wireless data exchange.

“As mentioned previously, each object 64 is assigned a unique ID by the handshake subsystem 306. This unique ID is used for broadcasting or multicasting data (e.g., RTMS data) to the objects 64. The handshake subsystem 306 assigns the unique ID to each object 64 using, for example, Universally Unique Identifier (UUID) algorithms or any other suitable mechanism. The ID does not need to be globally unique; instead, it may be unique locally (i.e., within the coverage area 63), and locally unique IDs may be reused after an object 64 leaves the coverage area 63. One example implementation uses 16-bit IDs, allowing 65,536 unique values. Privacy concerns can be alleviated by randomly changing the unique IDs of the objects 64 during repeated handshakes. After the unique ID has been assigned to an object 64 by the handshake subsystem 306, it is saved in the object DB 330. The object detector 305 then continuously tracks the object 64 with the help of the sensors 262 via the sensor interface subsystem 310 and provides updated position information to the map processing subsystem 309.
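A locally unique, reusable 16-bit ID pool of the kind described (random assignment for privacy, release on leaving the coverage area 63) might look like the following sketch. The class and method names are illustrative assumptions.

```python
import random


class IdPool:
    """Pool of locally unique IDs. With bits=16 there are 65,536 values,
    as in the example implementation above. IDs are drawn at random
    (which also supports changing an object's ID between handshakes)
    and returned to the pool when the object leaves the coverage area."""

    def __init__(self, bits: int = 16):
        self.free = set(range(2 ** bits))

    def assign(self) -> int:
        uid = random.choice(tuple(self.free))
        self.free.remove(uid)
        return uid

    def release(self, uid: int) -> None:
        self.free.add(uid)      # reusable once the object 64 has left
```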

“The handshake subsystem 306 is also responsible for managing the storage of the unique IDs. As mentioned previously, the object detector 305 may employ an object tracking algorithm, such as a Kalman filter or a Gaussian mixture model, that assigns inherent IDs to objects 64 detected in video or other sensor data. In these embodiments, the object detector 305 stores the inherent IDs assigned by the tracking algorithm along with the unique IDs in records 331 (also referred to as “ID records 331” or the like) of the object DB 330. The object DB 330 also stores the object capabilities acquired during the handshake procedure in records 332 (also referred to as “capability records 332” or the like), and stores object data (e.g., velocity/speed, position, travel direction, size, etc.) obtained after the handshake procedure in records 333. The object DB 330 also stores records 334 that indicate message types and the message content to be returned by the objects 64; this allows the main system controller 302 to send information requests to the objects 64 using single-bit flags or triggers. The object DB 330 maintains the relations between the unique IDs and the message types in the records 334 for as long as the objects 64 are within the communication range of the infrastructure equipment 61a, 61b.
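One possible in-memory layout tying the four record types together per object is sketched below. The field names are assumptions chosen for illustration; the record-number comments map each group back to the description above.

```python
def new_object_entry(unique_id: int, inherent_id: str) -> dict:
    """Create an empty object DB 330 entry for a newly detected object 64."""
    return {
        # records 331: unique ID linked with the tracker's inherent ID
        "ids": {"unique_id": unique_id, "inherent_id": inherent_id},
        # records 332: capabilities acquired during the handshake
        "capabilities": {"geo_positioning": None, "comms": None, "sensors": None},
        # records 333: object data obtained after the handshake
        "object_data": {"position": None, "speed": None,
                        "direction": None, "size": None},
        # records 334: single-bit send-information triggers
        "message_flags": {"send_ack": 0, "send_sensor": 0, "send_position": 0},
    }
```

Setting one of the `message_flags` bits to 1 corresponds to the main system controller 302 requesting that information from the object 64.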

“In certain embodiments, the message content and/or message type may be represented in an object list that contains one record for each detected object 64 and its attributes. An object list allows messages to specific objects 64 to be embedded in the normal payload, since the recipients of the information are the same objects as the data objects within the message. Table 1 infra shows an example object list.

“Continuing with FIG. 3, the map processing subsystem 309 contains a map segmenter 346, a data fuser 352, and a map generator 386. The data fuser 352 includes technology to combine sensor data from the sensors 262 with sensor data from sensors mounted on/in the objects 64. The data fuser 352 may contain technology for sensor detection and/or data gathering, and may also combine/process data to prepare it for map generation. The map generator 386 includes technology to create an environmental map 324 covering the coverage area 63 from the combined sensor data provided by the data fuser 352, and to control storage of the environmental map 324 in the mapping DB 320. The map segmenter 346 includes technology to split the environmental map 324 generated by the map generator 386 into two or more map segments 325. The map segmenter 346 can be configured to annotate the two or more map segments 325 with information for each object 64 to create individualized environmental maps. The map segmenter 346 may assign a unique segment identifier to each of the two or more map segments 325, corresponding to a particular location on the environmental map 324. The map segmenter 346 can be further configured to group the one or more objects 64 into the two or more map segments 325 based on the respective locations of the objects 64 and the locations of the segments within the environmental map 324. In ITS implementations, the mapping DB 320 could correspond to an LDM repository.
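The segmentation and grouping just described can be sketched as a simple grid split of a rectangular map. This is a hypothetical illustration only; the equal-sized square segments and the field names are assumptions.

```python
def segment_id(x: float, y: float, map_width: int, segment_width: int) -> int:
    """Unique segment identifier for a position on a rectangular map 324
    split into equal square segments 325 of side segment_width."""
    cols = map_width // segment_width
    return int(y // segment_width) * cols + int(x // segment_width)


def group_objects(objects, map_width: int, segment_width: int) -> dict:
    """Group objects 64 into map segments 325 by position: returns a
    mapping of segment ID -> list of object IDs located in that segment."""
    groups = {}
    for obj in objects:
        sid = segment_id(obj["x"], obj["y"], map_width, segment_width)
        groups.setdefault(sid, []).append(obj["id"])
    return groups
```

An individualized map for an object 64 would then be assembled from its own segment and neighboring segments along its driving direction.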

“Some embodiments can provide infrastructure-aided fog/edge dynamic mapping for autonomous driving or manufacturing (e.g., automated warehouses). Some embodiments, for example, may offer a platform that can serve dynamic maps to individual CA/AD and AV vehicles 64. “Autonomous” can refer to either fully autonomous or partially autonomous operation. High-reliability decision-making systems may require real-time mapping of a dynamic environment. For assisted/autonomous driving, for example, in-vehicle processing alone may not be sufficient to create a complete and accurate real-time object detection and tracking map of the surrounding environment. Some embodiments provide infrastructure (e.g., a roadside system) to enhance in-vehicle processing for better map generation and object tracking.

“Some embodiments may allow for unique labeling of objects 64 by the infrastructure sensors 262, map segment tags, and/or remote updates, combined with a low-overhead handshake protocol facilitated by the handshake subsystem 306 between the infrastructure equipment 61 and the objects 64. Certain embodiments may deliver to each object 64 an enhanced or optimal portion and level of detail of a high-resolution map, to ensure full coverage without adding additional processing load. Relevant performance indicators, such as in the context of CA/AD vehicles 64, may include precision, timeliness, and adequate coverage (e.g., the entire width of the road, or of a production line). In other systems, these performance indicators may not be optimized or improved. For example, a CA/AD vehicle 64 might use onboard sensor data and attempt to integrate this data into an environment model based on high-resolution maps; some data may not be available onboard due to limitations in sensor technology, road bends, infrastructure occlusion, and/or weather conditions. Aerodynamics and other design constraints may limit the physical space available for mounting sensors, and each additional sensor or compute resource may make a vehicle heavier, more expensive, and more energy-hungry. Embodiments may enhance the ability to generate an individual map for a CA/AD vehicle 64 using broadcast data from a collaborative infrastructure. The collaborative infrastructure may be built on a high-resolution map and may use fixed sensors to track the road, so that all vehicles have access to a globally consistent environment model. It is possible to shift more compute power to the infrastructure equipment 61 (or the MEC host 257) with roadside sensors 262 in order to reduce the need for complex sensors and/or compute capabilities in the CA/AD vehicles 64.

“Continuing with FIG. 3, the inter-object communication subsystem 312 is designed to facilitate communication with the observed objects 64. The inter-object communication subsystem 312 receives data from the observed objects 64, and broadcasts or multicasts messages to the observed objects 64 in order to perform handshakes with them and/or request data from them. The inter-object communication subsystem 312 supports communication between the infrastructure equipment 61 and the observed objects 64 in accordance with one or more industry standards, such as cellular specifications provided by Third Generation Partnership Project (3GPP) New Radio (NR) or Long Term Evolution (LTE) standards, a wireless local area network (WiFi) standard specified by a suitable IEEE 802.11 specification, a short-range communication standard such as Bluetooth/Bluetooth Low Energy, ZigBee, or Z-Wave, or any other suitable standard or combination thereof. With the help of the inter-object communication subsystem 312, the object detector 305 or other subsystems are further configured to scan and determine whether the observed objects 64 support a specific inter-device communication standard. The scan can be performed during a listen-before-talk (LBT) procedure used to identify an unoccupied channel. In C-V2X implementations, the scan/discovery could include, for instance, requesting V2X (or ProSe) capabilities or permissions directly from the objects 64, or from a V2X control function (or ProSe function) located within a core network. With the help of the inter-object communication subsystem 312, the main system controller 302, the object detector 305, or other subsystems may be configured to authenticate the observed objects 64 to confirm that the objects 64 have the appropriate communication and autonomous capabilities. After authentication of the objects 64, the main system controller 302 can control the inter-object communication subsystem 312 to exchange authentication information, which may include identification and/or security data. In some embodiments, this information can be exchanged securely according to a mutually supported communication protocol; for example, the authentication information could be encrypted before being sent to the objects 64, or the like.

“According to various embodiments, the messaging subsystem 307, with the support of the inter-object communication subsystem 312, broadcasts/multicasts messages to request information from the objects 64; these messages may be called “send information requests.” In these embodiments, the messaging subsystem 307 is designed to encode and broadcast/multicast messages and to decode messages received from individual objects 64. In some embodiments, the messaging subsystem 307 generates an object list indicating all observed objects 64 in the coverage area 63, which is then broadcast/multicast to all observed objects 64 in the coverage area 63. The object list is sent to the recipients on a regular basis, at a rate suitable for their navigation needs. Each entry for an object 64 contains a set of data elements (DEs) that are necessary for reliable navigation decisions, such as the assigned unique ID, position (e.g., GNSS geolocation), direction and speed, vehicle size, vehicle type, and/or the like. In various embodiments, the object list's set of attributes includes both these DEs and the send information request attributes. Each recipient object 64 is included as a data record, so that the records together form a map of all moving objects 64 within the coverage area 63. The send information requests for individual objects 64 can be embedded into the existing attributes. The send information requests can be Boolean attributes, such as a send ACK attribute that instructs the object 64 to reply with an acknowledgement message, a send sensor attribute that instructs the object 64 to reply with its own sensor data, and a send position attribute that instructs the object 64 to reply with its own position data (e.g., GNSS or GPS coordinates). The object list is sent to the observed objects 64, which may then search the object list for their corresponding records using the unique IDs assigned to them during the initial handshake procedure. These embodiments may save computational resources because the attributes and/or DEs of the object list, including the send information requests, can be identical for each object 64. Additionally, using broadcast/multicast technologies may allow the infrastructure equipment 61 to reduce communication/signaling overhead. Any suitable markup language or schema language may be used to create the object list. In some embodiments, the object list comprises documents or data structures that can be interpreted by the subsystems of the RTMS 300, such as XML (or any variant thereof), JSON (or any variant thereof), IFTTT (If This Then That), or the like. Table 1 shows an example object list in human-readable JSON format, with example data sizes for each field included as pseudo-comments.
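The construction of such a JSON object list could be sketched as follows. This is not the Table 1 example itself; the field names mirror the description above but are illustrative assumptions.

```python
import json


def build_object_list(objects) -> str:
    """Encode an object list for broadcast/multicast. Each input dict
    carries tracked state plus optional send-information request flags."""
    return json.dumps({"ObjectList": [
        {
            "ID": o["uid"],                    # unique ID from the handshake
            "Timestamp": o["ts"],              # equalized by the infrastructure
            "Position": o["pos"],              # e.g., GNSS geolocation
            "Speed": o["speed"],
            "SentAck": int(o.get("ack", 0)),           # 1 -> reply with an ACK
            "SentSensor": int(o.get("sensor", 0)),     # 1 -> reply with sensor data
            "SentPosition": int(o.get("position", 0)), # 1 -> reply with position
        }
        for o in objects
    ]})
```

Because every record has the same attribute set, a single broadcast serves both as the map of moving objects and as the carrier for per-object requests.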

“Table 1 contains a “map section,” which indicates the individual map segments 325 that have been segmented by the map segmenter 346. The overall environmental map 324 in this example is rectangular; if the map 324 includes curved roads, the rectangle is made large enough to accommodate the curves. The overall environmental map 324 is divided into segments 325 of equal size, cut in such a manner that the sequence of segments 325 follows the main driving direction of the mapped road. Other representations of the map segments 325 may be used in other embodiments, such as differently sized and shaped map segments for complex interchanges or inner-city areas.

“The table 1 example also contains a “single object section.” In the object list message, this section contains a list of attributes that correspond to each object 64, including, inter alia, an ID DE, a timestamp DE, a “SendAck” DE, a “SendSensor” DE, and a “SendPosition” DE. The timestamp DE contains a timestamp that may be equalized by or synchronized with the infrastructure equipment 61 so that each object 64 does not need its own time-stamping mechanism. The “SendAck” DE is a one-bit DE that, when set to “TRUE” or a value of “1,” requests that the object 64 send back an ACK message. The “SendSensor” DE is a one-bit DE that, when set to “TRUE” or a value of “1,” requests that the object 64 send back additional information about its sensor values and/or captured sensor data. The “SendPosition” DE is a one-bit DE that, when set to “TRUE” or a value of “1,” requests that the object 64 send back its position data, such as data obtained using on-board positioning circuitry. An object 64 identified in the object list by its unique ID DE searches the message for that unique ID and checks the status of the send information request attributes in its record. If any send information request attribute is set to true, the object 64 generates and sends the corresponding reply to the infrastructure equipment 61; if a send information request attribute is set to false, the object 64 stops transmitting the relevant data to the infrastructure equipment 61. The messaging subsystem 307 calculates the time for the relevant send information requests in advance, corresponding to the particular geographic location where the object 64 is expected to be found.
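The recipient-side behavior described above can be sketched as follows; the record fields and function names are hypothetical stand-ins for the DEs of table 1:

```python
# Sketch of the recipient-side logic: an object searches the broadcast
# object list for its own record (by the unique ID from the handshake)
# and answers only the requests whose one-bit flags are set.

def build_replies(object_list, my_id, my_sensor_data, my_position):
    """Return the reply payloads this object should send back."""
    record = next((r for r in object_list if r["id"] == my_id), None)
    if record is None:
        return []  # this object is not addressed in the list
    replies = []
    if record.get("send_ack"):
        replies.append({"id": my_id, "ack": True})
    if record.get("send_sensor"):
        replies.append({"id": my_id, "sensor": my_sensor_data})
    if record.get("send_position"):
        replies.append({"id": my_id, "position": my_position})
    return replies  # empty list -> stay silent, saving bandwidth

replies = build_replies(
    [{"id": 42, "send_ack": True, "send_sensor": False, "send_position": True}],
    my_id=42, my_sensor_data=[0.1, 0.2], my_position=(52.1, 11.6),
)
```

A cleared flag produces no reply at all, which matches the text's point that objects stop transmitting when the corresponding attribute is false.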

“The “single object section” of this example message also contains a type DE, a position DF, and a velocity DF. These DEs/DFs can be used by the objects 64 to place the subject object 64 on their locally generated maps. In this example, the type DE indicates a vehicle type, which can also be used as an index into a lookup table of vehicle sizes. This example assumes that vehicle sizes can be divided into 256 vehicle types, which results in 8 bits of data. Trailers, however, are considered separate objects 64.

“The position DF carries Cartesian (X, Y) coordinates relative to the boundary of the map segment 325 (e.g., the “points” DF and “SegmentSize” DF in table 1). Other embodiments allow for the inclusion of GNSS latitude and longitude values in the position DF. Because the transmitted map segment 325 has been divided into polygons, the position values can be given in relative form according to the size and location of the bounding segment, which allows for a significant reduction in data size. Using Cartesian coordinates can reduce the message size and signaling overhead: for a maximum map segment 325 size of 1 km (kilometer), the maximum number of coordinate values that must be stored per axis is 100,000. The Cartesian coordinates may therefore be stored in two 17-bit DEs, one DE for the X-axis and one DE for the Y-axis of the Cartesian coordinate system. The sigma DE, which represents a variance of the latitude and longitude values, can be stored in a DE of the same size. In some embodiments, the same sigma may be used for both the x and y axes. This results in 51 bits per position, as opposed to 192 bits in a conventional message.
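The bit budget above can be checked with a small sketch. The 1 cm granularity is an assumption derived from the stated numbers (1 km segment / 100,000 values per axis), and the field layout is illustrative:

```python
# Sketch of the relative-position packing: with a 1 km (100,000 cm)
# maximum segment size, each axis fits in a 17-bit DE
# (2**17 = 131072 > 100000). One shared 17-bit sigma gives 51 bits total.

GRANULARITY_M = 0.01   # 1 cm resolution (assumption)
AXIS_BITS = 17

def pack_position(x_m, y_m, sigma_m):
    """Quantize segment-relative coordinates into three 17-bit fields."""
    def quantize(v):
        q = round(v / GRANULARITY_M)
        assert 0 <= q < 2 ** AXIS_BITS, "value exceeds segment bounds"
        return q
    qx, qy, qs = quantize(x_m), quantize(y_m), quantize(sigma_m)
    # pack into a single integer: [x | y | sigma]
    return (qx << (2 * AXIS_BITS)) | (qy << AXIS_BITS) | qs

def unpack_position(packed):
    mask = 2 ** AXIS_BITS - 1
    qs = packed & mask
    qy = (packed >> AXIS_BITS) & mask
    qx = (packed >> (2 * AXIS_BITS)) & mask
    return qx * GRANULARITY_M, qy * GRANULARITY_M, qs * GRANULARITY_M

packed = pack_position(120.55, 48.20, 0.35)
x, y, s = unpack_position(packed)   # recovered at 1 cm resolution
total_bits = 3 * AXIS_BITS          # 51 bits vs. 192 in a conventional message
```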

“Direction is determined from the velocity DFs (e.g., the “velocity_north” DF and the “velocity_east” DF in the example of table 1). This example codes the speed and direction together using two scalars, one for the X-axis and one for the Y-axis. Direction and speed can be calculated from these two values by vector addition: the velocity vector is v = (vx, vy), the speed is given by |v| = √(vx² + vy²), and the direction is given by the orientation of v. A granularity of 0.1 m/s was used to store a maximum value of up to 1000 steps in both the x and y directions. Each value can be stored in a 10-bit DE, which results in 20 bits for speed and direction.”
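The velocity coding can be sketched as follows. The 100 m/s per-axis maximum is an assumption implied by the 0.1 m/s granularity and the 1000-step budget:

```python
import math

# Sketch of the velocity coding: each axis component is stored with
# 0.1 m/s granularity up to 1000 steps (100 m/s, an assumption), which
# fits a 10-bit DE (2**10 = 1024); 20 bits total for speed and direction.

GRANULARITY = 0.1   # m/s
MAX_STEPS = 1000

def encode_axis(v):
    """Quantize one velocity component's magnitude into a 10-bit value."""
    q = round(abs(v) / GRANULARITY)
    assert q <= MAX_STEPS, "component exceeds the representable range"
    return q

vx, vy = 8.3, -1.2                  # velocity_east / velocity_north style scalars
qx, qy = encode_axis(vx), encode_axis(vy)
speed = math.hypot(vx, vy)          # |v| = sqrt(vx**2 + vy**2)
heading_rad = math.atan2(vy, vx)    # direction of the vector v = (vx, vy)
bits_total = 10 + 10                # one 10-bit DE per axis
```

Transmitting the two components instead of a precomputed speed and heading lets each receiver derive both quantities with the two formulas above.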

“The remote communication subsystem 314, as mentioned earlier, is designed to facilitate communication with one or more remote servers 360 and/or other infrastructure equipment 361. The remote servers 360 can be identical or similar to the server(s) 260 of FIG. 2, and may include one or more servers associated with a mobile network operator platform, a service provider platform, a cloud computing service, a traffic management service, an energy management service, a law enforcement or government agency, an environmental data service, and so forth. The other infrastructure equipment 361 could be identical or similar to the infrastructure equipment 61 (with associated sensor arrays 62, and so forth), deployed in a different geographic location than the infrastructure equipment 61. The remote communication subsystem 314 supports communication between the infrastructure equipment 61, the servers 360, and the other infrastructure equipment 361 according to one or more industry standards, for example, WiMAX standards or 3GPP NR specifications; a WLAN standard such as WiFi specified by an IEEE 802.11 standard; or Ethernet specified by an IEEE 802.3 standard.

“According to various embodiments, the subsystems of the infrastructure equipment 61 and/or the RTMS 300 fill in perception gaps in the real-time dynamic maps created and updated by the map processing subsystem 309. The handshake subsystem 306 performs a handshake procedure (e.g., handshake procedure 500 of FIG. 5) with objects 64 entering the coverage area. The object detector 305 continuously tracks all moving objects 64 within the coverage area. The object detector 305 and/or the map processing subsystem 309 select individual objects 64 from which to request sensor data to augment the real-time dynamic maps. The messaging subsystem 307 generates/encodes request messages that are broadcast or multicast to the selected objects 64, and the messaging subsystem 307 and/or the map processing subsystem 309 process the response messages received from the selected objects 64.

“In various embodiments, the RTMS 300 identifies subsections or regions of the coverage area 63, and the object detector 305 and/or another subsystem of the RTMS 300 selects individual objects 64 from the database of tracked objects 64 (e.g., stored in the object DB 330). These embodiments enable the infrastructure equipment 61 to improve its environmental perception by filling or supplementing perception gaps, for example, by augmenting the dynamic maps with sensor readings of the tracked objects 64. The infrastructure equipment 61 can also verify that messages sent using multicast or broadcast protocols are actually received by the objects 64, which is a feature lacking in many current communication protocols. To select objects 64 for send information requests, the map processing subsystem 309 selects one or more regions or sections of the coverage area 63 (e.g., one or more logical grid cells of the environmental model) that require additional information. The selection of sections or regions can differ depending on whether the goal is filling in perception gaps or receiving acknowledgement messages. For filling in perception gaps, the map processing subsystem 309 uses known mechanisms to detect obstructions of the fixed sensors 262 and other causes that reduce the completeness of the environmental map 324, and then selects the sections or regions (e.g., grid cells) that correspond with the occluded areas. For verifying the reception of broadcasted/multicast messages, the main system controller 302, the object detector 305, or some other subsystem of the RTMS 300 selects one or more test sections or regions (e.g., grid cells) at random, using a round-robin scheduling mechanism, based on a user/operator selection, or based on the observed traffic situation, for example, where simulation results indicate traffic densities or other conditions that could affect wireless communication coverage.

After selecting the sections, regions, or grid cells, the object detector 305 consults the tracked objects 64 to create a first list of objects 64 that are likely (or predicted) to pass through the selected section within a given time period. The object detector 305 then generates a second list of those objects 64 that possess the technical capabilities necessary to provide the requested information. From the speed of each object 64 on the second list and any known communication delay, the object detector 305 calculates a third list of timestamps indicating when each object 64 should receive a send information request. The send information request can be repeated as long as the object 64 on the second list remains within the designated section or region.
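The three-step selection above can be sketched as follows; all record fields, the section predicate, and the scheduling rule are hypothetical:

```python
# Sketch of the three-step selection: (1) objects predicted to pass
# through the selected section, (2) filtered by technical capability,
# (3) annotated with a send time derived from arrival time and the
# known communication delay.

def select_candidates(tracked, section_contains, comm_delay_s):
    # first list: objects whose predicted path crosses the section
    first = [o for o in tracked if section_contains(o["predicted_path"])]
    # second list: only objects capable of providing the requested data
    second = [o for o in first if o["can_send_sensor_data"]]
    # third list: when each request should be sent, so that it arrives
    # while the object is still inside (or approaching) the section
    third = []
    for o in second:
        send_time = o["eta_s"] - comm_delay_s
        third.append({"id": o["id"], "send_at": max(send_time, 0.0)})
    return third

tracked = [
    {"id": 1, "predicted_path": ["A", "B"], "can_send_sensor_data": True,  "eta_s": 4.0},
    {"id": 2, "predicted_path": ["C"],      "can_send_sensor_data": True,  "eta_s": 9.0},
    {"id": 3, "predicted_path": ["B"],      "can_send_sensor_data": False, "eta_s": 2.0},
]
schedule = select_candidates(tracked, lambda path: "B" in path, comm_delay_s=0.5)
```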

According to various embodiments, the RTMS 300 could use the data in the response messages from the tracked objects 64 in multiple ways, for example, when the response messages contain sensor data or when they include receipt ACKs. The RTMS 300 can establish trust in the object sensor data by fusing that sensor data with the fixed sensor 262 data to create a new result set. When multiple agents respond, their responses can be used to establish a temporal and spatial overlap of the section or region being sensed, which increases trust in the information from the objects 64; objects 64 providing inconsistent information may be excluded. The overlap check results are then merged with the observations based on the sensor data from the sensors 262. A trust mechanism may be used that accepts received object sensor data only for areas not already covered: if an object 64 is detected in the coverage area 63 using the sensor data from the sensors 262, the sensor data received from objects 64 is not used to determine that object's position. Data fusion is performed only for additional objects that were not detected by the infrastructure equipment 61. In other words, replies from the objects 64 can increase the total number of detected objects 64, but not reduce it. The tracked objects 64 then receive the new result in the form of a map of all dynamic objects.
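The additive trust rule above can be sketched as follows; the matching predicate and record shapes are illustrative assumptions:

```python
# Sketch of the additive trust rule: infrastructure observations are
# authoritative; object-reported detections may only add objects the
# fixed sensors did not see, never remove or reposition known ones.

def fuse(infra_objects, reported_objects, same_object):
    """Merge object-reported detections into the infrastructure map."""
    fused = list(infra_objects)  # infrastructure detections always kept
    for rep in reported_objects:
        # discard reports that duplicate an infrastructure detection
        if not any(same_object(rep, known) for known in fused):
            fused.append(rep)
    return fused

infra = [{"id": "I1", "pos": (10.0, 5.0)}]
reported = [
    {"id": "R1", "pos": (10.2, 5.1)},   # duplicate of I1 -> ignored
    {"id": "R2", "pos": (40.0, 7.0)},   # e.g., inside an occluded cell -> added
]
close = lambda a, b: (abs(a["pos"][0] - b["pos"][0]) < 1.0
                      and abs(a["pos"][1] - b["pos"][1]) < 1.0)
result = fuse(infra, reported, close)
```

The design choice matches the text: replies can only grow the set of detected objects, so a malicious or faulty reply cannot erase an object the infrastructure itself observed.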

If the reply messages include receipt ACKs, or if expected ACK messages are not received, the infrastructure equipment 61 can initiate measures to improve reception in the observed coverage area 63. This could include changing or adjusting network configuration parameters, including antenna parameters (e.g., antenna tilt and azimuth), downlink transmit power, the on/off state of the infrastructure equipment 61, handover-related parameters, and/or other parameters. One example of a network configuration adjustment is a coverage and capacity optimization (CCO) function that changes the signal strength of a serving cell or a nearby interfering cell by changing antenna tilt or power settings to improve radio link quality. Parameters of the same infrastructure equipment 61, of different equipment 61, or of network elements that have an impact on each other may also need to be considered; for example, the network configuration parameters of one wireless communication cell could include or involve the parameters of neighboring cells, where TX power, antenna azimuth, and tilt are examples of such associated parameters. Other examples of network configuration adjustments include activating multipath communication, beam steering, message repetition, or a combination of various communication technologies. The details regarding coverage issues and mitigation options can vary from one embodiment to the next, depending on the wireless technology being used. Because there may be multiple reasons why an individual ACK is not received, the infrastructure equipment 61 can request receipt ACKs from multiple objects 64 before adjusting any network parameters. The infrastructure equipment 61 could then alert another subsystem (such as the inter-object communication subsystem 312, the remote communication subsystem 314, or the RAN node 262 of FIG. 2) regarding issues with wireless coverage.
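The "poll several objects before reconfiguring" policy can be sketched as follows; the threshold and the list of mitigations are illustrative assumptions, not values from the disclosure:

```python
# Sketch of the ACK-based coverage check: because a single missing
# acknowledgement can have many causes, several objects are polled and
# mitigation is triggered only when the ACK ratio falls below a
# threshold (the 0.8 value is an illustrative assumption).

def coverage_ok(ack_results, min_ratio=0.8):
    """ack_results: list of booleans, one per polled object 64."""
    if not ack_results:
        return True  # nothing polled, nothing to conclude
    return sum(ack_results) / len(ack_results) >= min_ratio

def maybe_adjust_network(ack_results):
    if coverage_ok(ack_results):
        return []  # no action needed
    # candidate mitigations mentioned in the text; order is illustrative
    return ["adjust_antenna_tilt", "raise_downlink_tx_power",
            "enable_repetition_messages", "notify_ran_node"]

actions = maybe_adjust_network([True, False, False, True, False])
```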

“In ITS-based implementations, some or all of the components depicted in FIG. 3 could follow the ITS communication protocol (ITSC), which is based on the OSI model for layered communication protocols, extended for ITS applications. The ITSC includes, among other things, an access layer that corresponds to OSI layers 1 and 2, a networking & transport (N&T) layer that corresponds to OSI layers 3 and 4, a facilities layer that corresponds to OSI layers 5 and 6 and at least some functionality of OSI layer 7, and an application layer. The layers are interconnected via respective interfaces, service access points (SAPs), and/or APIs. In these implementations, the RTMS 300 or parts thereof may be part of the facilities layer, while aspects of the sensor interface subsystem 310, the inter-object communication subsystem 312, and the remote communication subsystem 314 could be part of the N&T and access layers.

“The facilities layer comprises middleware, software connectors, and software glue, and may include multiple facilities. The facilities layer includes functionality from the OSI application layer, the OSI presentation layer (e.g., ASN.1 encoding and decoding, and encryption), and the OSI session layer (e.g., inter-host communication). A facility is a component that provides functions, information, and/or services to the applications in the application layer, and exchanges data with lower layers for communicating that data with other ITS-Ss. Table 2 lists the common facilities, while table 3 lists the domain facilities.

“TABLE 2
Common Facilities

Management
- Traffic class management: Manage assignment of traffic class value for the higher layer messages.
- ID management: Manage ITS-S identifiers used by the application and the facilities layer.
- Application ID (AID) management: Manage the AID used by the application and the facilities layer.
- Security access: Deal with the data exchanged between the application and facilities layer with the security entity.

Application Support
- HMI support: Support the data exchanges between the applications and Human Machine Interface (HMI) devices.
- Time service: Provide time information and time synchronization service within the ITS-S. This may include providing/obtaining the actual time and time stamping of data.
- Application/facilities status management: Manage and monitor the functioning of active applications and facilities within the ITS-S and the configuration.
- SAM processing: Support the service management of the management layer for the transmission and receiving of the service announcement message (SAM).

Information Support
- Station type/capabilities: Manage the ITS-S type and capabilities information.
- Positioning service: Calculate the real-time ITS-S position and provide the information to the facilities and applications layers. The ITS-S position may be the geographical position (longitude, latitude, altitude) of the ITS-S.
- Location referencing: Calculate the location referencing information and provide the location referencing data to the applications/facilities layer.
- Common data dictionary: Data dictionary for messages.
- Data presentation: Message encoding/decoding support according to the formal language being used (e.g., ASN.1); supports the basic functionality of the OSI presentation layer.

Communication Support
- Addressing mode: Select addressing mode for message transmission.
- Congestion control: Facilities layer decentralized congestion control functionalities.”

“TABLE 3
Domain Facilities

Application Support
- DEN basic service: Support the protocol processing of the Decentralized Environmental Notification Message.
- CA basic service: Support the protocol processing of the Cooperative Awareness Message.
- EFCD: Aggregation of CAM/DENM data at the road side ITS-S and provision to the central ITS-S.
- Billing and payment: Provide service access to billing and payment service providers.
- SPAT basic service: Support the protocol processing of the Signal Phase and Timing (SPAT) Message.
- TOPO basic service: Support the protocol processing of the Road Topology (TOPO) Message.
- IVS basic service: Support the protocol processing of the In Vehicle Signage (IVS) Message.
- Community service user management: Manage the user information of a service community.

Information Support
- Local dynamic map: Local Dynamic Map database and management of the database.
- RSU management and communication: Manage the RSUs from the central ITS-S and communication between the central ITS-S and road side ITS-S.
- Map service: Provide map matching functionality.

Communication Support
- Session support: Support session establishment, maintenance, and closure.
- Web service support: High layer protocol for web connection, SOA application protocol support.
- Messaging support: Manage ITS services messages based on message priority and client services/use case requirements.
- E2E Geocasting: Deal with the dissemination of information to ITS vehicular and personal ITS stations based on their presence in a specified geographical area.”

“In an example ITS implementation, the messaging subsystem 307 and/or the inter-object communication subsystem 312 may provide the DEN basic service (DEN-BS) and/or the CA basic service (CA-BS). The mapping DB 320 could provide the LDM facility, while the map processing subsystem 309 may be an ITS application residing in the application layer; in this example, the map processing subsystem 309 may be classified as a road safety and/or traffic efficiency application. Aspects of the handshake subsystem 306 and/or the object DB 330 could provide the station type/capabilities facility in this example ITS implementation.

The CA-BS includes the following entities for sending and receiving CAMs: an encode CAM entity, a decode CAM entity, a CAM transmission management entity, and a CAM reception management entity. The DEN-BS, used to send and receive DENMs, includes an encode DENM entity, a decode DENM entity, a DENM transmission management entity, and a DENM reception management entity. For the originating ITS-S, the protocol operation includes activation and termination of the service, determination of the CAM/DENM generation frequency, and triggering the generation of CAMs/DENMs. For the receiving ITS-S, the protocol operation is implemented by the CAM/DENM reception management entity and includes triggering the decode CAM/DENM entity at reception of CAMs/DENMs, provisioning the received data to the LDM or facilities of the receiving ITS-S, discarding invalid CAMs/DENMs, and checking the information of received CAMs/DENMs. The DENM keep-alive forwarding (KAF) entity stores a received DENM during its validity duration and forwards it when applicable; the usage conditions of the DENM KAF may be defined by ITS application requirements or by a cross-layer functionality of an ITSC management entity. The encode CAM/DENM entity constructs (encodes) CAMs/DENMs to include various data; the object list may include a list of DEs and/or data frames (DFs) included in the ITS data dictionary according to ETSI technical specification TS 102 894-2 version 1.3.1 (2018-08), titled “Intelligent Transport Systems (ITS); Users and applications requirements; Part 2: Applications and facilities layer common data dictionary.”

“In embodiments, the encode CAM/DENM entity creates (encodes) CAMs/DENMs that include various data, such as the object list. The encode CAM/DENM entity could generate a CAM/DENM that includes a plurality of records, where each record corresponds to an object 64 of the objects 64 detected by the sensors 262. Each record may contain one or more DEs and one or more DFs. A DE is a data type that contains a single datum, while a DF is a data type that contains more than one DE in a predefined sequence. DEs and/or DFs may be used to build facilities layer or application layer messages, such as CAMs, DENMs, and the like. The plurality of DEs may include a sensor request DE, an ACK request DE, and a position request DE. In embodiments, the encode CAM/DENM entity inserts a first value into the sensor request DE of the record(s) of at least one object 64, the first value indicating that the at least one object 64 is to report sensor data captured locally by that object 64 (e.g., “TRUE” or “1”); inserts a second value into the sensor request DE of the records of the other objects 64, the second value indicating that the other objects 64 are not to report their sensor data (e.g., “FALSE” or “0”); and inserts a third value into the position request DE of the records of one or more detected objects 64 that are to report a current location (e.g., “TRUE” or “1”), or a fourth value (e.g., “FALSE” or “0”) otherwise. The decode CAM/DENM entity processes received CAMs/DENMs. According to various embodiments, the decode CAM/DENM entity in a roadside ITS-S decodes CAMs/DENMs received from the objects 64, which allows the ACKs, position data, and/or sensor data to be obtained from the objects 64. In these embodiments, the sensor data obtained from the CAMs/DENMs may represent a physical area that corresponds to an occlusion.
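The encode-side record construction above can be sketched as follows; the record layout is a hypothetical stand-in for the actual CAM/DENM DE/DF structure:

```python
# Sketch of the encode-side logic: one record per detected object 64,
# with request DEs set to TRUE only for the selected objects and FALSE
# for all others.

def encode_records(detected_ids, sensor_targets, position_targets):
    """Build one record per detected object, flagging selected objects."""
    records = []
    for oid in detected_ids:
        records.append({
            "id": oid,
            "sensor_request": oid in sensor_targets,      # "TRUE"/"1" or "FALSE"/"0"
            "position_request": oid in position_targets,  # "TRUE"/"1" or "FALSE"/"0"
        })
    return records

records = encode_records(
    detected_ids=[7, 8, 9],
    sensor_targets={8},        # e.g., objects near a perception gap
    position_targets={8, 9},
)
```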

“As already mentioned, aspects of the object DB 330 and/or the handshake subsystem 306 may be accessed via the station type/capabilities facility. The station type/capabilities facility provides information that describes a profile of an ITS-S, which can be used in the applications and facilities layers. This profile indicates the type of the ITS-S (e.g., vehicle ITS-S, roadside ITS-S, personal ITS-S, or central ITS-S), a role of the ITS-S (e.g., operational status of an emergency vehicle or other prioritized vehicle, status of a vehicle transporting dangerous goods, etc.), and detection capabilities (e.g., the ITS-S's positioning capabilities, sensing capabilities, etc.). In this ITS implementation, the object DB 330 stores the object capabilities and station type information, and also stores the unique ID that was assigned to each object 64 during the handshake procedure.

“Aspects of the sensor interface subsystem 310, the inter-object communication subsystem 312, and/or the remote communication subsystem 314 could be part of the N&T and access layers, as mentioned earlier. The N&T layer provides the functionality of the OSI network layer and the OSI transport layer, and includes one or more transport protocols as well as network and transport layer management. The networking protocols could include the Internet Protocol version 4 (IPv4), Internet Protocol version 6 (IPv6), the GeoNetworking protocol, IPv6 networking with mobility support, IPv6 over GeoNetworking, and the CALM FAST protocol; the IPv6 networking protocol may include methods that allow interoperability with legacy IPv4 systems. The transport protocols could include UDP/TCP, one or more dedicated ITSC transport protocols, or any other suitable transport protocol, and each networking protocol may be linked to a corresponding transport protocol. The access layer includes a physical layer (PHY) connecting physically to the communication medium; a data link layer (DLL), which may be subdivided into a medium access control sub-layer (MAC), managing access to the communication medium, and a logical link control sub-layer (LLC); a management adaptation entity (MAE) to manage the DLL and PHY; and a security adaptation entity (SAE) to provide security services to the access layer. The access layer may also include external and internal communication interfaces (CIs). The CIs are instantiations of a specific access technology or protocol, such as ITS-G5, DSRC, WiFi, GPRS, 5G, UMTS, Ethernet, Bluetooth, or any other protocol discussed herein. The CIs provide the functionality of one or more logical channels (LCHs), where the mapping of LCHs onto physical channels is specified by the standard of the particular access technology involved.

“Referring back to FIG. 3, each of the depicted components may be implemented in hardware, software, or a combination thereof. Hardware implementations can include circuitry or companion silicon, such as CPLDs, FPGAs, and PLAs, programmable SoCs that are programmed with the operational logic, and/or fixed-functionality hardware using circuit technology such as ASIC, CMOS, or TTL technology. Software implementations may include the components shown in FIG. 3 as autonomous software agents or artificial intelligence (AI) agents developed using suitable programming languages and development tools/environments, which are executed by one or more processors or individual hardware accelerators configured with the appropriate bit stream(s) or logic blocks to perform their respective functions. Software implementations may additionally or alternatively include implementations in instructions of instruction set architectures (ISA) supported by target processors, or any one of a number of high-level programming languages that can be compiled into instructions of the ISA of the target processors. Combined hardware and software implementations are also possible. In particular, where the main system controller 302 or one of the subsystems of the RTMS 300 includes at least one trained neural network for performing its respective assessments and/or determinations, at least a portion of the main system controller 302 may be implemented in hardware (e.g., an FPGA configured with an appropriate bitstream). The trained neural network(s) may be a multilayer feedforward neural network (FNN), a convolutional neural network (CNN), a recurrent neural network (RNN), or any other suitable neural network. An example hardware computing platform of the infrastructure equipment 61 is described in more detail with respect to FIG. 7.”

“FIGS. 4-6 show example processes 400-600, respectively, in accordance with various embodiments. For illustrative purposes, the operations of processes 400-600 are described as being performed by the various subsystems of the infrastructure equipment 61 and/or the RTMS 300 of FIG. 3. While FIGS. 4-6 illustrate particular examples and orders of operations, the depicted orders of operations should not be construed to limit the scope of the embodiments in any way. Rather, the depicted operations may be re-ordered, broken into additional operations, combined, and/or omitted altogether while remaining within the spirit and scope of the present disclosure.

“FIG. 4 shows an example map generation process 400 for generating real-time dynamic maps based on sensor data from the individual sensors 262 and sensor data associated with the objects 64. Process 400 can also be used to fill in perception gaps and augment the dynamic maps when occlusions are detected. Process 400 begins at operation 402, where the infrastructure equipment 61 (or the map processing subsystem 309) calculates or determines a grid-based environment model of the coverage area 63, which is overlaid on the observed coverage area 63 (e.g., a road). The grid cells can be either 2D or 3D, and each grid cell can be of equal size and defined using absolute GNSS coordinates. Operation 402 may include operation 424, where the infrastructure equipment 61 (or the map processing subsystem 309) determines the map grid boundaries for the environmental model, and operation 426, where the infrastructure equipment 61 (or the map processing subsystem 309) stores the grid boundaries and the defined environmental model in the mapping DB 320.
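The grid-based environment model of operation 402 can be sketched as follows; the cell size and origin are illustrative assumptions:

```python
import math

# Sketch of the grid-based environment model: the coverage area is
# overlaid with equally sized cells anchored at an absolute origin
# (e.g., a GNSS-fixed corner). Cell size and origin are assumptions.

CELL_SIZE_M = 10.0
ORIGIN = (0.0, 0.0)

def cell_of(x_m, y_m):
    """Map a position in the coverage area to its 2D grid cell index."""
    return (int((x_m - ORIGIN[0]) // CELL_SIZE_M),
            int((y_m - ORIGIN[1]) // CELL_SIZE_M))

def grid_boundaries(width_m, height_m):
    """Number of cells along each axis of the environmental model."""
    return (math.ceil(width_m / CELL_SIZE_M),
            math.ceil(height_m / CELL_SIZE_M))

cell = cell_of(123.0, 48.0)
bounds = grid_boundaries(500.0, 200.0)
```

Anchoring the grid at absolute coordinates lets both the RSU and the objects 64 refer to the same cell indices without exchanging the full map.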

“At operation 404, the infrastructure equipment 61 (or the handshake subsystem 306) performs an initial handshake procedure (e.g., handshake procedure 500 of FIG. 5) with objects 64 as the objects 64 enter the coverage area 63. Operation 404 may include operation 428, in which the infrastructure equipment 61 (or the handshake subsystem 306) assigns a unique object ID to each object 64, and operation 430, in which the infrastructure equipment 61 (or the handshake subsystem 306) requests the technical capabilities of the moving objects 64. These technical capabilities may be obtained and stored in the object DB 330 (not illustrated by FIG. 4) to determine whether or not a specific object 64 is able to participate in the real-time mapping services and/or fill in perception gaps.

“At operation 406, the infrastructure equipment 61 (or the object detector 305) continuously tracks and records the positions of detected objects 64 using sensor data from the fixed infrastructure sensors 262. The sensor data are used to determine the position, speed, direction (heading), and/or other properties of the moving objects 64. The map processing subsystem 309 makes the properties of the objects 64 under observation available as a concise and time-synchronized map 324, which is transmitted back to the tracked objects 64 to aid navigation, CA/AD, and/or other applications implemented by the objects 64. The objects 64 can include a means to at least coarsely localize themselves and communicate their positions in a protocol for secure initial identification. This could include, for instance, GNSS or DGNSS positioning systems, LTE location services, or any other electronic component that can determine the location or position of an object 64.

“At operation 408, the infrastructure equipment 61 (or the object detector 305) detects objects 64 that are traveling towards (or are predicted to travel to) a region within the coverage area 63 for which additional data is needed or will be needed in the near future. Operation 408 may include operation 432, in which the infrastructure equipment 61 (or the object detector 305) detects grid cells with obstructions, and/or operation 434, in which the infrastructure equipment 61 (or the object detector 305) selects grid cells for verification. In embodiments including operations 432 and 434, once the grid cells have been selected, the infrastructure equipment 61 (or the object detector 305) selects or picks one or more objects 64 that are entering (or are predicted to enter) the selected grid cells. At operation 410, the infrastructure equipment 61 (or the object detector 305) determines whether all cells of the environmental model are covered. If not, the infrastructure equipment 61 (or the object detector 305) loops back to perform operation 408.

If the infrastructure equipment 61 (or the object detector 305) determines that all cells are covered, then the infrastructure equipment 61 (or the map processing subsystem 309) performs operation 412 to augment the environmental data to be broadcast or multicast to the objects 64. Operation 412 may include operation 438, in which the infrastructure equipment 61 (or the map processing subsystem 309) iterates through the list of tracked objects 64 (e.g., ID records 331 of FIG. 3) for the unique IDs assigned to the objects 64 selected at operation 408, and operation 440, in which the infrastructure equipment 61 (or the messaging subsystem 307) adds flags for the selected objects 64. In embodiments, the messaging subsystem 307 augments the payload data of the real-time dynamic map messages for the identified moving objects 64 with flags or DEs that indicate send information requests instructing the selected objects 64 to transmit a receipt ACK, sensor data, and/or position data. Receipt ACKs may be requested from several objects 64 in order to determine or achieve statistical confidence in communication cell coverage, while sensor data and position data may be requested to fill in perception gaps in the overall map 324 generated by the map processing subsystem 309. These flags and DEs can be included in future broadcast/multicast packets or PDUs. Because the infrastructure equipment 61 needs feedback information only from the selected objects 64, the infrastructure equipment 61 (or the messaging subsystem 307) can switch the responses of objects 64 on and off for specific grid cells. At operation 414, the infrastructure equipment 61 (or the messaging subsystem 307) broadcasts the messages, including the real-time dynamic map segment data, to all tracked objects 64, including the objects 64 that were selected for the send information requests.

“At operation 416, the infrastructure equipment 61 (or the inter-object communication subsystem 312) continuously monitors the selected objects 64 for any response messages. The response messages can be sent separately over a point-to-point connection using a suitable wireless/V2X communication technology, or can be multicast or broadcast transmissions. In some embodiments, other objects 64 may listen for the multicast or broadcast response messages, compare those data with their own observations, and annotate their own map data based upon the comparison.

“At operation 418, the infrastructure equipment 61 (or main system controller 302 and/or map processing subsystem 309) processes the data received from the selected objects 64. When the send information request included a request for an ACK reply message, a missing ACK could indicate poor wireless coverage, such as a cell coverage hole or network overload. When the send information request included a request for additional sensor data, the map processing subsystem 309 checks the received data for consistency and then fuses the received sensor data with the sensor data from the infrastructure sensors 262. Trust mechanisms can be used in some cases to verify or determine the reliability of the received sensor data. For example, “sanity checks” could be performed by comparing sensor data from several objects 64 located at or near the cells selected at operation 434 with sensor data captured by the sensors 262 for the same cells. As another example, out-of-band authentication and safety measures, such as safety islands, TXE, or TPM elements within the objects 64, may be used.
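One possible form of the "sanity check" mentioned above is to compare position reports for the same grid cell from several objects against the infrastructure sensors 262 and accept the object data only if the two agree within a tolerance. The averaging rule and tolerance below are illustrative assumptions, not the disclosed trust mechanism.

```python
def consistent(infra_positions, object_positions, tol=2.0):
    """Compare (x, y) readings for one grid cell from the fixed sensors 262
    against readings reported by objects 64; True if the centroids agree
    to within `tol` (same length units as the positions)."""
    if not infra_positions or not object_positions:
        return False  # nothing to compare against
    ix = sum(p[0] for p in infra_positions) / len(infra_positions)
    iy = sum(p[1] for p in infra_positions) / len(infra_positions)
    ox = sum(p[0] for p in object_positions) / len(object_positions)
    oy = sum(p[1] for p in object_positions) / len(object_positions)
    return ((ix - ox) ** 2 + (iy - oy) ** 2) ** 0.5 <= tol
```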

“FIG. 5 shows an example handshake procedure 500 in accordance with various embodiments. The handshake procedure 500 could correspond to operations 404, 428, and 430 of process 400 shown by FIG. 4. The handshake procedure 500 between an object 64 and the infrastructure equipment 61 facilitates object annotation. Objects 64 that enter the coverage area 63 are assigned a unique ID, which allows them to inform the infrastructure equipment 61 about their object capabilities in wireless communication, self-localization, and environment sensing. The initial handshake procedure 500 between the object 64 and the infrastructure equipment 61 may occur when the object 64 passes by the sensors 262 for a first time (e.g., when entering an intersection or merging onto a highway in the coverage area 63). If the object 64 is temporarily occluded, the handshake procedure 500 may be repeated later, when the sensors 262 of the infrastructure equipment 61 detect the object again.

“Process 500 begins at operation 502, where the infrastructure equipment 61 (or the object detector 305) detects the object 64, making the object known to the infrastructure equipment 61. At operation 504, the infrastructure equipment 61 (or the handshake subsystem 306) generates a unique ID for the object 64. The unique ID can be reused after the object 64 leaves the coverage area 63, and the unique IDs may only need to be unique within the scope of the infrastructure equipment 61, or a portion thereof. At operation 506, the infrastructure equipment 61 may provide an optional service to improve the object 64's positioning information (e.g., using differential global navigation satellite systems (DGNSS)). At operation 508, the object 64 initiates the procedure (“init( )”) with the infrastructure equipment 61: using a suitable V2X communication technology 250, the object 64 establishes a connection to the infrastructure equipment 61 and sends a request package announcing its intention to use the map data. At operation 510, the infrastructure equipment 61 (or the handshake subsystem 306) provides a time synchronization (“TimeSync”) message to the object 64, which allows the object 64 to synchronize its local time with that of the infrastructure equipment 61. At operation 512, the object 64 transmits its current position and a timestamp back to the infrastructure equipment 61 (or the handshake subsystem 306).

“At operation 514, the infrastructure equipment 61 (or handshake subsystem 306) sends an inquiry for the technical capabilities of the tracked object 64. At operation 516, the object 64 sends a message indicating or including its technical capabilities. The technical capabilities can be defined as a list of message types, along with details about the content of each message, that the object 64 can send back. This allows the infrastructure equipment 61, using simple triggers such as single-bit flags, to send information requests back to the object 64. As long as the object 64 is within the communication range of the infrastructure equipment 61, the infrastructure equipment 61 keeps the relationship between the object 64's unique ID and the supported message types. The infrastructure equipment 61 (or handshake subsystem 306) transmits the unique ID to the object 64 to complete the handshake procedure 500 at operation 518. After the handshake procedure 500, the stationary sensors 262 are used to track the object 64 and continuously update its position while it travels within the coverage area 63.
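The message sequence of handshake procedure 500 (operations 502 through 518) might be sketched end to end as follows. The transport is abstracted away, and the function name, message labels, and capability format are assumptions for illustration only.

```python
import uuid

def run_handshake(object_capabilities):
    """Simulate one pass of handshake procedure 500; returns the message
    sequence and the ID-to-capabilities registry the infrastructure keeps."""
    log = []
    unique_id = uuid.uuid4().hex        # operation 504: generate unique ID
    log.append("init")                  # operation 508: object announces itself
    log.append("TimeSync")              # operation 510: time synchronization
    log.append("position+timestamp")    # operation 512: object reports position
    log.append("capability_inquiry")    # operation 514: equipment asks
    log.append("capabilities")          # operation 516: object answers
    log.append("assign_id")             # operation 518: unique ID sent to object
    # Kept while the object 64 remains within communication range.
    registry = {unique_id: object_capabilities}
    return log, registry
```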

“In some embodiments, the handshake procedure 500 can be repeated periodically, or after a set number of infrastructure nodes have been passed, to calibrate the tracking capabilities of the infrastructure sensors 262 and/or the object detector 305. Privacy concerns can be addressed in some embodiments by randomly changing the unique IDs between handshakes. Following the handshake procedure 500, the environmental map 324 may be updated using the combination of the location information provided by the object 64 and consecutive object tracking using sensor data from the sensors 262. The object 64 may also provide sensor data to update the environmental map 324. Some embodiments allow for consecutive transmission of the map data 324 as one-way communication from the infrastructure equipment 61 to the objects 64, which reduces or minimizes the latency of the wireless information exchange. Some embodiments may assign the unique IDs using universally unique identifiers (UUIDs), which could involve suitable random number generators, hash functions, and the like. Some embodiments can manage the unique IDs using object tracking techniques, such as the ones discussed above; some object tracking methods use IDs for tracking objects in video or other sensor data. One embodiment may use a buffer or table to map the IDs of the objects being tracked to the unique IDs broadcast in the map broadcast.
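The ID table and the privacy measure of re-randomizing IDs between handshakes can be sketched together: a table maps the (stable) sensor-tracking ID to a broadcast UUID that is regenerated on each rotation. The class and method names are hypothetical.

```python
import uuid

class IdTable:
    """Buffer mapping tracking IDs to the unique IDs used in map broadcasts."""

    def __init__(self):
        self._table = {}  # tracking_id -> broadcast UUID (hex string)

    def assign(self, tracking_id):
        """Assign a fresh random broadcast ID (UUID-based, per the text)."""
        self._table[tracking_id] = uuid.uuid4().hex
        return self._table[tracking_id]

    def rotate(self, tracking_id):
        """Re-randomize the broadcast ID between handshakes for privacy."""
        old = self._table.get(tracking_id)
        new = self.assign(tracking_id)
        return old, new
```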

“FIG. 6 shows an object selection procedure 600 for selecting objects 64 to which send information requests are sent, in accordance with various embodiments. The object selection process 600 can be used to identify subsections within the coverage area 63 that require additional information, and to choose individual objects 64 from the list of tracked objects 64. The infrastructure equipment 61 can use the process 600 to enhance environmental perception, for example, by augmenting the dynamic maps with sensor readings from the tracked objects 64. The process 600 also allows the infrastructure equipment 61 to verify that messages sent using multicast or broadcast protocols are being received by the objects 64.

“Referring to FIG. 6, the process 600 begins at operation 602, where sections of the coverage area 63 that require additional information are selected. The reason for requiring the additional information may vary, e.g., filling in perception gaps or receiving acknowledgement messages. For filling perception gaps, the map processing subsystem 309 detects obstructions of the fixed sensors 262 and other causes that reduce the completeness of the environmental map 324, and selects the sections or regions (e.g., grid cells) that correspond with the occluded area at operation 602. For verifying the reception of broadcast/multicast messages (e.g., receiving ACKs), the main system controller 302, object detector 305, or some other subsystem of the RTMS 300 selects test sections or regions (e.g., grid cells) at random, using a round robin scheduling mechanism, based on a user/operator selection, or based on observed or simulated network traffic. In some embodiments, a network management (NM) entity may instruct the infrastructure equipment 61 to perform the verification for self-optimizing/organizing network (SON) functions.”

“At operation 604, the map processing subsystem 309, the main system controller 302, the object detector 305, or another subsystem of the RTMS 300 determines the sub-sections of the coverage area 63 that require additional information. The subsystem can determine the boundaries of the sub-sections or regions based on the map segments 325 and/or the overall map 324 stored in the mapping DB 320.

“At operation 606, the object detector 305 generates or creates a list of objects 64 passing through (or predicted to travel at or close to) the selected sections (“L1” in FIG. 6), which could be based upon the ID records 331 from the object DB 330. This list L1 could be stored in a suitable structure in the object DB 330. At operation 608, the object detector 305 generates or creates a list of objects 64 from L1 that have the capabilities to provide the requested information (“L2” in FIG. 6), which could be based upon the capabilities records 332 from the object DB 330. This list L2 could be stored in a suitable structure in the object DB 330. At operation 610, the object detector 305 generates or creates a sorted list of timestamps and object IDs at which to send the send information requests to each object 64 in L2 (“L3” in FIG. 6), which could be based upon the stored object data records 333 obtained from the object DB 330. These records may indicate the velocity/speed, position, and direction of the objects 64. This list L3 can be stored in a suitable structure in the object DB 330. The list of timestamps may also be based upon the speed, position, and direction/heading of each object 64 in L2, as detected using sensor data from the sensors 262, and/or a known communication delay.
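The construction of lists L1, L2, and L3 (operations 606 through 610) can be sketched as successive filters over the tracked-object records. The record layout, the send-time formula, and the 0.1 s communication delay below are simplified assumptions standing in for records 331, 332, and 333.

```python
def build_lists(tracked, selected_cells, now=0.0):
    """tracked: list of dicts with id, cell, capable, distance, speed fields."""
    # L1 (operation 606): objects passing through the selected sections.
    l1 = [o for o in tracked if o["cell"] in selected_cells]
    # L2 (operation 608): objects in L1 capable of providing the information.
    l2 = [o for o in l1 if o["capable"]]
    # L3 (operation 610): sorted (timestamp, object ID) pairs; the send time
    # here is derived from distance/speed plus an assumed communication delay.
    delay = 0.1
    l3 = sorted((now + o["distance"] / o["speed"] + delay, o["id"]) for o in l2)
    return l1, l2, l3
```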

“At operation 612, the inter-object communication subsystem 312 broadcasts/multicasts the send information requests at each determined timestamp in the list L3. At operation 614, the inter-object communication subsystem 312 determines whether there are still timestamps in the list L3; if so, the inter-object communication subsystem 312 loops back to perform operation 612. If at operation 614 the inter-object communication subsystem 312 determines that there are no more timestamps in the list L3, then the inter-object communication subsystem 312 proceeds to operation 616, which involves repeatedly broadcasting/multicasting the send information requests as long as each object 64 in L2 is at or near the selected section or region. At operation 616, the inter-object communication subsystem 312 determines whether any objects 64 in L2 are located in the selected sections or regions; if so, the inter-object communication subsystem 312 loops back to perform operation 612. If the inter-object communication subsystem 312 finds that none of the objects 64 in L2 are at or within the selected sections or regions, then the process 600 may end or be repeated as needed.
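The loop structure of operations 612 through 616 can be sketched as follows; the `still_present` and `transmit` callbacks are assumed stand-ins for the presence check and the broadcast/multicast transmission, and the round cap is added only so the sketch terminates.

```python
def broadcast_schedule(l3, still_present, transmit, max_rounds=10):
    """l3: sorted (timestamp, object ID) pairs from operation 610."""
    # Operations 612/614: one transmission per timestamp remaining in L3.
    for _, obj_id in l3:
        transmit(obj_id)
    # Operation 616: keep re-broadcasting while capable objects remain
    # at or near the selected sections (bounded here for the sketch).
    rounds = 0
    while rounds < max_rounds and still_present():
        for _, obj_id in l3:
            transmit(obj_id)
        rounds += 1
    return rounds
```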

“II.

“FIG. 7 illustrates an example of infrastructure equipment 700 in accordance with various embodiments. The infrastructure equipment 700 (or “system 700”) may be used to implement any of the elements discussed previously with respect to FIGS. 1 through 6, such as a base station or radio head, a RAN node such as the RAN nodes 256 of FIG. 2, the server(s) 260, and/or any other element/device discussed herein. The system 700 could also be implemented in or by a UE.

Summary for “Sensor network enhancement mechanisms”

“The background description provided herein is for the purpose of presenting the context of the disclosure. Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.

Computer-assisted or (semi-)autonomous driving (CA/AD) vehicles can include various technologies for perception, such as camera feeds and sensory information. The European Telecommunications Standards Institute (ETSI) publishes an Intelligent Transport Systems (ITS) standard, which includes telematics and various types of communications between vehicles (e.g., vehicle-to-vehicle (V2V)), between vehicles and fixed locations (e.g., vehicle-to-infrastructure (V2I)), between vehicles and networks (e.g., vehicle-to-network (V2N)), between vehicles and handheld devices (e.g., vehicle-to-pedestrian (V2P)), and the like. Dedicated short-range communications (DSRC) and/or cellular vehicle-to-everything (C-V2X) protocols provide communications between vehicles and the roadside infrastructure. Cooperative ITS (C-ITS) could support fully autonomous driving, including wireless short-range communications (ITS-G5) dedicated to automotive ITS and road transportation and traffic telematics. C-ITS could provide connectivity between road participants as well as infrastructure.

“Disclosed embodiments relate to sensor networks, in particular sensor networks for vehicular applications. Many vehicle service providers (e.g., mapping, navigation, and traffic management services) and communication service providers (e.g., C-V2X and DSRC) use sensor data to provide accurate, up-to-date services. For example, the Safespot project includes a Local Dynamic Map, which provides real-time information about vehicles and traffic on roads. These services can receive sensor data from both fixed sensor arrays and vehicle-mounted/embedded sensors. Sensor data can become unavailable at times (e.g., due to “occlusions”), which could negatively impact the ability of the service providers to create maps or provide other services. For the infrastructure to be reliable, the data that it stores must be accurate, complete, and up-to-date.

“In various embodiments, the sensor accuracy of infrastructure-based systems is enhanced by information from the clients (e.g., vehicles) that are being served. In contrast to current V2X solutions that require constant signaling between infrastructure equipment and user equipment, in various embodiments the clients only transmit information to the infrastructure equipment when requested. The embodiments thereby reduce the communication overhead between the clients and the infrastructure equipment. The embodiments also include multicast and broadcast communication by the infrastructure equipment, which helps to minimize signaling overhead and wireless spectrum congestion.

“In the disclosed embodiments, infrastructure equipment, such as a roadside unit (RSU), is communicatively coupled to a sensor array. The sensor array may include one or more sensors mounted on the infrastructure equipment, and/or one or more fixed sensors located at various locations within a defined coverage area. The infrastructure equipment uses the sensor array to collect sensor data representative of objects within the coverage area. The infrastructure equipment can also identify regions that are not covered adequately by the sensor array (e.g., “sensor coverage gaps” or “occlusions”) by identifying gaps in the currently available sensor data (e.g., “perception gaps”). The infrastructure equipment tracks objects, such as vehicles, within the coverage area. When the infrastructure equipment finds an object in (or about to enter) a perception gap area, it requests sensor data from the object's onboard sensors. The infrastructure equipment collects this sensor data and then uses it to supplement its existing knowledge of the coverage area (i.e., filling the perception gaps). Other embodiments can be described and/or claimed.

“I. VEHICLE-TO-EVERYTHING EMBODIMENTS”

“Turning now to FIG. 1, an example environment 60 in which various embodiments of the present disclosure may be practiced is shown. Environment 60 is a system that includes sensors, compute units, and wireless communication technology. The infrastructure equipment 61 a, 61 b is communicatively coupled to the sensor arrays 62 a, 62 b, respectively. Each sensor array 62 a, 62 b includes one or more sensors that are positioned along a section of the physical coverage area 63; a “sector” is a section of the physical coverage area that is covered by a single sensor. The sensor arrays 62 a, 62 b detect one or more objects 64 a, 64 b, which travel within the respective sections of the physical coverage area 63. Wireless communication technology may be used to connect the objects 64 a and 64 b with the infrastructure equipment 61 a, 61 b and with each other. The sensor array 62 a includes one or more sensors that provide object identification information to the infrastructure equipment 61 a, while the sensor array 62 b includes one or more sensors that provide object recognition information to the infrastructure equipment 61 b (e.g., via radar, ultrasonic, or camera sensors). The infrastructure equipment 61 a, 61 b may also exchange information regarding the vehicles 64 a, 64 b they are tracking, and may support collaborative decision-making.

“In this example, the objects 64 a, 64 b are vehicles (referred to as “vehicles 64 a, 64 b”) traveling on a road within the coverage area 63 (referred to as “road 63”). For illustrative purposes, the following description is provided for deployment scenarios including vehicles in a two-dimensional (2D) freeway/highway/roadway environment, wherein the vehicles are automobiles. However, the embodiments described herein also apply to other types of vehicles, such as trucks, buses, motorboats, and motorcycles, as well as any other motorized device capable of transporting people or goods. The embodiments described herein may also apply to three-dimensional (3D) deployment scenarios in which some or all of the vehicles are flying objects, such as drones, aircraft, unmanned aerial vehicles (UAVs), and/or similar motorized devices.

“The vehicles 64 a, 64 b may be any type of motorized vehicle used for transporting people or goods, each equipped with an engine, transmission, axles, wheels, and control systems for driving, parking, passenger comfort, safety, and the like. As used herein, the terms “motor,” “motorized,” and the like refer to devices that convert one form of energy into mechanical energy, and include internal combustion engines, compression combustion engines, and electric motors. The vehicles 64 a, 64 b of FIG. 1 may represent motor vehicles of various makes, models, trims, and the like. The wireless communication technology used by the vehicles 64 a and 64 b may include V2X communication technology, such as Third Generation Partnership Project (3GPP) cellular V2X (C-V2X) technology, which allows the vehicles 64 a, 64 b to communicate directly with each other and with the infrastructure equipment 61 a, 61 b. The vehicles 64 a, 64 b use positioning circuitry to (coarsely) determine their respective geolocations, and communicate with the infrastructure equipment 61 a, 61 b in a secure and reliable way, which allows the vehicles 64 a, 64 b to synchronize with the infrastructure equipment 61 a, 61 b.

“The infrastructure equipment 61 a, 61 b can provide environmental sensing services, and in this case, the infrastructure equipment 61 a, 61 b could provide environmental sensing services for the vehicles 64. In particular, the infrastructure equipment 61 a, 61 b can provide environmental sensing services that map dynamic environments, such as the road 63, in real time. Real-time mapping of a dynamic environment is used for high-reliability decision-making, such as when the vehicles 64 are CA/AD vehicles. Intelligent Transport Systems (ITS) may use the real-time mapping to create a local dynamic map (LDM), which structures all data necessary for vehicle operation and gives information about highly dynamic objects, such as the vehicles 64 on the road 63. LDM input can be provided either by user equipment (UEs) equipped with sensors, such as the vehicles 64, or by the fixed sensor arrays 62 a, 62 b located along the road 63. No matter the source of the sensor data, the environment model created using the sensor data must be as accurate and complete as possible to provide reliable real-time mapping services.

Current methods for providing real-time mapping services are based primarily on a complex set of sensors in each of the UEs, as well as a non-deterministic set of V2X protocols, to enhance understanding of the area of interest. For semi-autonomous or autonomous driving, environmental sensing is achieved by combining various types of sensor data, including radar, light detection and ranging (LiDAR), and visual data (e.g., images and/or videos). Differential GNSS (DGNSS) is used to improve localization based upon GNSS systems, wherein correction data from fixed stations with known geopositions is provided. These data fusion methods are complex and require large storage resources and high power consumption.

“Some service providers (or application developers) rely only on in-vehicle sensing capabilities to provide real-time mapping services. Real-time mapping that relies only on in-vehicle sensors and computing systems adds significant weight, cost, and energy consumption to each vehicle 64. Moreover, a single vehicle 64 has a limited view of the coverage area 63, in contrast to environmental sensing systems that use the infrastructure equipment 61 a and 61 b, discussed infra. For these reasons, vehicle-only real-time mapping is not always suitable for autonomous or semi-autonomous driving applications.

Some mapping service providers attempt to combine the sensing capabilities of multiple vehicles 64 by allowing the vehicles 64 to exchange in-vehicle sensor data. V2X technology, for example, provides lower-level network protocols that allow direct communication between the vehicles 64 (e.g., DSRC links, sidelink communications over the PC5 interface of C-V2X systems) and the infrastructure equipment 61 a, 61 b, without specifying higher-level application logic. Multicast and broadcast protocols are also available in cellular communication systems (e.g., evolved Multimedia Broadcast Multicast Service (eMBMS)) to allow one-to-many communication. However, broadcast/multicast and V2X protocols do not have acknowledgement mechanisms, which means that the timeliness and completeness of the received messages cannot be guaranteed. Real-time mapping services that rely on V2X or broadcast/multicast technologies to share sensor data among the vehicles 64 therefore cannot meet the accuracy and completeness requirements of most autonomous and semi-autonomous driving applications.

The ETSI Intelligent Transport Systems (ITS) technology has disadvantages similar to those of V2X and other broadcast/multicast technologies. ITS is a system that supports the transportation of goods and people with information and communication technologies, and is used to safely and efficiently use transport infrastructure and transport methods (e.g., cars, trains, planes, ships, and other transport vehicles). ITS infrastructure supports traffic-related events via Decentralized Environmental Notification Messages (DENMs) and Cooperative Awareness Messages (CAMs). CAMs are messages exchanged within the ITS network among ITS stations (ITS-Ss) to create and maintain mutual awareness and support the cooperative performance of the vehicles 64. DENMs contain information about road hazards or unusual traffic conditions, such as the type and location of a road danger and/or abnormal conditions. ITS also includes a Local Dynamic Map (LDM), which is a data store located within an ITS-S that contains information relevant to the operation and safety of ITS applications. The LDM is a repository for information from facilities (e.g., Cooperative Awareness (CA) and Decentralized Environmental Notification (DEN) services) as well as for applications that need information on moving objects (e.g., nearby vehicles) and stationary objects (e.g., traffic signs). High-frequency data/information regarding the location, speed, and direction of each vehicle 64 is included in both the DEN and CA services. However, the ETSI ITS works on a best-effort basis and cannot guarantee that all messages will be received on time, and not all vehicles 64 are equipped with ITS-based V2X communication technology to send these messages. It is impossible to guarantee the completeness and timeliness of the receipt of CAMs/DENMs because they are sent from vehicles 64 of different makes and models, which means that the source, time, and location information of the CAMs/DENMs can be ambiguous. ITS currently does not have a coordinating authority to ensure the accuracy, reliability, and timeliness of information. Real-time mapping services that rely on ITS technologies to share sensor data among the vehicles 64 are therefore not able to meet the requirements of most autonomous and semi-autonomous driving applications.

Some service providers use only the infrastructure equipment 61 a, 61 b and the fixed sensor arrays 62 a, 62 b to provide real-time mapping services. Infrastructure-only systems that provide real-time mapping services also cannot meet the accuracy and completeness requirements of most autonomous or semi-autonomous driving applications, because infrastructure-only solutions are susceptible to occlusions in their sensed environment, such as objects placed in the line of sight of one of the sensors in a sensor array 62 a, 62 b. This is especially true given the practical limitations on the deployment of individual sensing elements at the area of concern.

“Accordingly, in various embodiments, real-time mapping services are provided by the infrastructure equipment 61 a, 61 b, which monitors the objects 64 a, 64 b using the individual sensors within the sensor arrays 62 a, 62 b. Each infrastructure equipment 61 a, 61 b includes a map processing subsystem (e.g., map processing subsystem 309 of FIG. 3), which uses the sensor data to determine the position, speed, and other properties of the moving objects 64 a, 64 b within the coverage area 63, and generates a dynamic map of the coverage area 63 in real time. The infrastructure equipment 61 a, 61 b is communicatively connected to the sensor arrays 62 a, 62 b, which detect the objects 64 a, 64 b within the coverage area 63. The map processing subsystem (e.g., map processing subsystem 309 of FIG. 3) includes an object detector (e.g., object detector 305 of FIG. 3) that performs various logical operations to detect the objects 64 within the coverage area 63 based on the sensor data, and a data fuser (e.g., data fuser 352 of FIG. 3) that performs various logical operations to fuse the collected sensor data together. Any suitable technique can be used to fuse the sensor data (e.g., Kalman filters, Gaussian Mixture Models, etc.). Time synchronization may also be used to fuse the sensor data, using information about the location, speed, and size of each object 64 as identified by the object detector (e.g., object detector 305 of FIG. 3). The map processing subsystem (e.g., map processing subsystem 309 of FIG. 3) performs various logical operations to generate an overall map of the coverage area 63, which can be generated using any suitable technology. Data about the moving objects 64 can be extracted and combined into one map that includes all moving objects 64 within the coverage area 63. This map includes all objects 64 detected by the sensors communicatively coupled with the infrastructure equipment 61, and may be represented as an overall map of the coverage area 63. In some embodiments, the object detector (e.g., object detector 305 of FIG. 3) may use the relative movement of the objects 64 and the sensors of the sensor arrays 62 to detect sensor blind spots, which may arise from the changing viewing angles of the objects 64 as they pass by the stationary sensors. Some embodiments combine different types of sensors, sensor positions, and sensing directions to provide as much coverage as practical. The coverage area 63 may include stationary sensors that can detect the moving objects 64 such that there are no blind spots under normal traffic conditions. This is possible because most constraints on vehicle-mounted sensors (e.g., weight constraints, space constraints, power consumption constraints, etc.) do not apply to sensors within the stationary sensor arrays 62 located at or near the coverage area 63. However, these proactive measures may not be enough to eliminate occlusions from the coverage area 63, for example, objects placed in the line of sight of sensors in a sensor array 62.
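The combination of per-sensor detections into one overall map, and the identification of cells that no fixed sensor observes, might be sketched as follows. The grid representation and set-union merge rule are illustrative assumptions, not the disclosed fusion technique (which may use, e.g., Kalman filters or Gaussian Mixture Models).

```python
def fuse(grid_cells, sensor_reports):
    """grid_cells: IDs of all cells in the coverage area.
    sensor_reports: one dict per fixed sensor, cell_id -> detected object IDs.
    Returns the fused per-cell detections and the unobserved (gap) cells."""
    fused = {c: set() for c in grid_cells}
    covered = set()
    for report in sensor_reports:
        for cell, objs in report.items():
            covered.add(cell)          # the sensor observes this cell
            fused[cell].update(objs)   # merge its detections into the map
    gaps = set(grid_cells) - covered   # candidate occlusions/perception gaps
    return fused, gaps
```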

“In some embodiments, the infrastructure equipment 61 a, 61 b also includes wireless communication circuitry (not illustrated by FIG. 1), which is used to obtain sensor data from the individual objects 64 a, 64 b and to provide real-time mapping data to the objects 64 a, 64 b. In particular, the properties of the objects 64 a, 64 b under observation are made available as a concise and time-synchronized map, which the objects 64 a, 64 b can use to aid their trajectory planning or for other applications/services.”

“According to various embodiments, the map processing circuitry detects gaps within the coverage area 63 (referred to as “perception gaps”) based on the sensor data, and also analyzes acknowledgements from selected objects 64 a, 64 b within the coverage area 63. The map processing circuitry can augment and verify the sensor data from the fixed sensors of the sensor arrays 62 a, 62 b by requesting position data and sensor information from selected moving objects 64 a, 64 b within the observed area 63. Before the requests for sensor or position data are sent, the objects 64 a, 64 b are identified by tracking the objects 64 a, 64 b using the sensors of the sensor arrays 62 a, 62 b, so that the responses from the objects 64 a, 64 b can be mapped to geolocations within the coverage area 63. This allows the infrastructure equipment 61 a, 61 b to request sensor data or position information from localized objects 64 a, 64 b, which helps to reduce spectrum crowding while keeping the signaling overhead to a minimum. These and other aspects of the embodiments of the present disclosure are described further below.

Referring now to FIG. 2, which illustrates an overview of an environment 200 in which the sensor network technology of the present disclosure may be incorporated and used. The illustrated embodiment includes a number of vehicles 64 (including the vehicles 64 a and 64 b of FIG. 1), the infrastructure equipment 61 a, 61 b, a MEC host 257, an access node 256, a network 258, and one or more servers 260.

“The environment 200 could be considered a wireless sensor network (WSN), in which the entities within the environment 200 might be considered?network nosdes? or?nodes. They communicate in multi-hop fashion among themselves. The term “hop” may refer to a single node or intermediary device through which data packets traverse a path between a source and destination device. The term “hop” may refer to a single node or intermediary device that transmits data packets along a path from a source device to a destination device. Intermediate nodes (i.e., nodes that are located between a source device and a destination device along a path) forward packets to a next node in the path, and in some cases, may modify or repackage the packet contents so that data from a source node can be combined/aggregated/compressed on the way to its final destination. FIG. 2. The architecture of environment 200 is a decentralized V2X network that includes 64 vehicles with one or more network interfaces. Infrastructure equipment 61a and 61b act as roadside units (RSUs) in FIG. As used herein, the terms ?vehicle-to-everything? V2X and vehicle-to-everything are interchangeable terms. may refer to any communication involving a vehicle as a source or destination of a message, and may also encompass or be equivalent to vehicle-to-vehicle communications (V2V), vehicle-to-infrastructure communications (V2I), vehicle-to-network communications (V2N), vehicle-to-pedestrian communications (V2P), enhanced V2X communications (eV2X), or the like. This V2X application can make use of “cooperative awareness”. to provide more intelligent services for end-users. The vehicles 64, radio access nosdes, pedestrian UEs, and other devices may gather information about their environment (e.g. from nearby sensors or vehicles) and process that data to create and share more intelligent services such as autonomous driving and cooperative collision warning. 
These V2X cooperative awareness mechanisms are similar to the cooperative awareness services provided by ITS.

The vehicles 64 shown in FIG. 2 may be identical or similar to the vehicles 64a, 64b discussed previously, and may be collectively referred to as a "vehicle 64" or "vehicles 64." One or more of the vehicles 64 may include a vehicular user equipment (vUE) system 201, one or more sensors 220, and one or more driving control units (DCUs). The vUE system 201 is a computing device or system that is mounted on, embedded in, or otherwise integrated into a vehicle 64. The vUE system 201 includes a number of user or client subsystems or applications, such as an in-vehicle infotainment (IVI) system, in-car entertainment (ICE) devices, an instrument cluster (IC), a head-up display (HUD) system, onboard diagnostic (OBD) systems, dashtop mobile equipment (DME), mobile data terminals (MDTs), a navigation subsystem/application, a vehicle status subsystem/application, and/or the like. The term "user equipment" or "UE" may refer to any type of wireless or wired device or computing device that includes a communication interface, such as the communication technology 250. Furthermore, the vUE system 201 and/or the communication technology 250 may be referred to as a "vehicle ITS-S," or simply as an "ITS-S," when ITS technology is being used.

The DCUs 220 are hardware elements that control various subsystems of the vehicles 64, and include, for example, electronic/engine control units (ECUs), electronic/engine control modules (ECMs), embedded systems, microcontrollers, control modules, engine management systems (EMS), and the like. The sensors 220 provide sensor data to the DCUs 220 and/or to one or more other subsystems of the vehicles 64 to enable the DCUs 220 and/or the other subsystems to control the respective vehicle systems. The sensors 220 may include magnetic, thermal, infrared, radar, and/or other similar sensing capabilities.

The communication technology 250 communicatively couples the vehicles 64 with one or more access networks (ANs) or radio access networks ((R)ANs). The (R)ANs include one or more (R)AN nodes, such as the infrastructure equipment 61a, 61b and the (R)AN node 256 shown in FIG. 2, which enable connections to corresponding networks. As used herein, the terms "access node," "access point," and the like may refer to network elements or equipment that provide the radio baseband functions and/or wire-based functions for data and/or voice connectivity between a network and one or more users. The term "network element" may refer to a physical or virtualized computing device that is part of a wired or wireless communication network. The (R)AN nodes may also be referred to as base stations (BSs), next generation NodeBs (gNBs), RAN nodes, NodeBs, road side units (RSUs), transmission reception points (TRPs), and the like. The (R)AN nodes can perform various radio network controller (RNC) functions, such as radio bearer management, radio resource management, data packet scheduling, mobility management, and uplink/downlink dynamic radio resource management. Example implementations of the (R)AN nodes are shown by FIG. 2.

In FIG. 2, the infrastructure equipment 61a and 61b are roadside ITS-Ss or roadside units (RSUs), while the (R)AN node 256 is a cellular base station. As used herein, the term "Road Side Unit" or "RSU" may refer to any transportation infrastructure entity implemented in a gNB/eNB/TRP/RAN node or in a stationary (or relatively stationary) UE, and the term "roadside ITS station" refers to an ITS sub-system in the context of roadside ITS equipment. The infrastructure equipment 61a, 61b is located at the roadside and provides transport-based services as well as network connectivity services to passing vehicles 64. Each of the infrastructure equipment 61a, 61b includes a computing system communicatively coupled with individual sensors 262 via interface circuitry and/or communication circuitry. In ITS-based embodiments, the interface circuitry or communication circuitry of the infrastructure equipment 61a and 61b may be a road equipment gateway, which is a gateway to specific road side equipment (e.g., the sensor arrays 62a, 62b, traffic lights, barriers, electronic signage, etc.). Through this gateway, the infrastructure equipment 61a, 61b can obtain sensor data as well as other data (e.g., traffic regulation data, electronic sign data, etc.). These embodiments may use a known communication standard for communication between the infrastructure equipment 61a, 61b and the road side equipment (e.g., DIASER or the like). The infrastructure equipment 61a, 61b may also include internal data storage circuitry to store coverage area 63 map geometry and related data, traffic statistics, media, as well as applications/software for sensing and controlling on-going vehicular and pedestrian traffic.

The interface circuitry couples the infrastructure equipment 61a, 61b with the individual sensors 262 in the sensor arrays 62a, 62b. Each sensor 262 covers a respective sector of the physical coverage area 63, and the individual sensors 262 may include different sensing capabilities, such as image/video, radar, LiDAR, ambient light, sound, and the like. Consecutive infrastructure equipment 61a, 61b may be deployed so that the sectors of the physical coverage area 63 overlap in certain embodiments, which may enable a continuous and substantially complete map of the coverage area to be generated. The interface circuitry collects, from the individual sensors 262, sensor data representative of the sectors covered by the individual sensors 262 and of the objects 64 moving within those sectors. The coverage area 63 is the area used for monitoring/tracking activity, and is bounded by the sensing range of the sensors 262 as well as by observable objects, such as buildings, roads, and other geographical features, that may limit or prevent movement of the objects 64. The sensor data can indicate or represent, among other things, the location, direction, speed, and velocity of the objects 64. The computing system of the RSE 61 uses the sensor data to provide real-time mapping services, which may include computing or generating a map of the coverage area 63 including representations of the dynamic objects 64 and their movements. Individual objects 64 may receive the dynamic map or the data used to generate it.

In some embodiments, the computing system of the infrastructure equipment 61a, 61b logically divides the observed coverage area 63 into individual sectors, such as two-dimensional (2D) grid cells or three-dimensional (3D) cubes. 2D cells may be used where the coverage area 63 is a 2D area or a single plane (e.g., a roadway), while 3D cubes may be used where the coverage area 63 contains multiple planes (e.g., overlapping highway intersections and bridges). In some embodiments, each grid cell has the same dimensions and is defined by absolute geolocation coordinates. The computing system of the infrastructure equipment 61a, 61b calculates a grid-based environment model that is overlaid on top of the observed coverage area. The grid-based environment model allows the computing system of the infrastructure equipment 61a, 61b to target specific objects 64 in particular grid cells in order to request data from those objects 64.
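The grid-based environment model above can be sketched as follows. This is a minimal illustration only: the class and field names, and the use of fixed-size cells addressed in degrees of latitude/longitude, are assumptions for the sketch, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GridModel:
    """Grid of equally sized 2D cells overlaid on the coverage area 63."""
    origin_lat: float     # south-west corner of the coverage area
    origin_lon: float
    cell_size_deg: float  # cell edge length (degrees, for simplicity)
    rows: int
    cols: int

    def cell_of(self, lat: float, lon: float):
        """Map an absolute geolocation to a (row, col) grid cell index."""
        row = int((lat - self.origin_lat) / self.cell_size_deg)
        col = int((lon - self.origin_lon) / self.cell_size_deg)
        if 0 <= row < self.rows and 0 <= col < self.cols:
            return (row, col)
        return None  # position is outside the observed coverage area

grid = GridModel(origin_lat=48.0, origin_lon=11.0,
                 cell_size_deg=0.001, rows=100, cols=100)
```

With a cell index per object, the computing system can address data requests to the objects located in a particular cell rather than to every object in the coverage area.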

In various embodiments, the real-time mapping services detect obstructions in the observed/sensed environment (e.g., the coverage area 63) and request sensor data from selected vehicles 64. In these embodiments, the infrastructure equipment 61a, 61b assigns a unique identifier (ID) to each object 64 during a handshake procedure (see, e.g., FIG. X2). The unique ID assigned during the initial handshake procedure is used to identify an object 64 at any given time, and the infrastructure equipment 61a, 61b may repeat the handshake procedure if an object 64 is temporarily occluded. Knowing the unique ID, direction, speed, and location of each object 64, the infrastructure equipment 61a, 61b can request sensor information from specific objects 64.

The communication circuitry of the infrastructure equipment 61 may use the 5.9 GHz DSRC band to provide the low-latency communications required for high-speed events, such as traffic warnings and crash avoidance. The communication circuitry of the infrastructure equipment 61 may also provide a WiFi hotspot in the 2.4 GHz band and/or connectivity to one or more cellular networks for uplink and downlink communications. The computing system, along with some or all of the communication circuitry, may be packaged in a weatherproof enclosure suitable for outdoor installation, and may include a network interface controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller and/or a backhaul network. The communication circuitry of the infrastructure equipment 61 may be used to broadcast V2X messages to the objects 64, including pedestrian UEs and other objects (not shown by FIG. 2). Broadcasting may be enabled by a suitable broadcast/multicast mechanism, such as the evolved multimedia broadcast multicast services (eMBMS) for LTE. These embodiments may also include access to various functionalities, such as a local gateway (LGW), a V2X application server (V2X-AS) for LTE, a broadcast multicast service center (BM-SC), and a multimedia broadcast multicast services gateway (MBMS-GW). In some embodiments, the infrastructure equipment 61 may also include a traffic offloading function (TOF), which allows the LGW, V2X-AS, BM-SC, MBMS-GW, and/or other computational tasks to be offloaded to a local MEC host 257.

In the illustrative embodiment, the (R)AN node 256 is a cellular base station. The RAN node 256 may be a next-generation (NG) RAN node that operates in an NR or 5G system (e.g., a next-generation NodeB (gNB)), an Evolved UMTS Terrestrial Radio Access Network (E-UTRAN) node, a legacy RAN node such as a UMTS Terrestrial Radio Access Network (UTRAN) or GERAN node, a WiMAX RAN node, or some other cellular base station. The RAN node 256 can be implemented as one or more dedicated physical devices, such as a macrocell base station or a low power (LP) base station providing femtocells, picocells, or the like. In other embodiments, the RAN node 256 may be implemented as one or more software entities running on server computers as part of a virtual network, which may be referred to as a cloud RAN (CRAN), a virtual baseband unit (BBU) pool, and/or the like. In still other embodiments, the RAN node 256 may represent individual gNB distributed units (DUs) that are connected to a gNB central unit via an F1 interface (not shown).

Still referring to FIG. 2, the MEC host 257 is a Multi-access Edge Computing (MEC) host (also known as a "Mobile Edge Computing" host), which is a physical or virtual computing system that hosts various MEC applications and provides MEC services to those applications. MEC allows application developers and content providers to access cloud computing capabilities and an IT service environment at the edge of the network. In other words, MEC is a network architecture that allows cloud computing capabilities and computing services to be performed at the edge of a network, so that applications run and related processing tasks are performed closer to network subscribers (also referred to as "edge users" or the like). This may reduce network congestion and improve application performance.

Where the MEC host 257 is implemented as one or more virtual machines (VMs) or the like, the physical devices that implement or operate the MEC host 257 may be referred to as edge servers. Edge servers may include or provide virtualization infrastructure that supplies virtualized computing environments, and the MEC applications may run as VMs on top of that infrastructure. In FIG. 2, the MEC host 257 is co-located with the RAN node 256. This implementation may be referred to as a small-cell cloud (SCC) when the RAN node 256 is a small-cell base station (e.g., a pico-cell or femto-cell) or a WiFi AP, or as a mobile micro cloud (MCC) when the RAN node 256 is a macro-cell base station (e.g., an eNB or gNB). The MEC host 257 may be deployed in a multitude of arrangements other than the one shown by FIG. 2. In a first example, the MEC host 257 may be operated by or co-located with an RNC, which may be the case for legacy network deployments such as 3G networks. In a second example, the MEC host 257 may be deployed at cell aggregation sites or at multi-RAT aggregation points, which can be located either within an enterprise or in public coverage areas. In a third example, the MEC host 257 may be deployed at the edge of a cellular core network. These implementations may be used in follow-me cloud (FMC) arrangements, in which cloud services running at distributed data centers follow the CA/AD vehicles 64 as they roam throughout the network.

In V2X scenarios, MEC can be used for advanced driving assistance applications, including real-time situational awareness, "see-through" sensor sharing, and high-definition local mapping, including the dynamic real-time mapping services discussed herein. The MEC host 257 hosts MEC applications running various workloads, such as machine learning, augmented reality, virtual reality, artificial intelligence, and data analytics, and may also provide privacy enforcement for data streams destined for the cloud. The MEC applications can share data either directly or through an MEC V2X application programming interface (API).

According to various embodiments, the MEC host 257 can be used for real-time mapping application computation offloading, wherein the MEC host 257 executes computationally intensive tasks while the vUE systems 201 of the vehicles 64 perform less computationally intensive functions. The communication technology 250 may transmit traffic data and sensor data from the vehicles 64 to the MEC host 257. In some embodiments, the MEC host 257 may then aggregate these data and distribute them to the vehicles 64 via the RAN node 256 and the infrastructure equipment 61a, 61b. The MEC host 257 is suitable for offloading compute-intensive tasks because it has greater performance capabilities than the vUE systems 201 of the vehicles 64. MEC can also be used to offload computation-hungry applications, intermediate data processing applications, and/or moderate data processing applications, or portions thereof. Computation-hungry applications have both high data processing and large data transfer requirements; examples include graphics/video processing and rendering, browsers, artificial/augmented reality, cloud-based gaming, three-dimensional (3D) gaming, and the like. Intermediate data processing applications have large data processing and/or high data transfer requirements that are less stringent than those of computation-hungry applications; examples include sensor data cleansing (e.g., pre-processing and normalization), video analysis, and value-added services (e.g., translation, log analytics, and the like). Moderate data processing applications have lower data processing and/or data transfer requirements than intermediate data processing applications; an example is antivirus software. Examples of compute-intensive tasks in the real-time mapping services discussed herein include tasks for sensor data collection, sensor data fusion, and map generation.

To perform computation offloading, a new instance of an application is started at the MEC host 257 in response to one or more requests from the vehicles 64. The MEC host 257 may be selected by a MEC system (e.g., included in the server(s) 260) to start the instance of the application based on a set of requirements (e.g., latency, processing resources, storage resources, network resources, location, network capability, security conditions/capabilities, etc.) that must be fulfilled for the application. The requests are then fulfilled by establishing connectivity between the vehicles 64 and the application instance at the MEC host 257 using the communication technology 250. When all users have disconnected from the particular instance of the application, the application instance is terminated.
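The application-instance lifecycle described above (create on first request, terminate when the last user disconnects) can be sketched with a simple reference-counting manager. The class and method names are illustrative assumptions, not part of the disclosure.

```python
class AppInstanceManager:
    """Tracks which users are connected to each application instance."""

    def __init__(self):
        self.instances = {}  # app name -> set of connected user IDs

    def connect(self, app, user):
        """Start an instance on the first request, then attach the user.

        Returns True if a new instance had to be created.
        """
        created = app not in self.instances
        self.instances.setdefault(app, set()).add(user)
        return created

    def disconnect(self, app, user):
        """Detach a user; terminate the instance when no users remain.

        Returns True if the instance was terminated.
        """
        users = self.instances.get(app)
        if users is None:
            return False
        users.discard(user)
        if not users:
            del self.instances[app]  # all users disconnected: terminate
            return True
        return False
```

In a real MEC system the "create" step would also involve host selection against the latency, resource, and security requirements mentioned above; that selection is omitted here.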

Still referring to FIG. 2, the network 258 comprises computers, network connections among the computers, and software routines that enable communication between the computers over the network connections. In this regard, the network 258 comprises one or more network elements, which may include one or more processors, communications systems (e.g., including network interface controllers, one or more transmitters/receivers connected to one or more antennas, etc.), and computer-readable media. Examples of such network elements include wireless access points (WAPs), home/business servers (with or without radio frequency communications circuitry), routers, switches, hubs, radio beacons, base stations, picocell base stations, and/or other like devices. Connection to the network 258 may be via a wired or a wireless connection using any of the communication protocols discussed infra. As used herein, a wired or wireless communication protocol may refer to a set of standardized rules or instructions implemented by a communication device/system to communicate with other devices, including instructions for packetizing/depacketizing data, modulating/demodulating signals, implementation of protocol stacks, and the like. More than one network may be involved in a communication session between the illustrated devices. Connection to the network 258 may require that the computers execute software routines that enable, for example, the layers of the OSI model of computer networking, or the equivalent in a wireless (cellular) phone network. The network 258 may be used to enable relatively long-range communication, such as between the one or more servers 260 and the one or more vehicles 64. The network 258 may represent the Internet, one or more cellular networks, local or wide area networks, proprietary and/or enterprise networks, Transmission Control Protocol (TCP)/Internet Protocol (IP)-based networks, or combinations thereof. The network 258 may be associated with a network operator who owns or controls the equipment necessary to provide network-related services, such as one or more base stations or access points and one or more servers for routing digital data or telephone calls (e.g., a core network or backbone network).

Still referring to FIG. 2, the one or more servers 260 comprise one or more physical and/or virtualized systems for providing functionality (or services) to one or more clients (e.g., the vehicles 64) over a network (e.g., the network 258). The server(s) 260 may include various computer devices with rack computing architecture, tower computing architecture, blade computing architecture, and/or the like. The server(s) 260 may represent a cluster of servers or a server farm. The server(s) 260 may also be connected to, or otherwise associated with, one or more data storage devices (not shown). Moreover, the server(s) 260 may include an operating system (OS) that provides executable program instructions for the general administration and operation of the individual server computers, and may include a computer-readable medium storing instructions that, when executed by the servers' processors, allow the servers to perform their intended functions. Suitable implementations of the OS and the general functionality of servers are readily implemented by persons having ordinary skill in the art.

Generally, the server(s) 260 offer applications or services that use IP/network resources. As examples, the server(s) 260 may provide traffic management services, cloud analytics, content streaming services, immersive gaming experiences, microblogging and/or social networking services, and the like. In addition, the server(s) 260 can provide services such as initiating and controlling software and/or firmware updates for individual components and applications of the vehicles 64. The server(s) 260 can also be configured to support communication services, such as Voice-over-Internet Protocol (VoIP) sessions, PTT sessions, group communication sessions, and the like, for the vehicles 64 over the network 258. In some embodiments, the server(s) 260 may be configured to operate as a central ITS-S that provides centralized ITS applications. In these embodiments, the central ITS-S may play the role of a traffic operator, road operator, and/or services provider, and may need to be connected to other backend systems through a network such as the network 258. Specific instances of the central ITS-S may contain groupings of applications or facilities entities to meet particular deployment and performance requirements.

Referring now to FIG. 3, which illustrates a component view of infrastructure equipment 61 that includes a real-time mapping service (RTMS) 300, according to various embodiments. The RTMS 300 may be included in the infrastructure equipment 61a and/or 61b discussed previously (hereinafter "infrastructure equipment 61"), the RAN node 256, or any other suitable device or system. In other embodiments, some or all aspects of the RTMS 300 may be hosted by a cloud computing service, which may interact with the infrastructure equipment 61, deployed RTMS appliances, or gateways. The RTMS 300 includes a main system controller 302, an object detector 305, a handshake subsystem 306, a messaging subsystem 307, a map processing subsystem 309, a mapping database (DB) 320, and an object DB 330. The map processing subsystem 309 includes a map segmenter 346, a data fuser 352, and a map generator 386. The infrastructure equipment 61 also includes a sensor interface subsystem 310, an inter-object communication subsystem 312, and a remote communication subsystem 314. In other embodiments, the RTMS 300 and/or the infrastructure equipment 61 may include more or fewer subsystems than those shown by FIG. 3.

The main system controller 302 is configured to manage the RTMS 300, which includes scheduling tasks for execution, managing memory/storage resource allocations, routing inputs/outputs between entities, and the like. The main system controller 302 may schedule tasks using a suitable scheduling algorithm, and/or may implement a suitable message passing scheme to allocate resources. The main system controller 302 may operate an operating system (OS) to allocate computing, memory/storage, and networking/signaling resources. The main system controller 302 may also be configured to facilitate intra-subsystem communication among the various subsystems of the RTMS 300 using suitable drivers, libraries, application programming interfaces (APIs), middleware, software connectors, software glue, and/or the like. The main system controller 302 is also configured to control communication of application-layer (or facilities-layer) information with the objects 64, such as sending/receiving requests/instructions and data (e.g., ACKs, position information, and sensor data), including functionality for encoding/decoding such messages.

Continuing with FIG. 3, the object detector 305 is configured to detect, monitor, and track object(s) 64 within the coverage area 63 based on the received sensor data. The object detector 305 can receive sensor data from the sensors 262 via the sensor interface subsystem 310, and in some embodiments may also receive sensor data collected by other infrastructure equipment via the remote communication subsystem 314. The object detector 305 can also be configured to receive sensor data from the observed objects 64 via the inter-object communication subsystem 312. As mentioned previously, the definition of the coverage area 63 may vary from embodiment to embodiment, is application dependent, and is limited by the capabilities of the sensors 262 and the sensor data itself. The object detector 305 tracks the objects 64 and determines vector information (e.g., travel direction, speed, velocity, acceleration, etc.) about the objects 64. The object detector 305 may use one or more object tracking and/or computer vision techniques to track the objects 64, such as a Kalman filter, a deep learning object detection technique (e.g., a fully convolutional neural network, a region proposal convolutional neural network (R-CNN), a single-shot multibox detector, the "you only look once" (YOLO) algorithm, etc.), and/or the like. Some of these techniques use identifiers (also referred to as "inherent IDs" or the like) to track the objects 64 detected in video or other sensor data. In these embodiments, the object detector 305 may store the inherent IDs in the object DB 330, and the inherent IDs may be linked with the unique IDs assigned to the detected objects 64, as discussed infra.
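The constant-velocity tracking performed by the object detector 305 can be illustrated with a fixed-gain (alpha-beta) filter, a common simplification of the Kalman filter mentioned above. The class name, gains, and time step are assumptions made for this sketch, not values from the disclosure.

```python
class AlphaBetaTracker:
    """Fixed-gain tracker estimating position and velocity of one object."""

    def __init__(self, pos, vel=0.0, dt=0.1, alpha=0.85, beta=0.5):
        self.pos, self.vel = pos, vel
        self.dt, self.alpha, self.beta = dt, alpha, beta

    def step(self, z):
        """Fold one noisy position measurement z into the state estimate."""
        # Predict with a constant-velocity motion model.
        pred = self.pos + self.dt * self.vel
        resid = z - pred
        # Correct position and velocity from the measurement residual.
        self.pos = pred + self.alpha * resid
        self.vel = self.vel + (self.beta / self.dt) * resid
        return self.pos, self.vel
```

A full Kalman filter would additionally maintain a covariance and compute the gains from the measurement and process noise; the fixed-gain form above is sufficient to show how speed and direction estimates emerge from position measurements alone.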

The object detector 305 may use additional mechanisms to aid in detecting and monitoring the objects 64. For example, the object detector 305 may detect and track the objects 64 using known received signal strength indicator (RSSI) calculations on one or more signals received from the observed objects 64, triangulation, and/or dead reckoning. In another example, the object detector 305 may use additional information to detect and track the objects 64, such as path loss measurements, signal delay time, signal-to-noise ratio, signal-to-noise plus interference ratio, and/or directional signaling measurements.

Continuing with FIG. 3, the sensor interface subsystem 310 communicatively couples the infrastructure equipment 61 and the RTMS 300 with the sensor array 62, and facilitates communication with the sensors 262 and actuators 322 in the sensor array 62. The sensor interface subsystem 310 can receive data from, and send commands to, the sensors 262 and actuators 322, including commands to calibrate and otherwise operate/control the sensors 262 and actuators 322. In some embodiments, the sensor interface subsystem 310 may be configured to enable inter-device communication according to one or more industry standards, such as cellular, WiFi, Ethernet, short-range communications, personal area network (PAN), Controller Area Network (CAN), or any other suitable standard or combination(s) thereof. In FIG. 3, the sensor array 62 includes the sensors 262 and the actuators 322. The sensor interface subsystem 310 includes various electrical/electronic elements to interconnect the infrastructure equipment 61 with the sensors 262 and actuators 322 in the sensor array 62, such as controllers, cables/wires, plugs and/or receptacles, etc., and may also include wireless communication circuitry to wirelessly communicate with the sensors 262 and actuators 322 of the sensor array 62. In ITS implementations, the sensor interface subsystem 310 may be a roadside ITS-S gateway or roadway data gateway, which connects the components of roadside systems, such as the sensors 262 and actuators 322 of the sensor array 62, with the RTMS 300.

The sensors 262 are devices that measure or detect state changes and/or motions within the coverage area 63 and provide sensor data representative of the detected/measured changes to the object detector 305 via the sensor interface subsystem 310 and the main system controller 302. For example, the sensors 262 may include one or more motion capture devices configured to detect a change in the position of an object 64 relative to its surroundings. The detection of motion or changes in motion, as well as speed and direction, may be based on reflection of visible light (or changes in opacity), ultraviolet light, sound, microwaves, near-IR waves, and/or other suitable electromagnetic energy. Depending on the type of sensor 262 (e.g., radar, LiDAR, visible or UV light cameras, thermographic (IR) cameras, etc.), a sensor 262 may include electronic elements such as transmitters, waveguides, duplexers, receivers (e.g., radar signal receivers, photodetectors, or the like), scanners, beam splitters, signal processors/DSPs, MEMS devices, energy sources (e.g., illumination sources, laser projectors, or IR projectors), antenna arrays including individual antenna elements, and/or other like elements. Other types of sensors 262 may be used in other embodiments.

The actuators 322 are devices responsible for moving and controlling a mechanism or system. In various embodiments, the actuators 322 may be used to change the operational state (e.g., on/off, zoom, focus, etc.), position, and/or orientation of the sensors 262. In some embodiments, the actuators 322 may be used to change the operational state of other roadside equipment, such as traffic lights, gates, digital signage, and the like. The actuators 322 receive control signals from the RTMS 300 via the sensor interface subsystem 310 and convert the energy of the signal source into an electrical and/or mechanical motion. The control signals may be relatively low-energy electric voltages or currents. In embodiments, the actuators 322 comprise electromechanical relays and/or solid state relays configured to switch electronic devices on/off and/or control motors.

Continuing with FIG. 3, the handshake subsystem 306 performs a handshake procedure with detected objects 64 via the inter-object communication subsystem 312, which may involve the use of one or more of the communication protocols discussed herein. During the handshake procedure, each detected object 64 is assigned a system-wide unique ID, and the detected objects 64 inform the infrastructure equipment 61 of their wireless communication, self-localization, and environment sensing capabilities. An object 64 may take part in multiple handshakes as it travels through the coverage area 63. A handshake may be initiated between an object 64 and the infrastructure equipment 61 once the object 64 passes into range of the sensors 262, which could occur, for example, when the object 64 enters an intersection. The handshake subsystem 306 may repeat the handshake procedure with selected objects 64 as those objects pass a predetermined number of sensor arrays 62a, 62b and/or infrastructure equipment 61a, 61b, which may be used to calibrate the sensors 262. The handshake procedure may also be repeated with an object 64 that was temporarily occluded and is later re-detected by the infrastructure equipment 61a, 61b.

In various embodiments, the infrastructure equipment 61 learns the capabilities of the tracked objects 64, which allows sensor data and position data to be requested only from appropriate objects 64. These capabilities may include, but are not limited to, geo-positioning capabilities indicating the type(s) of positioning system(s) implemented by the objects 64, wireless communication capabilities indicating the type(s) of communication circuitry used by the objects 64, and sensing capabilities indicating the types, ranges, and precision of the sensors of the objects 64. By combining location/position information obtained from the objects 64 with the tracking of the objects 64 using the sensor data of the individual sensors 262, the infrastructure equipment 61a, 61b can broadcast or multicast map information in a way that minimizes the latency of the wireless data exchange.

As mentioned previously, the handshake subsystem 306 assigns a unique ID to each object 64, and the unique ID is used for broadcasting or multicasting data (e.g., RTMS data) to the objects 64. The handshake subsystem 306 may generate each unique ID using, for example, a Universally Unique Identifier (UUID) algorithm or any other suitable mechanism. The unique ID does not need to be globally unique; instead, the unique ID may be locally unique (i.e., unique within the coverage area 63), and locally unique IDs may be reused after an object 64 leaves the coverage area 63. One example implementation uses 16-bit IDs, which allow for 65,536 unique values. Privacy concerns can be alleviated by randomly changing the unique IDs of the objects 64 during subsequent handshakes. After the unique ID has been assigned to an object 64, the handshake subsystem 306 saves the unique ID in the object DB 330. The object detector 305 then continuously tracks the object 64 with the help of the sensors 262 via the sensor interface subsystem 310, and provides updated position information to the map processing subsystem 309.
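The locally unique 16-bit ID scheme above can be sketched as a small allocator that draws random IDs from the 65,536-value space and returns them to the pool when an object leaves the coverage area. The class and method names are illustrative assumptions, not part of the disclosure.

```python
import random

class LocalIdAllocator:
    """Assigns 16-bit IDs that are unique only within the coverage area."""

    ID_SPACE = 2 ** 16  # 16-bit IDs allow for 65,536 unique values

    def __init__(self, rng=None):
        self._rng = rng or random.Random()
        self._in_use = set()

    def assign(self):
        """Pick a random ID not currently in use within the coverage area."""
        if len(self._in_use) >= self.ID_SPACE:
            raise RuntimeError("local ID space exhausted")
        while True:
            uid = self._rng.randrange(self.ID_SPACE)
            if uid not in self._in_use:
                self._in_use.add(uid)
                return uid

    def release(self, uid):
        """Return the ID to the pool after the object leaves the area."""
        self._in_use.discard(uid)
```

Random (rather than sequential) assignment also supports the privacy measure mentioned above: re-running `assign` during a later handshake gives the same object an unlinkable new ID.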

“The handshake subsystem 306 is also responsible for managing the storage of the unique IDs. In some embodiments, the object detector 305 employs an object tracking algorithm, such as a Kalman filter or a Gaussian mixture model, to detect objects 64 in video or other sensor data. In these embodiments, the object detector 305 stores the unique IDs assigned by the tracking algorithm in the object DB 330 as records 331 (also referred to as “ID records 331” or the like). The object DB 330 also stores object capabilities acquired during the handshake procedure in records 332 (also referred to as “capabilities records 332” or the like), and stores object data (e.g., velocity/speed, position, travel direction, size, etc.) obtained after the handshake procedure in records 333. The object DB 330 further stores records 334 that indicate the message types and the message content to be returned by the objects 64, which allows the main system controller 302 to send information requests to the objects 64 using single-bit flags or triggers. The object DB 330 stores the relations between the unique IDs and the message types in the records 334 as long as the objects 64 are within the communication range of the infrastructure equipment 61a and 61b.
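One possible shape for these per-object records (ID, capabilities, object data, and single-bit message triggers) is sketched below. All class and field names here are illustrative assumptions, not names used in the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class CapabilitiesRecord:            # acquired during the handshake (records 332)
    positioning: bool = False        # has a geo-positioning system
    comm_types: tuple = ()           # e.g. ("C-V2X", "ITS-G5"); assumed labels
    sensor_types: tuple = ()         # on-board sensor modalities

@dataclass
class ObjectDataRecord:              # updated continuously by the tracker (records 333)
    position: tuple = (0.0, 0.0)
    velocity: tuple = (0.0, 0.0)
    size: tuple = (0.0, 0.0)

@dataclass
class TrackedObject:
    unique_id: int                   # locally unique ID from the handshake (records 331)
    capabilities: CapabilitiesRecord = field(default_factory=CapabilitiesRecord)
    data: ObjectDataRecord = field(default_factory=ObjectDataRecord)
    # single-bit message triggers the main controller can set per object (records 334)
    send_ack: bool = False
    send_sensor: bool = False
    send_position: bool = False

class ObjectDB:
    """Keyed by the locally unique ID assigned during the handshake."""
    def __init__(self):
        self._records = {}

    def add(self, obj: TrackedObject):
        self._records[obj.unique_id] = obj

    def get(self, unique_id):
        return self._records.get(unique_id)
```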

“In certain embodiments, the message content and/or message type may be represented in an object list that contains one record for each detected object 64 and its attributes. An object list allows messages to specific objects 64 to be embedded in the normal payload, since the recipients of the information are the same objects as the data objects within the message. Table 1 infra shows an example object list.

“Continuing with FIG. 3, the map processing subsystem 309 includes a data fuser 352, a map generator 386, and a map segmenter 346. The data fuser 352 includes technology to combine sensor data from the sensors 262 and from sensors mounted in/on the objects 64, and may include technology for sensor detection and/or data gathering, as well as for combining/processing the data in preparation for map generation. The map generator 386 includes technology to generate an environmental map 324 covering the coverage area 63 based on the combined sensor data from the data fuser 352, and to control storage of the map 324 in the mapping DB 320. The map segmenter 346 includes technology to split the environmental map 324 generated by the map generator 386 into multiple map segments 325. The map segmenter 346 may be configured to annotate two or more map segments 325 with information for each object 64 to create individualized environmental maps, and may assign a unique identifier to each of the two or more map segments 325 corresponding to a particular location on the environmental map 324. The map segmenter 346 may be further configured to group the one or more objects 64 into the two or more map segments 325 based on the respective locations of the one or more objects 64 and the locations of the two or more map segments 325 in the environmental map 324. In ITS implementations, the mapping DB 320 may correspond to an LDM repository.
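The segmentation and grouping steps above can be sketched as follows, assuming a rectangular map cut into equal rectangular segments (as in the Table 1 example later in the text); the function names and segment-ID scheme are illustrative assumptions.

```python
def segment_map(map_width, map_height, seg_width, seg_height):
    """Split a rectangular environmental map into equally sized segments.
    Returns a dict mapping segment ID -> (x0, y0, x1, y1) bounds in metres."""
    segments = {}
    seg_id = 0
    y = 0.0
    while y < map_height:
        x = 0.0
        while x < map_width:
            segments[seg_id] = (x, y,
                                min(x + seg_width, map_width),
                                min(y + seg_height, map_height))
            seg_id += 1
            x += seg_width
        y += seg_height
    return segments

def group_objects(segments, objects):
    """Assign each object (id -> (x, y) position) to the segment containing it."""
    grouping = {seg_id: [] for seg_id in segments}
    for obj_id, (x, y) in objects.items():
        for seg_id, (x0, y0, x1, y1) in segments.items():
            if x0 <= x < x1 and y0 <= y < y1:
                grouping[seg_id].append(obj_id)
                break
    return grouping
```

For complex interchanges or inner-city areas, the text notes that differently sized and shaped segments may be used instead; this sketch covers only the equal-size case.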

“Some embodiments provide infrastructure-aided fog/edge dynamic mapping for autonomous driving or manufacturing (e.g., automated warehouses). For example, some embodiments may provide a platform that serves individualized dynamic maps to CA/AD and AV vehicles 64, where “autonomous” may refer to fully autonomous or partially autonomous operation. High-reliability decision-making systems may require real-time mapping of a dynamic environment. For assisted/autonomous driving, for example, in-vehicle processing alone may not be sufficient to create a complete and accurate real-time object detection and tracking map of the surrounding environment. Some embodiments therefore provide infrastructure (e.g., a roadside system) to enhance in-vehicle processing for better map generation and object tracking.

“Some embodiments may provide unique labeling of objects 64 by the infrastructure sensors 262, map segment tagging, and/or remote updates, combined with a low-overhead handshake protocol that the handshake subsystem 306 facilitates between the infrastructure equipment 61 and the objects 64. Certain embodiments may provide an enhanced or optimal portion and level of detail of each object's 64 high-resolution map, ensuring full coverage without adding processing load. Relevant performance indicators, such as in the context of CA/AD vehicles 64, may include precision, timeliness, and adequate coverage (e.g., the entire width of the road or production line). In other systems, these performance indicators may not be optimized or improved. For example, a CA/AD vehicle 64 might use onboard sensor data and attempt to integrate this data into an environment model based on high-resolution maps, but some data may not be available onboard due to limitations in sensor technology, road bends, infrastructure obstructions, and/or weather conditions. Aerodynamics and other design constraints may limit the physical space available for mounting sensors, and each additional sensor or compute resource makes a vehicle heavier, more expensive, and more energy-consuming. Embodiments may enhance the ability to generate an individual map for a CA/AD vehicle 64 using broadcast data from a collaborative infrastructure. The collaborative infrastructure may be built on a high-resolution map and may also use fixed sensors to track the road, so that all vehicles have access to a globally consistent environment model. More compute power can be shifted to the infrastructure equipment 61 (or MEC host 257) with roadside sensors 262 in order to reduce the need for complex sensors and/or compute capabilities in the CA/AD vehicles 64.

“Continuing with FIG. 3, the inter-object communication subsystem 312 facilitates communication with the observed objects 64: it receives data from the observed objects 64 and broadcasts or multicasts messages to the observed objects 64 in order to perform handshakes with them and/or request data from them. The inter-object communication subsystem 312 supports communication between the infrastructure equipment 61 and the observed objects 64 in accordance with one or more industry standards, such as cellular specifications provided by the Third Generation Partnership Project (3GPP) New Radio (NR) or Long Term Evolution (LTE) standards, a wireless local area network (WLAN) standard such as WiFi specified by a suitable IEEE 802.11 standard, a short-range communication standard such as Bluetooth/Bluetooth Low Energy, ZigBee, or Z-Wave, or any other suitable standard or combination thereof. With the help of the inter-object communication subsystem 312, the object detector 305 or other subsystems may be further configured to scan and determine whether the observed objects 64 support a specific inter-device communication standard. The scan can be performed during a listen-before-talk (LBT) procedure used to identify an unoccupied channel. In C-V2X implementations, the scan/discovery could include, for instance, requesting V2X (or ProSe) capabilities or permissions directly from the objects 64, or from a V2X control function (or ProSe function) located within a core network. With the help of the inter-object communication subsystem 312, the main system controller 302, the object detector 305, or other subsystems may be configured to authenticate the observed objects 64 to confirm that the objects 64 have the appropriate communication and autonomic capabilities. After authentication of the objects 64, the main system controller 302 can control the inter-object communication subsystem 312 to exchange authentication information, which may include identification and/or security data. In some embodiments, this information is exchanged securely according to a mutually supported communication protocol, and the authentication information may be encrypted before being sent to the objects 64, or the like.

“According to various embodiments, the messaging subsystem 307, with the support of the inter-object communication subsystem 312, broadcasts/multicasts messages to request information from the objects 64; these may be called “send information requests.” In these embodiments, the messaging subsystem 307 encodes messages for broadcast/multicast and decodes messages received from individual objects 64. In some embodiments, the messaging subsystem 307 generates an object list indicating all observed objects 64 in the coverage area 63, which is then broadcast/multicast to all observed objects 64 in the coverage area 63. The object list is sent to the recipients on a regular basis, at a rate suited to their navigation needs. Each object record contains a set of data elements (DEs) necessary for reliable navigation decisions, including the assigned unique ID, position (e.g., GNSS geolocation), direction and speed, vehicle size, vehicle type, and/or the like. In various embodiments, the object list's set of attributes includes the send information request attributes in addition to these DEs. In such embodiments, each recipient object 64 is included as a data record, and together the records form a map of all moving objects 64 within the coverage area 63. The send information requests for individual objects 64 can be embedded into the existing attributes. The send information requests can be Boolean attributes, such as a Send ACK attribute that instructs the object 64 to reply with an acknowledgement message, a Send Sensor attribute that instructs the object 64 to reply with its own sensor data, and a Send Position attribute that instructs the object 64 to reply with its own position data (e.g., GNSS or GPS coordinates). When the object list is sent to the observed objects 64, the objects 64 may search the object list for their corresponding records using the unique IDs assigned to them during the initial handshake procedure. These embodiments may save computational resources because the attributes and/or DEs of the object list, including the send information requests, can be structured identically for each object 64. Additionally, using broadcast/multicast technologies may allow the infrastructure equipment 61 to reduce communication/signaling overhead. Any suitable markup language or schema language may be used to create the object list. In some embodiments, the object list comprises documents or data structures that can be interpreted by the subsystems of the RTMS 300, such as XML (or any variant thereof), JSON (or any variant thereof), IFTTT (If This Then That), or the like. Table 1 shows an example object list in JSON human-readable format, including pseudo-comments that indicate example data sizes for each field.
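Since Table 1 itself is not reproduced here, the following is a hand-written sketch of what such a JSON object list might look like: every field name and value is an illustrative assumption in the spirit of the description (map section, per-object record with unique ID, synchronized timestamp, and the three send-information request flags), not the actual Table 1 layout.

```python
import json

# Hypothetical object-list message; field names are assumptions.
object_list = {
    "map_section": {
        "segment_id": 7,                # which map segment 325 this list covers
        "segment_size": [1000, 1000],   # metres; bounds for relative positions
    },
    "objects": [
        {
            "id": 40917,            # locally unique 16-bit ID from the handshake
            "timestamp": 1234567,   # synchronized by the infrastructure equipment
            "send_ack": False,      # single-bit send-information request flags
            "send_sensor": True,    # ask this object for its sensor data
            "send_position": False,
            "type": 3,              # index into a vehicle-size lookup table
            "position": [412, 88],  # relative to the segment bounds
            "velocity_north": 120,  # scalar, 0.1 m/s granularity
            "velocity_east": -40,
        },
    ],
}

encoded = json.dumps(object_list)   # broadcast/multicast payload
decoded = json.loads(encoded)
```

Because every record has the same attribute set, recipients can parse the list with a single schema and locate their own record by ID, which is the computational saving the text describes.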

“Table 1 contains a “Map Section,” which indicates the individual map segments 325 produced by the map segmenter 346. The overall environmental map 324 in this example is rectangular; if the map 324 includes curved roads, the rectangle is made large enough to accommodate the curves. The overall environmental map 324 is divided into map segments 325 of equal size, cut in such a manner that the sequence of map segments 325 follows the main driving direction through each map segment 325. Other representations of the map segments 325 may also be used, such as map segments of different sizes and shapes for complex interchanges or inner-city areas.

“The table 1 example also contains a “single object section,” which contains a list of attributes corresponding to each object 64. This single object section includes, inter alia, an ID DE, a timestamp DE, a “SendAck” DE, a “SendSensor” DE, and a “SendPosition” DE. The timestamp DE contains a timestamp that may be equalized by or synchronized with the infrastructure equipment 61 so that each object 64 does not need its own timestamp. The “SendAck” DE is a one-bit DE that, when set to “TRUE” or a value of “1”, requests that the object 64 send back an ACK message. The “SendSensor” DE is a one-bit DE that, when set to “TRUE” or a value of “1”, requests that the object 64 send additional information about its sensor values and/or captured data. The “SendPosition” DE is a one-bit DE that, when set to “TRUE” or a value of “1”, requests that the object 64 send back its position data, such as data obtained using onboard positioning circuitry or other methods. The object 64 identified by the ID DE in the object list searches the message for its unique ID and checks the status of the send information request attributes. If any send information request attribute is set to true, the object 64 generates and sends the corresponding reply to the infrastructure equipment 61; if a send information request attribute is set to false, the object 64 stops transmitting the relevant data to the infrastructure equipment 61. The messaging subsystem 307 calculates in advance the time at which the relevant send information requests should be transmitted, corresponding to the particular geographic location where the object 64 is predicted to be.
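The recipient-side behavior just described (find own record by unique ID, answer each flag that is set, stop sending when a flag is cleared) might be sketched as follows; the function signature and field names are illustrative assumptions.

```python
def process_object_list(object_list, my_id, read_sensors, read_position):
    """Return the reply messages this object should send for one received
    object list. `read_sensors` / `read_position` are callbacks standing in
    for the object's own sensor and positioning circuitry (assumed names)."""
    replies = []
    for record in object_list["objects"]:
        if record["id"] != my_id:
            continue  # records for other objects are ignored
        if record.get("send_ack"):
            replies.append({"id": my_id, "ack": True})
        if record.get("send_sensor"):
            replies.append({"id": my_id, "sensor_data": read_sensors()})
        if record.get("send_position"):
            replies.append({"id": my_id, "position": read_position()})
        break  # a cleared (false) flag simply produces no reply of that kind
    return replies
```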

“The “single object section” of this example message also contains a type DE, a position DF, and velocity DFs. These DEs/DFs can be used by the objects 64 to place the subject object 64 on their locally generated maps. In this example, the type DE indicates a vehicle type and can also be used as an index into a lookup table of vehicle sizes. This example assumes that vehicle sizes can be divided into 256 vehicle types, which results in 8 bits of data; trailers are treated as separate objects 64.

“The position DF contains Cartesian (X, Y) coordinates relative to the boundary of the map segment 325 (e.g., the “points” DF and “SegmentSize” DF in table 1). Other embodiments allow GNSS latitude and longitude values to be included in the position DF. Because the transmitted map segment 325 has been divided into polygons, the position values can be given in relative form according to the size and location of the bounding segment, which allows a significant reduction in data size. Using Cartesian coordinates reduces the message size and signaling overhead: for a maximum map segment 325 size of 1 km (kilometer) at centimeter resolution, at most 100000 values must be stored per axis, so the Cartesian coordinates can be stored in two 17-bit DEs, one for the X axis and one for the Y axis of the Cartesian coordinate system. The sigma DE, which represents the variance of the latitude and longitude values, can be stored in a DF of the same size, and in some embodiments the same sigma may be used for both the x and y axes. This results in 51 bits per position, as opposed to 192 bits in a conventional message.
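The arithmetic above can be checked with a small sketch: centimetre resolution over at most 1 km needs 100000 values per axis, which fits in 17 bits (2^17 = 131072), and x, y, plus one shared sigma gives 3 × 17 = 51 bits. The packing function below is an illustrative assumption about layout, not the actual wire format.

```python
AXIS_BITS = 17
RESOLUTION_M = 0.01  # one centimetre

def encode_axis(metres):
    """Quantize a relative coordinate (0..1000 m) to a 17-bit value."""
    value = round(metres / RESOLUTION_M)
    assert 0 <= value < (1 << AXIS_BITS), "coordinate outside segment bounds"
    return value

def decode_axis(value):
    return value * RESOLUTION_M

def encode_position(x_m, y_m, sigma_m):
    """Pack x, y, and a shared sigma into one 51-bit integer (assumed layout)."""
    return (encode_axis(x_m) << (2 * AXIS_BITS)) \
         | (encode_axis(y_m) << AXIS_BITS) \
         | encode_axis(sigma_m)
```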

“Direction is determined by the velocity DFs (e.g., the “velocity_north” DF and the “velocity_east” DF in the example of table 1). This example codes speed and direction together using two scalars, one for the X axis and one for the Y axis, from which speed and direction can be calculated by vector addition: the speed is given by v = √(vx² + vy²), and the direction is given by the orientation of the vector (vx, vy). Using a granularity of 0.1 m/s and a maximum speed of 100 m/s, at most 1000 values must be stored in both the x and y directions. Each value can be stored in a 10-bit DE, which results in 20 bits for speed and direction.
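The 10-bit velocity coding and the vector-addition recovery of speed and heading can be sketched as below. Sign handling for the two scalars is not detailed in the text, so encoding the magnitude only is an assumption of this sketch.

```python
import math

GRANULARITY = 0.1   # m/s per step
MAX_STEPS = 1000    # 100 m/s / 0.1 m/s fits in a 10-bit DE (2**10 = 1024)

def encode_velocity_axis(v_mps):
    """Quantize one axis scalar to 0.1 m/s steps (magnitude only; assumed)."""
    steps = round(abs(v_mps) / GRANULARITY)
    assert steps <= MAX_STEPS, "exceeds the 100 m/s maximum"
    return steps

def speed_and_heading(v_north, v_east):
    """Recover speed v = sqrt(vx^2 + vy^2) and a heading angle in degrees
    (0 deg = north, clockwise) from the two axis scalars."""
    speed = math.hypot(v_north, v_east)
    heading = math.degrees(math.atan2(v_east, v_north)) % 360.0
    return speed, heading
```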

“The remote communication subsystem 314, as mentioned earlier, facilitates communication with one or more remote servers 360 and/or other infrastructure equipment 361. The remote servers 360 may be identical or similar to the server(s) 260 of FIG. 2, and may include one or more servers associated with a mobile network operator platform, a service provider platform, a cloud computing service, a traffic management service, an energy management service, a law enforcement or government agency, an environmental data service, etc. The other infrastructure equipment 361 may be identical or similar to the infrastructure equipment 61 (with associated sensor arrays 62, and so forth) but deployed in a different geographic location than the infrastructure equipment 61. The remote communication subsystem 314 supports communication between the infrastructure equipment 61, the servers 360, and the other infrastructure equipment 361 according to one or more industry standards, for example, WiMAX standards or 3GPP NR specifications, a LAN or WLAN standard such as WiFi specified by a suitable IEEE 802.11 standard, or Ethernet specified by an IEEE 802.3 standard.

“According to various embodiments, the subsystems of the infrastructure equipment 61 or RTMS 300 fill in perception gaps in the real-time dynamic maps created by the map processing subsystem 309 as follows. The handshake subsystem 306 performs a handshake procedure (e.g., handshake procedure 500 of FIG. 5) with objects 64 entering the coverage area 63. The object detector 305 continuously tracks all moving objects 64 within the coverage area 63. The object detector 305 and/or the map processing subsystem 309 select individual objects 64 from which to request sensor data to augment the real-time dynamic maps. The messaging subsystem 307 generates/encodes request messages that are broadcast or multicast to the selected objects 64, and the messaging subsystem 307 and/or the map processing subsystem 309 process the response messages received from the selected objects 64.

“In various embodiments, the RTMS 300 identifies subsections or regions of the coverage area 63 for which additional information is needed, and the object detector 305 and/or another subsystem of the RTMS 300 selects individual objects 64 from the database of tracked objects 64 (e.g., the object DB 330). These embodiments enable the infrastructure equipment 61 to improve its environmental perception by filling or supplementing perception gaps, for example by augmenting the dynamic maps with sensor readings from the tracked objects 64. The infrastructure equipment 61 can also verify that messages sent using multicast or broadcast protocols are actually received by the objects 64, a feature lacking in many current communication protocols. To select objects 64 for send information requests, the map processing subsystem 309 selects one or more regions or sections of the coverage area 63 (e.g., one or more logical grid cells of the environmental model) that require additional information. The selection of sections or regions can differ depending on whether the goal is filling perception gaps or receiving acknowledgement messages. For filling perception gaps, the map processing subsystem 309 uses known mechanisms to detect obstructions of the fixed sensors 262 and other causes that reduce the completeness of the environmental map 324, and then selects the sections or regions (e.g., grid cells) that correspond to the occluded areas. For verifying the reception of broadcast/multicast messages, the main system controller 302, the object detector 305, or some other subsystem of the RTMS 300 selects one or more test sections or regions (e.g., grid cells) at random, using a round-robin scheduling mechanism, based on a user/operator selection, or based on the observed traffic situation. For example, simulation results can be used to determine traffic densities or other conditions that could affect wireless communication coverage.

After selecting the sections, regions, or grid cells, the object detector 305 consults the database of tracked objects 64 to create a first list of objects 64 that are likely (or predicted) to pass through the selected section within a given time period. The object detector 305 then generates a second list of those objects 64 that possess the technical capabilities necessary to provide the requested information. The object detector 305 then calculates a third list of timestamps based on the speed of each object 64 in the second list and any known communication delays; these timestamps indicate when each object 64 should receive a send information request. The send information request can be repeated as long as the object 64 on the second list is within the designated section or region.
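The three-step selection above (predict who passes through the region, filter by capability, schedule requests early enough to absorb the communication delay) might look like the sketch below. The data layout, linear motion prediction, and one-second time stepping are all illustrative assumptions.

```python
def select_candidates(tracked, region, horizon_s, comm_delay_s, now_s):
    """Return [(object_id, send_timestamp), ...] for a perception-gap region.

    tracked: id -> {"pos": (x, y), "vel": (vx, vy), "has_sensors": bool}
    region:  (x0, y0, x1, y1) bounds of the selected section/grid cell
    """
    schedule = []
    for obj_id, obj in tracked.items():
        x, y = obj["pos"]
        vx, vy = obj["vel"]
        # Step 1: linear prediction -- does the object enter the region in time?
        entry = None
        for t in range(int(horizon_s) + 1):
            px, py = x + vx * t, y + vy * t
            x0, y0, x1, y1 = region
            if x0 <= px <= x1 and y0 <= py <= y1:
                entry = t
                break
        if entry is None:
            continue
        # Step 2: keep only objects able to deliver the requested data.
        if not obj["has_sensors"]:
            continue
        # Step 3: request early enough to compensate the communication delay.
        schedule.append((obj_id, max(now_s, now_s + entry - comm_delay_s)))
    return schedule
```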

According to various embodiments, the RTMS 300 may use the data in the response messages from the tracked objects 64 in multiple ways, for example, when the response messages contain sensor data or when they include receipt ACKs. The RTMS 300 can establish trust in the object-provided sensor data by combining that sensor data with the data from the fixed sensors 262 to create a new result set. Where multiple objects respond, their responses may be used to establish a temporal and spatial overlap between observations of the sensed section or region, which increases trust in the information provided by the objects 64; objects 64 that provide incorrect information may be excluded. The overlap check results are then merged with the observations based on the sensor data from the sensors 262. A trust mechanism may be used that accepts received object sensor data only for areas not already covered: if an object 64 is detected in the coverage area 63 using sensor data from the sensors 262, sensor data received from an object 64 is not used to determine that object's position, and data fusion is performed only for additional objects that were not detected by the infrastructure equipment 61. In other words, replies from the objects 64 can increase the total number of detected objects 64, but never reduce it. The tracked objects 64 then receive the new result in the form of a map of all dynamic objects.
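The trust rule just described reduces to a simple fusion policy: the fixed-sensor view always wins, and object replies may only add objects the infrastructure did not detect. The sketch below assumes a flat id -> position layout for illustration.

```python
def fuse_detections(infrastructure_objs, object_reports):
    """Fuse object-provided detections into the infrastructure view.

    infrastructure_objs: id -> position, from the fixed sensors 262
    object_reports:      id -> position, reported by tracked objects 64
    The infrastructure view is authoritative; reports can only fill gaps.
    """
    fused = dict(infrastructure_objs)      # fixed-sensor detections always win
    for obj_id, pos in object_reports.items():
        if obj_id not in fused:
            fused[obj_id] = pos            # add an object from a perception gap
    return fused
```

Note how a report that contradicts the infrastructure's own detection (object 1 below) is discarded, while a previously unseen object (object 2) is added: the total object count can grow but never shrink.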

If the reply messages include receipt ACKs, or if expected ACK messages are absent, the infrastructure equipment 61 can initiate measures to improve reception in the observed coverage area 63. This may include changing or adjusting network configuration parameters, such as antenna parameters (e.g., antenna tilt, azimuth, etc.), downlink transmit power, the on/off state of the infrastructure equipment 61, handover-related parameters, and/or other parameters. One example of a network configuration adjustment is a coverage and capacity optimization (CCO) function that changes the signal strength of a cell or a nearby interfering cell by changing antenna tilt or power settings to improve radio link quality. Parameters of the same infrastructure equipment 61, of different equipment 61, or of network elements that have an impact on one another may also need to be considered. Other examples of network configuration adjustments include activating multipath communication, beam steering, message repetition, or a combination of various communication technologies. The network configuration parameters for one wireless communication cell may also include or involve the parameters of neighboring cells; TX power, antenna azimuth, and antenna tilt are all examples of such associated parameters. The details of coverage issues and mitigation options can vary from one embodiment to the next, depending on the wireless technology being used. The infrastructure equipment 61 can request multiple receipt ACKs from multiple objects 64 before adjusting network parameters, since there may be multiple reasons why a single ACK is not received. The infrastructure equipment 61 may then alert another subsystem (such as the inter-object communication subsystem 312, the remote communication subsystem 314, or the RAN node 262 of FIG. 2) regarding issues with the wireless coverage.

“In ITS-based implementations, some or all of the components depicted in FIG. 3 may follow the ITS communication protocol (ITSC), which is based on the OSI model for layered communication protocols, extended for ITS applications. The ITSC includes, among other things, an access layer that corresponds to OSI layers 1 and 2, a networking & transport (N&T) layer that corresponds to OSI layers 3 and 4, a facilities layer that corresponds to OSI layers 5 and 6 and at least some functionality of OSI layer 7, and an application layer. Each layer is interconnected via respective interfaces, service access points (SAPs), and/or APIs. In these implementations, the RTMS 300, or parts thereof, may be part of the facilities layer, while aspects of the sensor interface subsystem 310, the inter-object communication subsystem 312, and the remote communication subsystem 314 may be part of the N&T and access layers.

“The facilities layer comprises middleware, software connectors, and software glue, and may include multiple facilities. The facilities layer provides functionality of the OSI application layer, the OSI presentation layer (e.g., ASN.1 encoding and decoding, and encryption), and the OSI session layer (e.g., inter-host communication). A facility is a component that provides functions, information, and/or services to the applications in the application layer and exchanges data with lower layers for communication with other ITS-Ss. Table 2 lists the common facilities, and table 3 lists the domain facilities.

“TABLE 2
Common Facilities

Classification | Facility name | Description
Management | Traffic class management | Manage assignment of traffic class value for the higher layer messages.
Management | ID management | Manage ITS-S identifiers used by the application and the facilities layer.
Management | Application ID (AID) management | Manage the AID used by the application and the facilities layer.
Management | Security access | Deal with the data exchanged between the application and facilities layer with the security entity.
Application Support | HMI support | Support the data exchanges between the applications and Human Machine Interface (HMI) devices.
Application Support | Time service | Provide time information and time synchronization service within the ITS-S. This may include providing/obtaining the actual time and time stamping of data.
Application Support | Application/facilities status management | Manage and monitor the functioning of active applications and facilities within the ITS-S and the configuration.
Application Support | SAM processing | Support the service management of the management layer for the transmission and receiving of the service announcement message (SAM).
Information Support | Station type/capabilities | Manage the ITS-S type and capabilities information.
Information Support | Positioning service | Calculate the real-time ITS-S position and provide the information to the facilities and applications layers. The ITS-S position may be the geographical position (longitude, latitude, altitude) of the ITS-S.
Information Support | Location referencing | Calculate the location referencing information and provide the location referencing data to the applications/facilities layer.
Information Support | Common data dictionary | Data dictionary for messages.
Information Support | Data presentation | Message encoding/decoding support according to the formal language being used (e.g., ASN.1); supports the basic functionality of the OSI presentation layer.
Communication Support | Addressing mode | Select addressing mode for message transmission.
Communication Support | Congestion control | Facilities layer decentralized congestion control functionalities.”

“TABLE 3
Domain Facilities

Classification | Facility name | Description
Application Support | DEN basic service | Support the protocol processing of the Decentralized Environmental Notification Message (DENM).
Application Support | CA basic service | Support the protocol processing of the Cooperative Awareness Message (CAM).
Application Support | EFCD | Aggregation of CAM/DENM data at the road side ITS-S and provision to the central ITS-S.
Application Support | Billing and payment | Provide service access to billing and payment service provider.
Application Support | SPAT basic service | Support the protocol processing of the Signal Phase and Timing (SPAT) Message.
Application Support | TOPO basic service | Support the protocol processing of the Road Topology (TOPO) Message.
Application Support | IVS basic service | Support the protocol processing of the In Vehicle Signage (IVS) Message.
Application Support | Community service user management | Manage the user information of a service community.
Information Support | Local dynamic map | Local Dynamic Map database and management of the database.
Information Support | RSU management and communication | Manage the RSUs from the central ITS-S and communication between the central ITS-S and road side ITS-S.
Information Support | Map service | Provide map matching functionality.
Communication Support | Session support | Support session establishment, maintenance and closure.
Communication Support | Web service support | Higher layer protocol for web connection, SOA application protocol support.
Communication Support | Messaging support | Manage ITS services messages based on message priority and client services/use case requirements.
Communication Support | E2E Geocasting | Deal with the disseminating of information to ITS vehicular and personal ITS stations based on their presence in a specified geographical area.”

“In an example ITS implementation, the messaging subsystem 307 and/or the inter-object communication subsystem 312 may provide the DEN basic service (DEN-BS) and/or the CA basic service (CA-BS), the mapping DB 320 may provide the LDM facility, and the map processing subsystem 309 may be an ITS application residing in the application layer, classified in this example as a road safety and/or traffic efficiency application. Aspects of the handshake subsystem 306 and/or the object DB 330 may provide the station type/capabilities facility in this example ITS implementation.

The CA-BS includes the following entities for sending and receiving CAMs: an encode CAM entity, a decode CAM entity, a CAM transmission management entity, and a CAM reception management entity. The DEN-BS, used to send and receive DENMs, includes an encode DENM entity, a decode DENM entity, a DENM transmission management entity, and a DENM reception management entity. For the originating ITS-S, the protocol operation includes activation and termination of the service, determination of the CAM/DENM generation frequency, and triggering the generation of CAMs/DENMs. For the receiving ITS-S, the protocol operation is implemented by the CAM/DENM reception management entity and includes activating the decode CAM/DENM entity upon reception of CAMs/DENMs, provisioning the received data to the LDM or facilities of the receiving ITS-S, discarding invalid CAMs/DENMs, and verifying the information of the received CAMs/DENMs. The DENM keep-alive forwarding (KAF) entity stores a received DENM during its validity duration and forwards it when applicable; the usage conditions of the DENM KAF may be defined by ITS application requirements or by cross-layer functionality of an ITSC management entity. The encode CAM/DENM entity constructs (encodes) CAMs/DENMs to include various data, and the object list may include a list of DEs and/or data frames (DFs) included in the ITS data dictionary according to ETSI technical specification TS 102 894-2 version 1.3.1 (2018-08), titled “Intelligent Transport Systems (ITS); Users and applications requirements; Part 2: Applications and facilities layer common data dictionary.”

“In embodiments, the encode CAM/DENM entity constructs (encodes) CAMs/DENMs that include various data, such as the object list. For example, the encode CAM/DENM entity may generate a CAM/DENM that includes a plurality of records, where each record corresponds to one of the objects 64 detected by the sensors 262. Each record may contain one or more DEs and/or one or more DFs, where a DE is a data type that contains a single data element (or datum), while a DF is a data type that contains more than one DE in a predefined order. DEs and/or DFs may be used to build facilities layer or application layer messages, such as CAMs, DENMs, and the like. The plurality of DEs may include an ACK request DE, a sensor request DE, and a position request DE. The encode CAM/DENM entity inserts a first value into the sensor request DE of the record(s) for at least one object 64, the first value indicating that the at least one object 64 is to report sensor data captured locally by that object 64 (e.g., “TRUE” or “1”); inserts a second value into the sensor request DE of the records for other objects, the second value indicating that the other objects are not to report their sensor data (e.g., “FALSE” or “0”); and inserts a third value into the position request DE of the records for one or more detected objects that are to report their current location (e.g., “TRUE” or “1”). The decode CAM/DENM entity decodes received CAMs/DENMs. According to various embodiments, the decode CAM/DENM entity in a roadside ITS-S decodes the CAMs/DENMs received from the objects 64, allowing the ACKs, position data, and/or sensor data to be obtained from the objects 64. In these embodiments, the sensor data obtained from the CAMs/DENMs may represent a physical area that corresponds to an occlusion.

“As mentioned previously, aspects of the object DB 330 and/or the handshake subsystem 306 may be accessed via the ITS station type/capabilities facility. The ITS station type/capabilities facility provides information describing a profile of an ITS-S, which can be used in the applications and facilities layers. This profile indicates the type of ITS-S (e.g., vehicle ITS-S, roadside ITS-S, personal ITS-S, or central ITS-S), a role of the ITS-S (e.g., operational status of an emergency vehicle or other prioritized vehicle, status of a vehicle transporting dangerous goods, etc.), and detection capabilities (e.g., the ITS-S's positioning capabilities, sensing capabilities, etc.). In these implementations, the object DB 330 stores the station type and capabilities information for each object 64, along with the unique ID assigned to each object 64 during the handshake procedure.

“As mentioned previously, aspects of the sensor interface subsystem 310, the inter-object communication subsystem 312, and/or the remote communication subsystem 314 may be included in the N&T and access layers. The N&T layer provides the functionality of the OSI network layer and the OSI transport layer, and includes one or more transport protocols as well as network and transport layer management. The networking protocols may include the Internet Protocol version 4 (IPv4), Internet Protocol version 6 (IPv6), the GeoNetworking protocol, IPv6 networking with mobility support, IPv6 over GeoNetworking, the CALM FAST protocol, and the like. The IPv6 networking protocol may include mechanisms for interoperability with legacy IPv4 systems. The transport protocols may include UDP/TCP, one or more dedicated ITSC transport protocols, or any other suitable transport protocol, and each networking protocol may be bound to a corresponding transport protocol. The access layer includes a physical layer (PHY) connecting physically to the communication medium; a data link layer (DLL), which may be subdivided into a medium access control sub-layer (MAC) managing access to the communication medium and a logical link control sub-layer (LLC); a management adaptation entity (MAE) to manage the DLL and PHY; and a security adaptation entity (SAE) to provide security services to the access layer. The access layer may also include external communication interfaces (CIs) and internal CIs. The CIs are instantiations of a specific access technology or protocol, such as ITS-G5, DSRC, WiFi, GPRS, UMTS, 5G, Ethernet, Bluetooth, or any other protocol discussed herein. The CIs provide the functionality of one or more logical channels (LCHs), where the mapping of LCHs onto physical channels is specified by the standard of the particular access technology involved.

“Referring back to FIG. 3, the various entities/elements discussed previously may be implemented in hardware, software, or combinations thereof. Hardware implementations may include configurable logic or companion silicon, such as CPLDs, FPGAs, and PLAs, including programmable SoCs programmed with operational logic, or fixed-functionality hardware using circuit technology such as ASIC, CMOS, or TTL technology. Software implementations may include the components shown by FIG. 3 being autonomous software agents and/or artificial intelligence (AI) agents developed using suitable programming languages and development tools/environments, which are executed by one or more processors, or by individual hardware accelerators that are configured with appropriate bit stream(s) or logic blocks to perform their respective functions. Software implementations may additionally or alternatively include instructions in an instruction set architecture (ISA) supported by a target processor, or any of a variety of high-level programming languages that can be compiled into instructions of the ISA. Implementations may also combine hardware and software; in particular, where the main system controller 302 or one of the subsystems of the RTMS 300 includes at least one trained neural network to perform its respective assessments and/or determinations, at least a portion of the main system controller 302 may be implemented in hardware (e.g., an FPGA configured with an appropriate bitstream). The trained neural network(s) may be a multilayer feedforward neural network (FNN), a convolutional neural network (CNN), a recurrent neural network (RNN), or some other suitable neural network. An example hardware computing platform of the infrastructure equipment 61 is described in more detail with respect to FIG. 7.”

“FIGS. 4-6 show example processes 400-600, respectively, in accordance with various embodiments. For illustrative purposes, the operations of processes 400-600 are described as being performed by the various subsystems of the infrastructure equipment 61 and/or the RTMS 300 of FIG. 3. While FIGS. 4-6 illustrate particular examples and orders of operations, the depicted orders of operations should not be construed to limit the scope of the embodiments in any way. Rather, the depicted operations may be reordered, broken into additional operations, combined, and/or omitted altogether while remaining within the spirit and scope of the present disclosure.

“FIG. 4 shows an example map generation process 400 in accordance with various embodiments. The map generation process 400 may be used to generate real-time dynamic maps of objects 64 based on sensor data from the individual sensors 262 and sensor data associated with the objects 64 themselves. Process 400 may also be used to augment the dynamic maps by filling in perception gaps when occlusions are detected. Process 400 begins at operation 402, where the infrastructure equipment 61 (or the map processing subsystem 309) calculates or determines a grid-based environment model of the coverage area 63, which is overlaid on the observed coverage area 63 (e.g., a road). The grid cells may be two-dimensional (2D) or three-dimensional (3D), may each be the same size, and may be defined using absolute GNSS coordinates. Operation 402 may include operation 424, where the infrastructure equipment 61 (or the map processing subsystem 309) determines the map grid boundaries for the environment model, and operation 426, where the infrastructure equipment 61 (or the map processing subsystem 309) stores the grid boundaries and the defined environment model in the mapping DB 320.
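A grid-based environment model keyed to absolute GNSS coordinates, as in operation 402, might map positions to cells as sketched below. The 5 m cell size and the flat-earth approximation for converting degrees to meters are illustrative assumptions, not parameters from the disclosure.

```python
import math

CELL_SIZE_M = 5.0             # uniform grid-cell size (assumption)
METERS_PER_DEG_LAT = 111_320  # rough flat-earth approximation near the RSU

def grid_cell(lat, lon, origin_lat, origin_lon, cell_size=CELL_SIZE_M):
    """Map an absolute GNSS position to a (row, col) grid-cell index,
    relative to the south-west corner (origin) of the coverage area."""
    dy = (lat - origin_lat) * METERS_PER_DEG_LAT
    dx = (lon - origin_lon) * METERS_PER_DEG_LAT * math.cos(math.radians(origin_lat))
    return int(dy // cell_size), int(dx // cell_size)
```

Because every cell has the same size and a fixed origin, both the RSU and the objects can derive identical cell indices from a position without exchanging the full grid.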

“At operation 404, the infrastructure equipment 61 (or the handshake subsystem 306) performs an initial handshake procedure (e.g., handshake procedure 500 of FIG. 5) with objects 64 as the objects 64 enter the coverage area 63. Operation 404 may include operation 428, where the infrastructure equipment 61 (or the handshake subsystem 306) assigns a unique ID to each object 64, and operation 430, where the infrastructure equipment 61 (or the handshake subsystem 306) requests the technical capabilities of the moving objects 64. These technical capabilities may be obtained and stored in the object DB 330 (not shown by FIG. 4) to determine whether or not a particular object 64 is capable of participating in the real-time mapping services and/or filling in perception gaps.

“At operation 406, the infrastructure equipment 61 (or the object detector 305) continuously tracks and records the positions of detected objects 64 using sensor data from the fixed infrastructure sensors 262. The sensor data is used to determine the position, speed, direction (heading), and/or other properties of the moving objects 64. The map processing subsystem 309 makes the properties of the observed objects 64 available as a concise and time-synchronized map 324, which is transmitted back to the tracked objects 64 to aid navigation, CA/AD, and/or other applications implemented by the objects 64. The objects 64 may include means for at least coarsely identifying themselves and communicating their positions in a protocol for secure initial identification. This could include, for instance, GNSS and DGNSS positioning systems, LTE location services, or any other electronic component capable of determining the location or position of an object 64.

“At operation 408, the infrastructure equipment 61 (or the object detector 305) detects objects 64 that are traveling towards (or are predicted to travel through) a region within the coverage area 63 for which additional data is needed, or will be needed in the near future. Operation 408 may include operation 432, where the infrastructure equipment 61 (or the object detector 305) detects grid cells with obstructions, and/or operation 434, where the infrastructure equipment 61 (or the object detector 305) selects grid cells for verification. In these embodiments, after grid cells have been selected at operations 432 and 434, the infrastructure equipment 61 (or the object detector 305) selects or picks one or more objects 64 that are entering (or are predicted to enter) the selected grid cells. At operation 410, the infrastructure equipment 61 (or the object detector 305) determines whether all cells in the environment model are covered. If not, the infrastructure equipment 61 (or the object detector 305) loops back to perform operation 408.

If the infrastructure equipment 61 (or the object detector 305) determines that all cells are covered, then the infrastructure equipment 61 (or the map processing subsystem 309) performs operation 412 to augment the environmental data to be broadcast or multicast to the objects 64. Operation 412 may include operation 438, where the infrastructure equipment 61 (or the map processing subsystem 309) iterates through the list of tracked objects 64 (e.g., the ID records 331 of FIG. 3) for the unique IDs assigned to the objects 64 selected at operation 408. Operation 412 may also include operation 440, where the infrastructure equipment 61 (or the messaging subsystem 307) adds flags for the selected objects 64. In embodiments, the messaging subsystem 307 may augment the payload data of the real-time dynamic map messages for the identified moving objects 64 with flags or DEs indicating send information requests for the selected objects 64 to transmit a reception ACK, sensor data, and/or position data. Reception ACKs may be requested from several objects 64 in order to determine or achieve statistical confidence about communication cell coverage. Sensor data and position data may be requested to fill in perception gaps in the overall map 324 generated by the map processing subsystem 309. These flags and/or DEs may be included in future broadcast/multicast messages or PDUs whenever feedback information is needed by the infrastructure equipment 61 from the selected objects 64. In this way, the infrastructure equipment 61 (or the messaging subsystem 307) can switch object 64 responses on and off for specific grid cells. At operation 414, the infrastructure equipment 61 (or the messaging subsystem 307) broadcasts the messages, including the real-time dynamic map segment data, to all tracked objects 64, including the objects 64 that were selected for the send information request.
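Operations 438/440 amount to iterating the tracked-object list and switching per-object request flags on for the selected grid cells. The sketch below is an illustrative reading of that step; the record layout and the simple "first N objects get an ACK request" sampling policy are assumptions.

```python
def augment_payload(tracked, selected_cells, ack_sample=2):
    """Return per-object request flags for the next broadcast message.

    tracked:        dict of object ID -> current grid cell (row, col)
    selected_cells: set of grid cells needing additional sensor/position data
    ack_sample:     number of objects asked for a reception ACK, to build
                    statistical confidence about cell coverage (assumption)
    """
    flags = {}
    for i, (oid, cell) in enumerate(sorted(tracked.items())):
        in_gap = cell in selected_cells
        flags[oid] = {
            "sensor_request": in_gap,     # fill the perception gap
            "position_request": in_gap,
            "ack_request": i < ack_sample,  # simple coverage-sampling policy
        }
    return flags
```

Because the flags ride along in the regular map broadcast, turning responses on or off for a grid cell costs no extra messages, which matches the "switch responses on/off" behavior described above.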

“At operation 416, the infrastructure equipment 61 (or the inter-object communication subsystem 312) continuously monitors for response messages from the selected objects 64. The response messages may be sent individually over a point-to-point connection using a suitable wireless/V2X communication technology, or may be sent as multicast or broadcast transmissions. In some embodiments, other objects 64 may listen for the multicast or broadcast response messages, compare the data therein with their own observations, and annotate their own map data based on the comparison.

“At operation 418, the infrastructure equipment 61 (or the main system controller 302 and/or the map processing subsystem 309) processes the data received from the selected objects 64. When the send information request includes a request for an ACK response message and no ACK is received, this may indicate poor wireless coverage, such as a cell coverage hole or network overload. When the send information request includes a request for additional sensor data, the map processing subsystem 309 checks the received sensor data for consistency and fuses the received sensor data with the sensor data obtained from the infrastructure sensors 262. In some cases, trust mechanisms may be used to verify or determine the trustworthiness of the received sensor data. As one example, “sanity checks” may be performed by comparing sensor data received from several objects 64 located at or near the cells selected at operation 434, as well as sensor data captured by the sensors 262 for the same cells. As another example, out-of-band attestation and security measures, such as safety islands, TXE, and/or TPM elements within the objects 64, may be used.
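One way to read the consistency check and fusion in operation 418 is sketched below: an object's reported detections are only fused when they agree with the infrastructure's own detections within a tolerance. The 1.5 m tolerance and fusion by simple averaging are illustrative assumptions, not the disclosed trust mechanism.

```python
def fuse_with_sanity_check(infra_detections, object_detections, tol_m=1.5):
    """Fuse (x, y) detections that pass a nearest-neighbour consistency check.

    infra_detections:  positions seen by the fixed sensors 262 for a cell
    object_detections: positions reported by an object 64 for the same cell
    Returns (fused, rejected): averaged positions that matched, and reports
    with no infrastructure counterpart (treated as untrusted).
    """
    fused, rejected = [], []
    for ox, oy in object_detections:
        near = [(ix, iy) for ix, iy in infra_detections
                if abs(ix - ox) <= tol_m and abs(iy - oy) <= tol_m]
        if near:
            ix, iy = near[0]
            fused.append(((ix + ox) / 2, (iy + oy) / 2))  # trivial averaging fusion
        else:
            rejected.append((ox, oy))  # inconsistent -> do not trust
    return fused, rejected
```

In a perception-gap cell the infrastructure has no reference detections, so in practice the cross-check would use overlapping reports from several objects 64 rather than infrastructure data alone.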

“FIG. 5 shows an example handshake procedure 500 in accordance with various embodiments. The handshake procedure 500 may correspond to operations 404, 428, and 430 of process 400 shown by FIG. 4. The handshake procedure 500 between an object 64 and the infrastructure equipment 61 facilitates object annotation, wherein objects 64 entering the coverage area 63 are assigned a unique ID and inform the infrastructure equipment 61 about their object capabilities for wireless communication, self-localization, and environment sensing. In this example, the initial handshake procedure 500 between an object 64 and the infrastructure equipment 61 may take place when the object 64 passes by the sensors 262 for a first time (e.g., when entering an intersection or entering a highway in the coverage area 63). If the object 64 is temporarily occluded, the handshake procedure 500 may be repeated at a later time, when the sensors 262 of the infrastructure equipment 61 detect the object 64 again.

“Process 500 begins at operation 502, where the infrastructure equipment 61 (or the object detector 305) detects the object 64, making the object 64 known to the infrastructure equipment 61. At operation 504, the infrastructure equipment 61 (or the handshake subsystem 306) generates a unique ID for the object 64. The unique ID may be reused after the object 64 leaves the coverage area 63, and unique IDs may only need to be unique within the scope of the infrastructure equipment 61, or within a portion thereof. At operation 506, the infrastructure equipment 61 may provide an optional service to improve the object 64's positioning information (e.g., using differential global navigation satellite systems (DGNSS)). At operation 508, the object 64 initiates an initialization procedure (“init( )”) with the infrastructure equipment 61, wherein the object 64 establishes a connection with the infrastructure equipment 61 using a suitable V2X communication technology 250 and sends a request message announcing its intention to use the map data. At operation 510, the infrastructure equipment 61 (or the handshake subsystem 306) provides a time synchronization (“TimeSync”) message to the object 64, which allows the object 64 to synchronize its local time with that of the infrastructure equipment 61. At operation 512, the object 64 transmits its current position and a timestamp back to the infrastructure equipment 61 (or the handshake subsystem 306).

“At operation 514, the infrastructure equipment 61 (or the handshake subsystem 306) sends an inquiry for the technical capabilities of the tracked object 64, and at operation 516, the object 64 sends a message indicating or including its technical capabilities. The technical capabilities may be defined as a list of message types, along with details about the content of each message that the object 64 can send back. This allows the infrastructure equipment 61 to send information requests to the object 64 using simple triggers, such as single-bit flags. The infrastructure equipment 61 keeps the association between the object 64's unique ID and the supported message types for as long as the object 64 is within the communication range of the infrastructure equipment 61. At operation 518, the infrastructure equipment 61 (or the handshake subsystem 306) transmits the unique ID to the object 64 to complete the handshake procedure 500. After the handshake procedure 500 is performed, the stationary sensors 262 are used to track the object 64 and continuously update its position as it travels within the coverage area 63.
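The message sequence of operations 502-518 can be condensed into the following sketch. The class and method names are illustrative, and the TimeSync exchange is reduced to a one-way offset; a real implementation would account for propagation delay.

```python
import itertools
import time

class Infrastructure:
    _ids = itertools.count(1)  # sequential IDs for illustration (op 504)

    def __init__(self):
        self.registry = {}  # unique ID -> supported message types

    def handshake(self, obj):
        uid = next(self._ids)                 # op 504: generate unique ID
        obj.time_sync(time.time())            # op 510: TimeSync message
        pos, ts = obj.report_position()       # op 512: position + timestamp
        caps = obj.report_capabilities()      # ops 514/516: capability inquiry
        self.registry[uid] = caps             # keep ID <-> message-type mapping
        obj.unique_id = uid                   # op 518: transmit unique ID
        return uid

class Vehicle:
    def __init__(self, pos, caps):
        self.pos, self.caps, self.unique_id = pos, caps, None
        self.clock_offset = 0.0

    def time_sync(self, infra_time):
        # Naive sync: store the offset to the infrastructure clock.
        self.clock_offset = infra_time - time.time()

    def report_position(self):
        return self.pos, time.time() + self.clock_offset

    def report_capabilities(self):
        return self.caps
```

After `handshake()` returns, the registry entry is what lets later broadcasts trigger responses with the single-bit flags described above.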

“In some embodiments, the handshake procedure 500 may be repeated periodically, or after a set number of infrastructure nodes have been passed, to calibrate the tracking capabilities of the infrastructure sensors 262 and/or the object detector 305. In some embodiments, privacy concerns may be addressed by randomly changing the unique IDs between handshakes. Following the handshake procedure 500, the environmental map 324 may be updated using the combination of the position information provided by the object 64 and consecutive object tracking based on sensor data from the sensors 262. The object 64 may also provide its own sensor data to update the environmental map 324. In some embodiments, the subsequent transmission of the map data 324 may take place as one-way communication from the infrastructure equipment 61 to the objects 64, which reduces or minimizes the latency of the wireless information exchange. Some embodiments may assign unique IDs using universally unique identifiers (UUIDs), which may involve suitable random number generators, hash functions, and the like. Some embodiments may manage the unique IDs using object tracking techniques, such as those discussed previously. Some object tracking techniques use IDs for tracking individual objects in video or other sensor data, and one embodiment may use a buffer or table to store associations between the IDs used by the object tracker and the unique IDs used in the map broadcasts.
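The UUID-based, privacy-preserving ID management described above might look like the following sketch: the map broadcasts carry a short-lived random ID, while a local table keeps the association to the persistent tracker ID. The table layout and method names are assumptions.

```python
import uuid

class IdManager:
    """Associates persistent tracker IDs with rotating broadcast UUIDs."""

    def __init__(self):
        self.tracker_to_broadcast = {}  # tracker ID -> current broadcast UUID

    def assign(self, tracker_id):
        bid = uuid.uuid4().hex          # fresh random UUID per handshake
        self.tracker_to_broadcast[tracker_id] = bid
        return bid

    def rotate(self, tracker_id):
        """Re-randomize the broadcast ID between handshakes (privacy)."""
        return self.assign(tracker_id)
```

Only the RSU holds the tracker-to-broadcast table, so an eavesdropper on the broadcasts cannot link an object's IDs across handshakes.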

“FIG. 6 shows an example object selection procedure 600 for selecting objects 64 for send information requests, in accordance with various embodiments. In embodiments, the object selection procedure 600 may be used to identify sub-sections of the coverage area 63 for which additional information is needed, and to select individual objects 64 from a list of tracked objects 64. The infrastructure equipment 61 may use process 600 to enhance its environmental perception, for example, by augmenting its dynamic maps with sensor readings from the tracked objects 64. Process 600 also allows the infrastructure equipment 61 to verify that messages sent using broadcast or multicast protocols are being received by the objects 64.

“Referring to FIG. 6, at operation 602, sections of the coverage area 63 for which additional information is needed are selected. The entity performing the selection may vary depending on whether the reason for the additional information is to fill in perception gaps or to receive acknowledgement messages. For filling in perception gaps, the map processing subsystem 309 detects obstructions of the fixed sensors 262, or other reasons that reduce the completeness of the environmental map 324, and selects the sections or regions (e.g., grid cells) corresponding to the occluded area at operation 602. For verifying the reception of broadcast/multicast messages (e.g., receiving ACKs), the main system controller 302, the object detector 305, or some other subsystem of the RTMS 300 selects test sections or regions (e.g., grid cells) at random, using a round-robin scheduling mechanism, based on a user/operator selection, or based on observed or simulated network traffic. In some embodiments, a network management (NM) entity may instruct the infrastructure equipment 61 to perform the verification for self-optimizing/organizing network (SON) functions.”

“At operation 604, the map processing subsystem 309, the main system controller 302, the object detector 305, or another subsystem of the RTMS 300 determines the boundaries of the sub-sections or regions of the coverage area 63 that require additional information, which may be based on the map segments 325 and/or the overall map 324 stored in the mapping DB 320.

“At operation 606, the object detector 305 generates or creates a list of objects 64 passing through (or predicted to travel at or near) the selected sections (“L1” in FIG. 6), which may be based on the ID records 331 obtained from the object DB 330. The list L1 may be stored in a suitable data structure in the object DB 330. At operation 608, the object detector 305 generates or creates a list of objects 64 from L1 that have the capabilities to provide the requested information (“L2” in FIG. 6), which may be based on the capabilities records 332 obtained from the object DB 330. The list L2 may be stored in a suitable data structure in the object DB 330. At operation 610, the object detector 305 generates or creates a sorted list of timestamps, object IDs, and send information requests for each object 64 in L2 (“L3” in FIG. 6), which may be based on the stored object data records 333 obtained from the object DB 330; these records may indicate the velocity/speed, position, and direction of the objects 64. The list L3 may be stored in a suitable data structure in the object DB 330. The list of timestamps may also be based on the speed, position, and direction/heading of each object 64 in L2, as detected using the sensor data from the sensors 262, and/or a known communication delay.

“At operation 612, the inter-object communication subsystem 312 starts broadcasting/multicasting the send information requests at each determined timestamp in the list L3. At operation 614, the inter-object communication subsystem 312 determines whether there are any remaining timestamps in the list L3, and if so, loops back to perform operation 612. If the inter-object communication subsystem 312 determines that there are no more timestamps in the list L3, then the inter-object communication subsystem 312 proceeds to operation 616, which involves repeatedly broadcasting/multicasting the send information requests for as long as each object 64 in L2 is at or near the selected sections or regions. That is, at operation 616, the inter-object communication subsystem 312 determines whether any objects 64 in L2 are located at or near the selected sections or regions, and if so, loops back to perform operation 612. If the inter-object communication subsystem 312 determines that none of the objects 64 in L2 are located at or near the selected sections or regions, then process 600 may end or may repeat as necessary.
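The L1 → L2 → L3 filtering of operations 606-610 can be summarized as follows. The record fields (`cell`, `caps`, `eta_s`) and the constant communication delay are illustrative assumptions; the disclosure only specifies that the timestamps derive from each object's speed, position, heading, and a known communication delay.

```python
COMM_DELAY_S = 0.05  # assumed known communication delay

def build_lists(objects, selected_cells, needed_caps, now):
    """objects: list of dicts with 'id', 'cell', 'caps', and 'eta_s'
    (predicted seconds until the object reaches the selected section)."""
    # L1: objects passing through (or predicted to reach) the selected cells
    l1 = [o for o in objects if o["cell"] in selected_cells]
    # L2: objects from L1 with the capabilities to provide the requested info
    l2 = [o for o in l1 if needed_caps <= o["caps"]]
    # L3: (send timestamp, object ID) pairs sorted by send time, scheduling
    # each request ahead of the object's arrival by the communication delay
    l3 = sorted((now + o["eta_s"] - COMM_DELAY_S, o["id"]) for o in l2)
    return l1, l2, l3
```

Sorting L3 by timestamp is what lets operation 612 simply walk the list and transmit each request when its time arrives.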

“II.”

“FIG. 7 illustrates an example of infrastructure equipment 700 in accordance with various embodiments. The infrastructure equipment 700 (or “system 700”) may be implemented as the infrastructure equipment described with respect to FIGS. 1 through 6, a base station, radio head, or RAN node such as the RAN node 256 of FIG. 2, the server(s) 260, and/or any other element/device discussed herein. In other examples, the system 700 could be implemented in or by a UE.
