Internet – Gary M. Zalewski, Albert S. Penilla

Abstract for “Machine learning methods, systems and methods for managing retail store processes that involve the automatic gathering items”

“Devices and systems are provided to process requests for items to be pre-gathered from a store. The requests are processed using one or more processing entities. One method includes receiving tracking data from a portable device associated with an online account of a user. One or more items are identified from a shopping list associated with the online account. The tracking data is processed to determine a current route of the portable device to the store. Instructions are sent to create an order for pre-gathering the one or more items from the store; the instructions are sent when the device’s current route to the store is confirmed. The method also involves receiving an indication that the one or more items have been gathered, and sending a notification to the online account of the user that a package of the ordered items has been gathered and is ready for pick-up at the store.

Background for “Machine learning methods, systems and methods for managing retail store processes that involve the automatic gathering items”

“Over the years, there have been many advances in processing devices and in devices that communicate over networks. Electronic devices are usually designed for specific purposes; some, like smartphones and general-purpose computers, are more versatile. Although these devices are versatile and powerful, they require network connections to send or receive data. Network connections are provided by Internet service providers (ISPs); they are available for private use (e.g., homes or businesses) and can also be obtained in public areas. Before access is granted, users must establish connections via their devices or interfaces.

“Devices that have network access to exchange data need managed setups and adequate power to process the data. Unfortunately, these requirements can hinder the simplification of devices that could gain from data exchanges over the Internet. It is in this context that the embodiments of this disclosure arise.

“The inventions described herein concern communications devices, communication methods, and methods for using digital detection, sensing, and tracking systems to determine interactions with retail items in order to facilitate cashier-less transactions. One embodiment provides a method for tracking items in a store and detecting interactions using one or more sensors. The interactions can be classified and identified to determine whether the user intends to buy an item. A smart retail outlet can classify the shopping activity of one or several shoppers into one or multiple categories; if a user is determined to fit in one of these categories, or to change category, an action can be taken to respond dynamically to the status of the shopper. The store can track the items taken into the shopper’s possession, detect any returns to the shelves, and identify items misplaced on the wrong shelves. One embodiment uses machine learning algorithms to predict the shopping list of a customer. One configuration allows a retail outlet to use a combination of input data and machine learning models to classify shopping behavior. This allows a family or group to take and return items against one shopping account. The present invention also allows for the identification of each shopper who takes or returns items from a shelf, even when multiple shoppers are nearby and interfacing with the shelf simultaneously.

“In some embodiments, processing the tracking data includes accessing deep learning models using feature input of at least user location and shopping history. The feature input includes time-based actions characterizing the user’s past shopping history, along with global positioning data that the user has enabled, allowing identification of the user’s location based on location data received from the portable device.
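As a rough illustration of the feature input described above, the sketch below assembles a flat feature vector from hypothetical location and shopping-history inputs. The field names (`item_id`, `hour_of_day`) and the summary statistics are assumptions for illustration, not the patent’s actual model input.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ShoppingEvent:
    """One past purchase (hypothetical record shape)."""
    item_id: str
    hour_of_day: int   # hour when the purchase happened (0-23)

def build_feature_input(lat: float, lon: float,
                        history: List[ShoppingEvent]) -> List[float]:
    """Combine user-enabled GPS coordinates with time-based features
    summarizing past shopping history, as the embodiment describes."""
    if not history:
        return [lat, lon, 0.0, 0.0]
    hours = [e.hour_of_day for e in history]
    avg_hour = sum(hours) / len(hours)   # crude time-of-day signal
    return [lat, lon, float(len(history)), avg_hour]
```

In practice such a vector would feed a trained deep learning model; the point here is only the combination of location and time-based history features.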

“In some embodiments, a portable device’s current route is a predicted or actual path that the user is expected to take from one location to another. The predicted path is calculated using global positioning system (GPS) data, dead reckoning, or GPS combined with machine learning, to classify whether the portable device’s current route is still heading to the store.
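A minimal stand-in for this route classification, assuming positions arrive as planar (x, y) coordinates rather than raw GPS fixes, and using a simple "distance is mostly decreasing" rule in place of a trained classifier:

```python
import math

def distance(a, b):
    # Euclidean approximation over a small area; a real system would use
    # haversine distance on latitude/longitude fixes.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def still_heading_to_store(track, store):
    """Classify whether recent positions keep closing in on the store.
    `track` is a list of (x, y) positions in time order; `store` is the
    store's (x, y) position. The threshold below is an assumption."""
    if len(track) < 2:
        return False
    dists = [distance(p, store) for p in track]
    # Count the steps on which the device moved closer to the store.
    closing = sum(1 for i in range(1, len(dists)) if dists[i] < dists[i - 1])
    return closing >= (len(dists) - 1) * 0.5
```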

“In some embodiments, the instructions for creating the task to pre-gather the items include an identification of the items from the shopping list and a time frame within which the user is expected to enter the store or to need one or more of the items.

“In some embodiments, sending the instructions is triggered based on (a) a calculation of the expected time needed to gather one or more of the items, and (b) a forecasted future time at which the user will either arrive at the store or need the one or more items ready for pickup.”
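The trigger in (a) and (b) can be sketched as a timing comparison. All times below are minutes on a shared clock, which is a simplification; a deployed system would work with timestamps and a trained arrival-time estimate.

```python
def should_send_gather_order(now_min: float,
                             eta_min: float,
                             gather_minutes: float) -> bool:
    """Send the pre-gather instructions once the time remaining before
    the user's forecasted arrival (eta_min) is no more than the
    expected time needed to gather the items (gather_minutes)."""
    return (eta_min - now_min) <= gather_minutes
```

For example, with a 10-minute gather time and an arrival forecast 30 minutes out, the order is held until 20 minutes before arrival.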

“In some embodiments, the current route of the portable device is determined to remain headed to the store if the current route is based on processed global positioning system (GPS) data of the portable device, and at least one processing entity associated with the store has not received data indicating that the user is not planning to visit the store, is not likely to visit the store, or plans to visit the store at a different time.”

“In some embodiments, the indication is a payload or signal sent to the portable device of the user associated with the user account, or output on an in-store device or from the store, indicating that the one or more items are ready for pickup.”

“In some embodiments, the indication is a state received by at least one said processing entity in connection with a status and a description of one or more of the items that have been gathered using the shopping list.”

“In some embodiments, the indication is an actual receipt for the pre-gathered items.”

“In some cases, the items available for pickup can be identified to the user upon entering the store, or upon leaving the store after arriving at the store.

“In some embodiments, the items ready for pickup can be assigned to the user immediately upon entry to the store, after arriving at the store, or before exiting the store.”

“In some embodiments, the pre-gathered items are items pre-designated on the user’s shopping list, or a subset thereof, or items the user is predicted to purchase based on analysis of the user’s shopping history or of data from at least one processing entity capable of keeping track of consumption of items associated with the user.”

“In some embodiments, an item that has already been gathered, or is scheduled to be gathered, is removed from the shopping list or marked on the shopping list as being gathered.”

“In some embodiments, the user is determined to remain heading to the store based upon receipt of data from the portable device of the user, or from an associated processing entity, that confirms or predicts that the user is still headed to, or intends to go to, the store.”

“In some embodiments, at least one of the identified items, or of the items on the shopping list, is automatically identified using a deep learning model that predicts whether the one or more items will be purchased by the user.”

“In some embodiments, the store has tracking sensors in communication with at least one of the processing entities to monitor interaction data between the user and one or more items in the store. If the processing entity determines that the user has taken an item that was not already pre-gathered for the user, it sends a guidance message to the portable device, to a speaker in the store, or to a display or other store asset proximate to the user.

“In some embodiments, the store has tracking sensors in communication with at least one of the processing entities to monitor the user’s location in the store. At least one processing entity sends a wireless payload to a store clerk indicating the user’s current location in the store, so that the package of the one or more items ordered and gathered by the clerk can be delivered to the user at the user’s current location in the store.

“In some embodiments, the store is further configured with tracking sensors in communication with at least one said processing entity for monitoring the location of the user in the store, wherein said processing entity provides the in-store location of the customer to a robot. The robot has coupled thereto a package containing said one or more items ordered and gathered from the store, and the robot delivers the items to the current location of the user in the store.”

“In some embodiments, the store has tracking sensors in communication with at least one of the processing entities to monitor the user’s changing location inside the store and to process interaction data as input features of a machine learning model. The processing entities can also access the user’s account to select additional items for the user, the selection being based in part upon the user’s path through the store.

“In some instances, the selection of the items that are pre-gathered is based upon the user’s path through the store, the time the user allocates to the store visit, a preference of the user, or a promotion of the store.”
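One way to picture this selection logic is the sketch below, which favors promoted items sitting on aisles the user’s path already crosses and matching the user’s preferences. All of the data structures (aisle labels, preference sets, promotion tuples) are hypothetical, introduced only for illustration.

```python
def select_additional_items(path_aisles, preferences, promotions, max_items):
    """Pick up to max_items extra items to pre-gather.

    path_aisles: set of aisle labels on the user's route (assumed input)
    preferences: set of item names the user likes (assumed input)
    promotions:  list of (item_name, aisle_label) currently promoted
    """
    picks = []
    for item, aisle in promotions:
        # Only suggest items that are both on the route and preferred.
        if aisle in path_aisles and item in preferences:
            picks.append(item)
        if len(picks) >= max_items:
            break
    return picks
```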

“In some embodiments, the store is further configured with tracking sensors in communication with at least one said processing entity for monitoring a changing position of the customer inside the store and providing navigation assistance to an asset or portable device of the user. The guidance assists the user in traversing the store along a navigation route through aisles that contain one or more items on the shopping list. One of said processing entities may further access the user account to select additional items, the selection being based in part upon the user’s movement within the store, the user’s location, an amount of time, or the locations of other users or of items in the store.”

“In some embodiments, each of the one or more processing entities is defined by a computer, a server, a Lambda function, a process, a service, a container, a cloud processing system, or a smartphone.

“Each of said processing entities is coupled to or interconnected with a network that allows access to storage, data, and instructions.”

“In some embodiments, confirmation that the current route of the portable device remains heading to the store includes receiving information, provided by the user to the user account, that the user is currently headed toward the store or is heading there at a specific time.”

“In one embodiment, a method is provided for processing requests for items to be pre-gathered from a store. The requests are processed using one or more processing entities. One operation receives tracking data from a portable device associated with an online account of a user. One or more items are identified from a shopping list associated with the online account. The tracking data is processed to determine a current route of the portable device to the store. Instructions are sent to create an order for pre-gathering the one or more items from the store; the instructions are sent when the device’s current route to the store is confirmed. The method also involves receiving an indication that the one or more items have been gathered, and sending a notification to the online account of the user that a package of the ordered items has been gathered and is ready for pick-up at the store.

“One embodiment of a method involves identifying a shopper at a store using sensors and associating the shopper with a shopping account. One or more sensors are used to monitor the shopper at the store; the output of these sensors is input to one or more deep learning models, which generate classification data for a given scenario. The method involves receiving natural language voice input containing words spoken by the shopper. The voice input is processed along with the classification data to produce a response to the words spoken by the shopper, with the response being within a context related to the scenario.

“In some embodiments, the one or more sensors used to monitor the shopper are linked with the store and coordinated in order to identify the shopper’s movement while in the store, including movement of the shopper relative to one or more of the items in the store. The scenario is associated with actions taken by the shopper, as sensed by the one or more sensors, with respect to said items and said movements, and the response is within the context of the scenario.”

“In some embodiments, the one or more sensors used to monitor the shopper are linked with the store and coordinated in order to identify interactions of the shopper with said one or more items. The scenario is associated with said interactions by the shopper in relation to one or more of said items, and the response is within the context of interactions that relate to the scenario.”

“In some embodiments, the response changes dynamically with the shopper’s interactions with said one or more items, since the shopper’s movement around the store places the shopper closer to or farther from specific items.

“In some embodiments, said response includes guidance information that directs the shopper to move toward a particular item or group of items. In some cases, beamforming is used to capture audio.”

“In some embodiments, the guidance includes one or more directions that the shopper should follow to move to another area within the store where the item or group of items is located.”

“In some instances, the classification data contains an indication that the shopper took an item from a store shelf or returned the item to the shelf.

“In some embodiments, the devices include a wireless communication chip as well as an integrated power generating or power delivering device. In one embodiment, these devices are called wireless coded communication (WCC) devices. These devices can harness power to enable or cause activation of a communication system to transmit data. The devices can be pre-configured to log events and state, cause actions, send messages, or request data from end nodes. Some embodiments enable wireless communication that allows access to the Internet, as well as cloud processing of received data and processing of data that is returned or transmitted.

A WCC device, as the term is used herein, is one that is equipped with wireless transmission capabilities (e.g., a transmitter or transceiver, Wi-Fi chip, Bluetooth chip, or radio communication chip) together with a power supply or power pump.

“In one embodiment, a method is provided for tracking items in a physical store for cashier-less purchase transactions. The method involves detecting a wireless coded communication (WCC) portable device in the physical store; the WCC device can be associated with an online account of a shopper. A server then receives sensor data about the WCC device’s location in the physical store and its proximity to items. The server receives interaction data, generated by the shopper with an item on a shelf of the physical store, using one or more sensors in the physical store and the WCC device, to determine whether the item is one to be purchased. The interaction data can be used to identify the type of the item and to add it to an electronic shopping cart. The method also includes the server processing an electronic charge for said item to a payment service used by the shopper. Other portable devices can also serve as WCC devices, and the WCC device, or any portable device, can run an application for cashier-less transactions.

A power pump device can be configured, in one configuration, to receive force or movement inputs from an object or user. A force can be an intentional input from the user, for example, pushing a button or moving a slider. An indirect force is another type of force, one that the user does not intend as input. Such a force is created when the user moves, lifts, shifts, or otherwise changes the orientation or position of a physical object.

“A physical object can be, for example, a door, where the intent of the user is simply to close the door. Some embodiments may use a physical object such as a shelf or a retail item. In one embodiment, the indirect force of closing a door is transferred to a WCC device. The WCC device receives the force even though the user did not intend to provide an input, meaning that input to a WCC device that is not powered by a battery can be received indirectly. This input is used to activate processing on the WCC device and to transmit or request data wirelessly via a network.

“One embodiment provides a method of tracking items in a store for cashier-less purchase transactions. The method involves receiving sensor data about items on shelves in the store and receiving interaction data of a user with an item on a shelf in the store. If a take event is confirmed, the interaction data can be used to identify the type of the item and add it to an electronic shopping cart of an account of the user; this account is used to process the cashier-less transactions. The embodiment involves the server receiving data indicating that said user left an area of the store, which is taken as evidence that the user intends to purchase the item. After confirming that the user has left the store, the server can process, or instruct the processing of, an electronic charge to the user’s account.

“In some embodiments, the interaction data is obtained by tracking the volume or area where the item is located. This interaction data can be used to track movement of the item, to determine whether it has been removed from the shelf, and to add the item to an electronic shopping cart.

“In some embodiments, said movements are identified at least using image data from a camera. The camera produces image data that is part of the sensor data received by the server.

“In some embodiments, the confidence level in identifying said movement is increased using input from one or several other sensors. Each sensor is given a weighting that assigns a particular sensor more or less significance in said identification of the movement.”
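The weighted combination described here can be pictured as a normalized weighted average of per-sensor confidences. The [0, 1] confidence scale and the averaging rule below are assumptions for illustration, not the patent’s stated formula.

```python
def fused_confidence(readings):
    """Fuse per-sensor confidences into one score.

    readings: list of (confidence, weight) pairs, where confidence is
    in [0, 1] and weight reflects how much that sensor is trusted for
    this particular interaction (e.g., camera vs. weight sensor).
    """
    total_weight = sum(w for _, w in readings)
    if total_weight == 0:
        return 0.0
    return sum(c * w for c, w in readings) / total_weight
```

Because the weights are per-interaction inputs, the same function supports the dynamic re-weighting mentioned later, where a sensor’s importance changes with how the user interacts with the shelf.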

“In certain embodiments, the method also includes receipt by a server of data indicating that a user device is present in the store. This data is used to identify the user account for the electronic shopping cart, which allows the user to purchase one or more items in the store using the cashier-less transactions.

“In certain embodiments, the method also includes receipt by a server of tracking data regarding the user’s movement in the store. The movement of the user is indicative of browsing said items on one or several shelves. Tracking includes tracking a hand of the user, which provides the interaction data of the user with the item.

“In some embodiments, the method also includes the server sending display information to render on a display screen that is proximate to the shelf housing the item with which the user is interfacing. The display information is tailored for the user based on profile information of the user.”

“In some embodiments, the method also includes the server sending audio data to be output from a region in the store that is proximate to the shelf housing the item. The audio data is customized for the user based upon information from the profile associated with the user account.

“In some embodiments, the audio data is output in a directional tracking format. The directional tracking format implements audio steering to direct the audio data toward the user’s head. This audio steering concentrates the sound for the user and substantially excludes distribution of the sound beyond the user’s head.

“Some embodiments of the method include the server receiving eye or head gaze information about the user. The eye or head gaze information indicates the user’s attention to an item, and is collected to determine the product information that interests the user. The server collects product information relevant to the user and uses this information to make recommendations for other items and/or to offer discounts or promotional information.

“In some embodiments, the method also includes the server receiving first data indicating that there was a take event for the item and second data indicating that a tag of the item is in communication with the user’s device. The user’s device verifies that the item was taken and not returned before the electronic charge is processed. The store has at least one sensor that can monitor when the user is leaving the store.

“In some embodiments, the method also includes the server receiving proximity data for the user and a second user. The user and the second user are determined to be close to each other near the shelf, and the proximity data makes it possible to distinguish between the user and the second user. Cameras placed in the store generate part of the sensor data. The cameras produce image data that can be tracked to identify the limbs of the user and of the second user; this data can then be used to track the movement of said limbs and verify that it is the user who is actually interacting with the item.

“In some embodiments, the interaction data contains one or more features that describe the interaction. These features are used as inputs to a model to classify the interaction as a take event or a return event of the item from the shelf or back to the shelf.
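A toy stand-in for this feature-based classification: in place of a trained model, the sketch below uses a single assumed feature, the net change in shelf weight, to separate take events from return events. The feature name is hypothetical.

```python
def classify_interaction(features):
    """Classify a shelf interaction as 'take' or 'return'.

    features: dict of interaction features; this sketch only looks at
    'weight_delta_grams' (negative when weight left the shelf), a
    hypothetical feature standing in for the full model input.
    """
    if features.get("weight_delta_grams", 0) < 0:
        return "take"     # weight left the shelf: item was taken
    return "return"       # weight added (or unchanged): item returned
```

A deployed system would feed many such features (camera motion, proximity, hand tracking) into a learned classifier rather than a single threshold.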

“In some embodiments, classification is provided that uses the model to assist in predicting the actions of the user or other users. The model refines its classification based on previous interactions, and then assists in deciding whether or not to add the item, or additional items, to the electronic shopping cart of the user or another user.

“In some embodiments, classifying uses the model to generate recommendations to the user (or other users) based on learned probable actions taken by users when they interact with the item or items. The continuous learning process uses inputs from the store and/or other stores that are connected to a shopping system that enables cashier-less transactions.

“In one embodiment, the invention provides a method for tracking items in a store and processing a cashier-less purchase transaction. The method collects sensor data about items on shelves in the store and identifies a user who enters the store, the user having at least one device with wireless communication on which an application associated with a user account is installed. The user’s movements in the store are tracked; the tracked movements include detection of the user’s proximity to a shelf containing an item, and interaction data of the user with the item is detected. This interaction data can be used to identify the type of the item and to add the item to an electronic shopping cart. The method also includes receiving data indicating that the user left an area of the store while the item was in the electronic shopping cart; leaving the area is indicative of the user’s intent to buy the item. One or more sensors are used to confirm that the user has left the designated area. The method then processes an electronic charge, for the item, to a payment system associated with the user account based upon said confirming.”

“When the WCC device uses a power pump, the force, pressure, or movement received can be transmitted or imparted to a mechanically flexible element or device. One embodiment may use a flexible element that is a piezoelectric, a device that produces a voltage when subjected to force or movement. The resulting voltage can be harvested and stored in a storage device. In one embodiment, the storage device is a capacitive storage device (e.g., a capacitor); other embodiments allow the storage device to be a battery, a cell, or a combination of rechargeable cells. If the storage device is capacitive, it is used as a power source for an integrated circuit of the WCC device.
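The capacitive storage step can be illustrated with the standard capacitor-energy formula E = ½CV². The component values and the per-transmission energy cost below are made-up numbers, chosen only to show how a passive device might decide whether the harvested charge covers one coded transmission.

```python
def stored_energy_joules(capacitance_farads: float,
                         voltage_volts: float) -> float:
    """Energy held in a capacitive storage device: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

def can_transmit(capacitance_farads: float,
                 voltage_volts: float,
                 tx_cost_joules: float) -> bool:
    """True when the stored charge covers one transmission's energy cost
    (tx_cost_joules is a hypothetical per-transmission budget)."""
    return stored_energy_joules(capacitance_farads, voltage_volts) >= tx_cost_joules
```

For example, a 100 µF capacitor charged to 3.3 V stores about 0.54 mJ, enough for a 0.1 mJ transmission but not a 1 mJ one under these assumed figures.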

“In one embodiment, the integrated circuit can communicate data, a code, or a message to an end node. The received data can be stored by the end node, or an action can be performed, e.g., based on preprogramming.

“In one embodiment, a wireless coded communication (WCC) device is used. In one configuration, the WCC device is passive in that it is not connected to any active power source, such as a battery or power cord. The WCC device can be activated for a time when a force, such as a mechanical force, is applied to its power pump; this allows it to process and transmit coded data to the appropriate end point or end node. A WCC device can be programmed with multiple functions or methods in some embodiments. A pre-activation setting can select the method or function; in other cases, the method is selected automatically based on available power, with methods that use more power being performed when more power is available. A payload may be generated by certain embodiments when an input feature, such as a slider, switch, or selection control, coupled to the WCC device is actuated. The payload can be sent remotely to an end node, processed locally by the WCC device, or both.

In some cases, functions can be selected and triggered based on the results of processing the payload prior to, during, or after transmission. The WCC device can detect or image an identity or attribute of a user, including a biometric signature or fingerprint. It also allows for the detection or imaging of a scene, an object position, a QR code or RFID code, a status, a temperature, a pressure, the presence or absence of a condition, a vibration, or any signal or source that is coupled to, near, or within the sensing range of a sensor coupled to or integrated with the WCC device.

“In other embodiments, devices, systems, or methods are provided. One example is a wireless coded communication (WCC) device, which can communicate wirelessly with other devices, such as over a network. A WCC device is a type of Internet of Things (IoT) device: it can sense, process, send, respond to, and exchange data with other WCC devices, network devices, user devices, and/or other systems over the Internet. A WCC device can include a power source to enable low-power usage, such as sending data, requesting data, and/or communicating wirelessly. WCC devices can be used as standalone devices or integrated into other devices, and a WCC device can include power harvesting circuitry in some configurations. WCC devices can be pre-configured to log events and state, cause actions, send messages, or request data from end nodes. Some configurations enable wireless communication that allows access to the Internet, as well as cloud processing of received data and processing of data that is returned or transmitted.

Retail embodiments are stores or outlets that have a network of sensors to track user interactions with the items for sale. In some embodiments, one or more cameras, motion sensors, and ultrasonic sensors are used to track user interaction with items. Some embodiments include confirmation systems that indicate when an item has been taken.

If the user walks out of the store to an area outside the store, the item is charged to the user’s account. The user may use an online account, and the charging event may result in a debit to the credit card or payment account linked with that account. In one embodiment, the retail store may have one or more local computing systems that interface with the sensors or other systems used to track users or items, identify items, or detect interactions with them. Some embodiments may link the local computer with cloud computing that connects processing with one or more systems and/or servers. This allows processing instructions to determine when users interact with items in a store, when interactions are considered take events, and when an item should be charged.
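The charge-on-exit flow described above might be sketched as follows. Here `charge_fn` stands in for the real payment-service call, and the cart representation is illustrative, not the patent’s implementation.

```python
def on_user_exit(cart_items, exit_confirmed, charge_fn):
    """Charge the linked payment account only once sensors confirm the
    shopper has left the store area with items still in the cart.

    cart_items:     list of (item_name, price) pairs (assumed shape)
    exit_confirmed: True once the exit sensors agree the user left
    charge_fn:      callable standing in for the payment-service API
    Returns the charged total, or 0.0 if no charge was made.
    """
    if not exit_confirmed or not cart_items:
        return 0.0
    total = sum(price for _, price in cart_items)
    charge_fn(total)   # debit the linked credit card / payment account
    return total
```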

“The systems can also process historical interactions from the user or other users to determine whether items will be bought and/or removed from a shelf. A variety of learning processes can be performed by cloud and local computing to make predictions and assumptions. This information can be useful to the shopper; for example, it may provide real-time information about items, customized for each user, such as via custom display screens placed near items or goods. In some embodiments, audio can also be provided to the user to give custom information about an item or good. Another embodiment allows user tracking and item tracking to share sensors (e.g., sensor fusion) to ensure that the determinations are accurate. Multiple sensors can be used, for example, to determine whether a user actually took an item off the shelf: some may be weight sensors, others image sensors (e.g., cameras), and others motion sensors. Many combinations of sensors can be fused. Some embodiments use weighting to determine which sensors are the most crucial for a particular interaction; other embodiments use weighting to determine which sensors provide the most reliable data and which may be less important. Because of the different ways users interact with shelves, the weighting can be dynamic.

“For instance, users might differ in height and size, and their methods of holding items can differ. Users may also be standing at different angles or heights relative to shelves, which could affect the way items are held and moved. Other situations may require multiple sensors to identify which user is actually handling the item or good. This is important, as it ensures that the item’s assignment and take event are verified to the right user to avoid false positives. An audio indicator can be provided to a user when an item is added to the user’s virtual shopping cart. The audio could be provided by a speaker at the store, or from the user’s device, such as a smartwatch, a phone, or another wearable device like glasses.

“In the various embodiments herein, some devices can be powered devices and others may harvest energy. Many examples will be given regarding wireless coded communication (WCC) devices that may act as sensors. The data may be captured by WCC devices and sent to a network for processing.

“FIG. 1A shows an example of a user shopping in a store in accordance with one embodiment of this invention. Store shelves can be found in different locations within the store, and a plurality of sensors are installed on the store shelves and in areas around them. These sensors can generate sensor output that can be processed to determine when a take event occurred for an item stored on the shelf. The user is holding an item that was previously placed on the shelf; the take event identifies when the user removed or interacted with an item on the shelf. The user appears to be holding the item in his hands and reading the label. In one optional configuration, an ID, code, RFID tag, or WCC device is integrated with the product.

This identifying information can be found on the product’s label and used by one or more detectors to track the product in the store or when it leaves the store. In one embodiment, these codes can be used to track when products leave the store using RFID sensors or other tracking methods. Another embodiment uses the codes to identify when products are placed in a shopping cart or shopping basket; detectors can be installed on the shopping basket or physical shopping cart to identify which items are inside. This can be helpful when other shoppers have selected items from the store shelves. One embodiment allows the user to pair an electronic device to the physical shopping cart to track when items are added to or removed from the cart.

“In some embodiments, the detected interaction may be associated with the user touching, holding, or reading the product label. If the user returns the product to the shelf, a return event is also detected. A plurality of proximity sensors may be placed near different rows of shelves, as illustrated. Other embodiments allow proximity sensors to detect items in multiple rows. Further embodiments can include motion sensors located near or directed toward the shelves. These sensors detect when an item is being moved, or any interactions between the object and the user, such as the hand. Weight sensors may also be embedded into shelf locations to determine when specific items were removed. In some embodiments, weight sensors are provided at each place where items are placed. Other embodiments place weight sensors in areas that sense weight for multiple items or products.
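The weight-sensor approach above can be pictured as a simple threshold check on successive shelf readings. This is an illustrative sketch only: the function name, the assumed per-unit item weight, and the noise tolerance are hypothetical values, not taken from the patent.

```python
# Sketch: classify a shelf weight change as a take event, a return event,
# or noise, assuming the per-unit weight of the stocked item is known.

ITEM_WEIGHT_G = 410.0   # assumed unit weight of the stocked item (hypothetical)
TOLERANCE_G = 25.0      # sensor noise tolerance (hypothetical)

def detect_event(prev_weight_g, curr_weight_g):
    """Return "take", "return", or "none" for a weight change."""
    delta = curr_weight_g - prev_weight_g
    if abs(delta + ITEM_WEIGHT_G) <= TOLERANCE_G:
        return "take"      # shelf weight dropped by roughly one unit
    if abs(delta - ITEM_WEIGHT_G) <= TOLERANCE_G:
        return "return"    # shelf weight rose by roughly one unit
    return "none"          # within noise, or an ambiguous change
```

In practice the output of such a check would be only one input among several, to be fused with camera and proximity data as described elsewhere in this document.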

Further, the store floor can be fitted with different types of sensors. These can include sensors for weight, walking speed, common routes taken during shopping events, historical motion patterns, and so forth. Floor sensors can also be used to determine the number of users present at locations (e.g., in stores). Heat sensors can detect when customers are within a certain range of detectors in the store. These sensors can be embedded into objects in the store, into the ceiling, into the floors, into shelves, or into other areas. Sensors can also be used to determine the time a user spends standing, walking, or interfacing with shelves or other items.

As illustrated above, many cameras can be integrated into the store and placed within it, including side cameras. These cameras can track the user’s movement and proximity to shelves, and identify when the user interacts with products in the store. Overhead cameras can be placed at different locations within the store to track the movement of customers. Overhead cameras can be used to determine the shape of the user’s body and to detect when a limb extends from it toward the shelves.

This is useful for determining when a specific user is reaching for products, and for distinguishing other users who may be close by. User identification can be used to distinguish the shopping user from a person who is merely accompanying them. For example, a parent may be shopping with children, or a friend may be shopping with friends. Motion detection, along with skeletal and/or limb detection and tracking, are useful tools to determine users’ actions, interactions, locations, and movements. Some embodiments use depth-sensing cameras to detect when the user’s hands touch specific areas or volumes, and/or to determine the user’s size relative to other users. The user may be shopping in the store while carrying a smartphone in his pocket.

“The user could also be wearing smart glasses or a smartwatch. The user may log in to an application to shop at the store, which may allow the user to identify themselves to one or more sensors. Some embodiments allow items to be added to the user’s electronic shopping list as they shop for items in the store. Items verified to have been removed from shelves can be added to the user’s electronic shopping list. The system can also determine when the user leaves the store or moves out of an area, which confirms that the item was purchased. Sensors can detect when the smartphone or other smart device has left the store and determine whether the item was taken while it was in the user’s shopping cart. The electronic transaction is processed and the items are charged to the user’s account, which may be linked to a payment service.

“In certain configurations, sensors or cameras that can identify faces or detect gazes can be installed throughout the store. This embodiment makes it possible to identify when users are looking at particular products and to provide information back to them that is relevant to those products. Audio output can be provided locally, for example, directed to the user’s head so that the user can listen. If the user is viewing a food-related product, information can be sent back in real time to indicate whether the item is suitable for the user’s needs. Other embodiments allow augmented information to be presented to smart glasses. This information may relate to the product being handled or interacted with. Other embodiments use gaze detection and/or facial detection to detect where the user is looking in the store. This information can be used by the store’s processing logic to determine whether certain areas are getting enough traffic.

This information can be used to optimize the placement of goods in the store. Some embodiments of the store may be equipped with automatically moving shelves that adjust themselves according to user viewing patterns, which can help optimize sales. Other embodiments allow the store to provide information that facilitates the movement of shelves and items, which can help increase sales. The store owner can use browse information for various details to optimize placement, stocking, and/or removal of items from shelves. Processing can be used in some instances to determine when particular store shelves or shelf levels are receiving more viewing. Based on the amount of interaction that users have with the products, this information can be used to give priority to certain products. This information can be shared with regional processing systems, which can then share it with multiple stores within a chain.

“The multiple sensors that are scattered and integrated throughout the store may be collecting information about the user’s presence and interactions with specific items. Machine learning systems can be integrated to gather information about the sensors and user interactions. This can be used to prioritize or select certain sensors in order to make assumptions. For example, the assumptions can be used to determine when an item was removed from a shelf or when it has been returned to the shelf. Another embodiment uses the assumptions to determine the types of interactions taking place with particular items on shelves.

“FIG. 1B shows the various processing entities and sensors, which can produce different types of data. They can also be optimized to track items in a specific environment, such as a store environment. This can help to identify and purchase the right items. It should be understood that different outputs can be filtered or identified based on the type and importance of the interaction being tracked. Deep learning can be used to determine which sensor outputs should be chosen and which should be ignored, depending on the type of interaction. Machine learning can also be used to give more significance to certain sensors and decrease the significance of others, in order to increase the confidence level in selecting the right combination of sensors to identify specific interactions within a store.
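One simple way to picture giving more significance to certain sensors is a weighted average of per-sensor confidence scores. The weights below stand in for what a trained model might learn; all names, values, and the threshold are assumptions for illustration, not the patent’s design.

```python
# Sketch: fuse per-sensor confidence scores (each in [0, 1]) into one
# overall confidence that a take event occurred, using learned weights.

def fused_confidence(sensor_scores, sensor_weights):
    """Weighted average of the per-sensor confidence scores."""
    total = sum(sensor_weights[name] for name in sensor_scores)
    return sum(score * sensor_weights[name]
               for name, score in sensor_scores.items()) / total

# Hypothetical weights and one moment's sensor readings:
weights = {"weight_sensor": 0.5, "camera": 0.3, "proximity": 0.2}
scores = {"weight_sensor": 0.9, "camera": 0.8, "proximity": 0.4}

confidence = fused_confidence(scores, weights)  # 0.5*0.9 + 0.3*0.8 + 0.2*0.4
is_take = confidence > 0.7                      # hypothetical decision threshold
```

A real system would learn both the weights and the threshold from labeled interaction data rather than hard-coding them.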

“The various examples shown in FIG. 1B are only meant to be examples. It should be understood that not all of them may be used, and that others not specifically identified can also be used. The outputs and communications can be processed via a network, which can also be in communication with different types of compute entities. Compute entities may include local computing, which can include WCC devices, WCC groups, DLC devices, and other processing entities. The network can also provide access to cloud computing, which can be used to communicate between user accounts of different users who are accessing applications that enable shopping in real-world physical stores.

With this in mind, the following processes and functionality will now be described. The store can provide guidance to the customer by tracking their position in the store and giving them information. These data can be sent to the smart device. Another embodiment uses audio steering to provide information back to the user. This allows the user to be tracked and followed as they move around the store.

Audio listening can be integrated into the features of the store. Audio listening can be used to identify when a user is near or proximate to specific items, or when a user has questions about a product. Customized responses can also be provided when the user is present. Artificial intelligence agents may be used to respond to user questions and audio. Camera processing is used in some instances to capture images from various locations within the store. Images can be taken of items on shelves or of the user walking through the store. There are many uses for image processing, and different types of cameras can be used to perform different types of sensing. Motion processing can also take place, using the output of motion sensors. Motion sensors can be used to determine when items are being touched in a store. The motion sensors can also be used to track the movements of the user in the store. Other embodiments use motion sensors to track the movements of objects or items in the store, as well as multiple users passing through the store.

Eye-gaze processing can be used to determine when customers are looking at particular items in the store. This data can be used to give additional information or to personalize information for the user based on predicted or detected interests. Weight sensors can be used to determine when items were taken from store shelves, and calculations can then be made to determine when items are returned to store shelves. If an item is not in its designated place, the weight sensor can indicate that the item has been misplaced. The store manager or store owner can be notified about the misplaced item to help identify it and return it to its correct location.

There are many types of sensors that can be used to track the user. Tracking the user can include movement of items within the store, purchase of multiple items in the store, and movement of shopping carts. There are also sensors that detect exit from or entry to different areas of the store. Gesture processing can be integrated into various sensors in the store to detect interactions, or the lack thereof, with particular products. Certain gestures may indicate the user’s liking or disliking of a product.

“For example, the gesture could be a hand waving toward a product, eye gazing at the product combined with a gesture, or pointing to it. Shopping carts may also be integrated with the network for communication. The shopping carts may include electronic shopping carts, which may also be linked to the user account. In another embodiment, the shopping cart could also include a physical shopping basket. Some embodiments allow for interactions between the electronic shopping cart and the physical shopping cart, to first verify that items have been placed in or taken out of the shopping cart. IoT devices can also be integrated into the network, as shown. IoT devices can take many forms and can be used to gather information inside or outside the store, or even on products. WCCs can also function as IoT devices and may be embedded on various surfaces, objects, or other items that the user interacts with in the store.

“Machine learning can also be integrated. It can be used to communicate with the different sensors, classifying data and generating models, and for determining when specific sensor data should not be used, or for promoting certain actions, motions, movements, detections, or tracking. This document contains many examples of machine learning. It should be understood that machine learning can be used to improve the detection of products on store shelves and to predict when the user will want to buy a new item. A home refrigerator item process is also shown, which may include data from a refrigerator at home. Home refrigerators may have sensors or image capture devices that can identify which items are missing or need to be bought. This information can be used to populate a user’s shopping list or to suggest shopping lists, and can also be passed to the user when they are in the store. Shelf displays can be integrated into different shelves in the store.

Displays can be customized for each user standing in front of the shelves. This information can be personalized with data from the cloud, other digital accounts of the user, or accounts linked through the account used for shopping. Proximity sensing devices can be used, and this processing can be integrated with other sensors, as explained above. Audio steering is a method of directing sound to specific users. Biometric processing can be used to identify specific users entering the store, or to determine where they are located in the store. Without the need for a code or phone number, biometric processing can identify the person entering the store. Biometric processing can also identify the user by asking the user to place a finger, eye, or other biometric feature in front of a sensor.

“Some embodiments of biometric sensing systems may be passive, meaning that the user can be identified without needing to physically go to a kiosk or sensor. To allow payments to be made, payment systems can be connected to the network. Payments can be made using services like PayPal, Visa, banking apps, or other electronic payment systems. The network can be linked to user accounts and user profiles, which allow the user access. A user account can be linked to the application used by the user while shopping in the store. User profile information can include historical data, inferred and explicit preferences, shopping lists, social friends, activities, and any other information the user consents to being included in the profile.

The user’s calendar can be accessed to view information such as activities, which may contain information about goods or products that the user might need. This information can be filled in or given to the user while they are in the store. It should be understood that the store doesn’t necessarily need to be a grocery store; it can provide any type of goods, services, products, or items. The network can also be used to pair smart devices with each other, such that they can detect when the user is present in the store. Smart devices include smartphones, smartwatches, and smart glasses, as well as wearable devices such as IoT and WCC devices. In some stores, ultrasonic sensors can be embedded to detect when items or users are moved, patterns of motion, and other information.

“FIG. 1C is an example of machine learning. A machine learning model can be trained with data from many sources, including shopping accounts or shoppers. Data may be sourced from the real world, or manipulated to represent the desired features for a specific label classification. FIG. 1B shows how the model can be trained or retrained using feedback from actual shopping activity. Here, corrections and confirmation labels can be fed back to the model. These corrections and confirmation labels can be obtained from users through their implied or explicit acceptance (confirmation) of take events or other classifications, or through flagging of incorrect system classifications (e.g., a correction such as an incorrect item found in the basket).

“In other words, when a human or another system finds a state other than the one determined by the machine learning algorithm, that source can override the output classification provided by the machine learning model, and the data label for the feature set is corrected. In one configuration, the model is retrained with the correct classification for the feature set that produced the incorrect output classification. This helps the model learn, over time, how to classify tracked elements to the correct state. This document includes information about how to classify take and return events for particular items from shelves. It also provides guidance and feedback on how to link the classification change to events, rewards, incentives, and sales.
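The correction-feedback loop described above might be pictured as follows, with a toy nearest-centroid classifier standing in for whatever model the system actually uses (an assumption for illustration; all class and method names are hypothetical):

```python
# Sketch: store labeled feature vectors, and when a classification is
# flagged as wrong, relabel the sample so later predictions improve.

from collections import defaultdict

class FeedbackClassifier:
    """Toy nearest-centroid model that accepts correction feedback."""

    def __init__(self):
        self.samples = defaultdict(list)  # label -> list of feature vectors

    def train(self, features, label):
        self.samples[label].append(features)

    def correct(self, features, wrong_label, right_label):
        # A human or another system flagged this sample: relabel it, so
        # the model is effectively retrained with the fixed data label.
        if features in self.samples[wrong_label]:
            self.samples[wrong_label].remove(features)
        self.train(features, right_label)

    def predict(self, features):
        def dist(label):
            pts = self.samples[label]
            centroid = [sum(vals) / len(pts) for vals in zip(*pts)]
            return sum((a - b) ** 2 for a, b in zip(features, centroid))
        labels = [lbl for lbl, pts in self.samples.items() if pts]
        return min(labels, key=dist)
```

A production system would use a proper deep learning model and batch retraining, but the loop shape — predict, receive a confirmation or correction, fold the corrected label back into the training data — is the same.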

“FIG. 1D shows an example flowchart that can be used to track the purchase of an item selected by a user in a cashier-less transaction. This is an example process; it should be understood that different elements and steps can be added or removed depending on the particular configuration. Operation 10 identifies the presence of multiple items on one or more shelves. Sensors can be used to identify the presence of items, as explained above.

The sensors can be either image or non-image sensors, and can vary depending on the item configuration. Operation 12 identifies the presence of a user in the store. A user’s presence in the store can be established by the user entering the store and being identified. A smart device allows the user to check in and can identify the user. Another embodiment allows the user to skip checking in; if the transaction is cashier-less, checking out can be as simple as walking out of the store. Additionally, multiple users can be identified within the store at once in order to identify the user.

“In operation 14, interactions between the user and an item are identified. Multiple sensors can be used to identify the interaction, for example by tracking the user’s eyes or tracking the item. In operation 18, the item is added to the user’s shopping list. Adding the item to the shopping list effectively adds it to the user’s shopping cart. In one embodiment, the item remains in the shopping cart until the user decides to leave the store, at which point the item is verified and added to the payment or payment confirmation.

“For example, in operation 20, it is determined whether the user has left the store or an area of the store in which the user can purchase the item. This can be done by following the user’s movements throughout the store, or by tracking the user moving through the store from entry to exit. Another way to track the user leaving a store or area is to use signals from, or tracking of, the user’s device, such as a smartphone, smartwatch, wearable device, or IoT device. Some embodiments attach RFID tags to items to track when they are moved around the store, exit the store, or are moved to other areas.

These determinations can then be analyzed using confidence scores, machine learning, artificial intelligence, and neural networks. In operation 22, the user is charged for the item in their shopping cart once operation 20 determines that the item has been removed from the area or store. The user is charged in cashier-less form, meaning that they do not need to go to a cashier or use an automated cashier system.
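Operations 14 through 22 of FIG. 1D could be sketched as a small event-driven loop. The event names and payload shape below are hypothetical stand-ins for what the store’s sensor pipeline would emit:

```python
# Sketch: drive a cashier-less transaction from a stream of sensor events.
# "interaction_take"/"interaction_return" map to operations 14/18, and
# "exit" triggers the cashier-less charge of operation 22.

def cashierless_flow(events):
    cart = []
    for kind, payload in events:
        if kind == "interaction_take":        # take detected -> add to cart
            cart.append(payload)
        elif kind == "interaction_return":    # return detected -> remove
            if payload in cart:
                cart.remove(payload)
        elif kind == "exit":                  # user left the store/area
            return {"charged": cart, "total": sum(p["price"] for p in cart)}
    return {"charged": [], "total": 0.0}      # user never exited

events = [
    ("interaction_take", {"sku": "cereal-01", "price": 4.99}),
    ("interaction_take", {"sku": "milk-02", "price": 2.49}),
    ("interaction_return", {"sku": "milk-02", "price": 2.49}),
    ("exit", None),
]
receipt = cashierless_flow(events)
```

In the real system each event would itself be the output of the confidence-scored sensor fusion described above, rather than a trusted literal.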

“FIG. 1E shows another example method. Operation 40 involves a server receiving sensor data about items on the store shelves. The server could be located near the store or far away. The server may be a process executed by one or multiple applications. A cloud processing system might use a data center with one or more computers to process transactions related to tracking items on store shelves.

“In operation 42, the server can receive interaction data for an item on the store shelf from the user. The interaction data is used to identify the type of item. It could be cereal from a specific brand, a package of a particular type, a good, or any other product or item available for sale in a store. As mentioned, the item doesn’t have to be a grocery item; any retail item can be tracked using one or more sensors. The interaction data can identify the item type. Certain items can be associated with specific locations in the store or on the shelf. Milk, for example, may be kept on refrigerated shelves.

Dry goods can be stored on other types of shelves. Produce can be stored in different areas or shelves. Pants and shirts can be stored in different kinds of shelves or compartments in a store, and electronic devices can be stored or organized in different locations within the store. Different types of sensors can be used to track interaction data and identify the items associated with them. One embodiment allows the user to add the item to an electronic shopping cart, which is tied to a user account that allows a cashier-less purchase transaction to be processed. When the user interacts with the item, it may be removed from the shelf.

The item can be carried in a physical shopping cart, a bag, or any other means of removing it from the shelf. An electronic shopping cart is an electronic list that can be tallied and calculated to determine the items selected and taken by the user, with associated or identified prices. An application that the user executes on a mobile device can be associated with the electronic shopping cart. Alternatively, the electronic shopping cart can simply be tracked and processed by a server application for the user’s account. In operation 44, the server receives data indicating that the user has left an area of the store and that the user purchased the item. This indicates that the user has left an area in which he or she might have returned the item and has simply left the retail location.

To determine whether the user has left the store or gone elsewhere, the user can be tracked in a variety of ways. Tracking may include following the user as they move around the store or interact with items. Tags or codes on the items themselves can also be tracked to determine when the user leaves the store, which can help identify whether the user is leaving or returning. After confirming that the user has exited the store or left the area, operation 46 processes an electronic charge with the payment service. An electronic charge may include sending information or a signal to the payment service in order to debit the user’s billing account.

“With regard to FIG. 1E, it should be understood that processing can be performed by any number of computers. Other embodiments allow processing to occur on or be handled by WCC devices, which can be local or remote to the store, or personal to the user.

“In various embodiments, a smart retail outlet can classify the shopping activity of one or several shoppers into one or multiple categories. If a user is determined to fit in any one of these categories, or to change category, an action can then be taken to respond dynamically to that state. The store can track the items taken into the shopper’s possession, detect any returns to the shelves, and identify items misplaced on the wrong shelves. One embodiment uses machine learning algorithms to predict the shopping list of a customer. One configuration allows a retail outlet to use a combination of input data and machine learning models to classify shopping behavior. This allows a family or group to take and return items on one shopping account. The present invention also allows for the identification of each shopper who takes or returns items from the shelf, even when multiple shoppers are nearby and interfacing with the shelf simultaneously.

Summary for “Machine learning methods, systems and methods for managing retail store processes that involve the automatic gathering items”

“The inventions described herein concern communication devices and methods for using digital detection, sensing, and tracking systems to determine interactions with retail items in order to facilitate cashier-less transactions. One embodiment provides a method to track items in a store and detect interactions using one or more sensors. The interactions can be classified and identified to determine whether the user intends to buy the item. A smart retail outlet can classify the shopping activity of one or several shoppers into one or multiple categories. If a user is determined to fit in one of these categories, or to change category, an action can then be taken to respond dynamically to the status of the shopper. The store can track the items taken into the shopper’s possession, detect any returns to the shelves, and identify items misplaced on the wrong shelves. One embodiment uses machine learning algorithms to predict the shopping list of a customer. One configuration allows a retail outlet to use a combination of input data and machine learning models to classify shopping behavior. This allows a family or group to take and return items on one shopping account. The present invention also allows for the identification of each shopper who takes or returns items from the shelf, even when multiple shoppers are nearby and interfacing with the shelf simultaneously.

“In some embodiments, processing the tracking data includes accessing deep learning models using feature inputs of at least user location and shopping history. The feature inputs include time-based actions characterizing the user’s past shopping history, and global positioning data that the user has enabled, allowing identification of the user’s location based on the location data received from the portable device.

“In some embodiments, the portable device’s current route is a predicted or actual path that the user is expected to take, or is taking, from one location to another. The predicted path is calculated using global positioning system (GPS) data, dead reckoning, or GPS combined with machine learning, to classify whether the portable device’s current route is still heading to the store.
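A rough sketch of the “still heading to the store” classification from successive GPS fixes follows. The haversine formula is the standard great-circle distance; the rule that the route is still heading to the store when the distance shrinks across recent fixes is a simplifying assumption, not the patent’s stated method.

```python
# Sketch: decide from recent GPS fixes whether a device is still
# approaching a store located at (store_lat, store_lon).

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def heading_to_store(fixes, store_lat, store_lon):
    """True if the distance to the store decreased across the GPS fixes."""
    dists = [haversine_km(lat, lon, store_lat, store_lon) for lat, lon in fixes]
    return all(later < earlier for earlier, later in zip(dists, dists[1:]))
```

A deployed classifier would also consider heading, road network, speed, and the learned habits mentioned above, rather than raw distance alone.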

“In some embodiments, the instructions for creating the task to pre-gather the items include an identification of the items from the shopping list and a time frame within which the user is expected to enter the store or receive the one or more items.

“In some embodiments, the sending of the instructions is triggered based on (a) a calculation of the expected time needed to gather the one or more items and (b) a forecasted future time at which the user will either arrive at the store or need the one or more items ready for pickup.”
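The (a)/(b) trigger could look like the following; the function name, the safety buffer, and all durations are illustrative assumptions:

```python
# Sketch: send the pre-gather instructions when the forecasted arrival is
# close enough that gathering must begin now to finish in time.

from datetime import datetime, timedelta

def should_send_instructions(now, eta, gather_minutes, buffer_minutes=5):
    """Trigger when (ETA - now) <= expected gather time + a safety buffer."""
    lead_time = timedelta(minutes=gather_minutes + buffer_minutes)
    return eta - now <= lead_time

now = datetime(2024, 1, 1, 12, 0)
eta = datetime(2024, 1, 1, 12, 20)   # forecasted arrival at the store
send = should_send_instructions(now, eta, gather_minutes=18)
```

Here the 20-minute horizon falls inside the 18 + 5 minute lead time, so the instructions would be sent; an arrival an hour out would not yet trigger them.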

“In some embodiments, the current route is based on processed global positioning system (GPS) data from the portable device, and the user is determined to remain headed to the store if at least one processing entity associated with the store has not received data indicating that the user is not planning to visit, is not likely to visit, or plans to visit the store at a different time.”

“In some embodiments, the indication is a payload or signal sent to the user’s portable device associated with the user account, or an output on an in-store device or from the store, indicating that the one or more items are ready for pickup.”

“In some embodiments, the indication is a state received by at least one said processing entity in connection with a status and a description of one or more of the items that have been gathered using the shopping list.”

“In some embodiments, the indication is an actual receipt for the pre-gathered items.”

“In some cases, items ready for pickup can be identified to the user upon entering the store or after arriving at the store.”

“In some embodiments, items ready for pickup can be assigned to the user immediately upon entry to the store, after arriving at the store, or before exiting the store.”

“In some embodiments, pre-collected items are items pre-designated on the user’s shopping list, or a subset thereof, or items the user is predicted to purchase based on analysis of the user’s shopping history or data from at least one processing entity capable of keeping track of consumption of items associated with the user.”

“In some embodiments, an item that is already gathered, or is scheduled to be gathered, is removed from the shopping list or marked on the shopping list as being gathered.”

“In some embodiments, the user is determined to remain heading to the store based upon receipt of data from the user’s mobile device or an associated processing entity that confirms or predicts that the user is still headed to, or intends to go to, the store.”

“In some embodiments, at least one of the identified items on the shopping list is automatically identified using a deep learning model that predicts whether the one or more items will be purchased by the user.”

“In some embodiments, the store has tracking sensors. These tracking sensors are in communication with at least one of the processing entities to monitor interaction data between the user and one or more items in the store. If a processing entity determines that the user has taken an item that wasn’t already pre-gathered for the user, it sends a guidance message to the portable device, to a speaker in the store, or to a display or other store asset proximate to the user.
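The guidance-message check above might be sketched like this; the device-selection rule, message text, and all names are placeholder assumptions:

```python
# Sketch: when a shopper takes an item that was not in the pre-gathered
# order, pick a nearby output device and compose a guidance message.

def guidance_for_take(taken_sku, pregathered_skus, nearby_devices):
    """Return (device, message) if guidance is needed, else None."""
    if taken_sku in pregathered_skus:
        return None  # item is already in the pre-gathered package
    # Prefer a store asset near the shopper; fall back to their own device.
    device = nearby_devices[0] if nearby_devices else "portable_device"
    message = (f"{taken_sku} was not in your pre-gathered order; "
               "it has been added to your cart.")
    return device, message
```

The routing decision (speaker vs. display vs. portable device) would in practice depend on the tracked location data described in this paragraph.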

“In some embodiments, the store has tracking sensors. The tracking sensors are in communication with at least one of the processing entities to monitor the user’s location in the store. At least one processing entity sends a wireless payload to a store clerk indicating the user’s current location in the store. In this way, the package of one or more ordered and gathered items is delivered by the clerk to the user at the user’s current location in the store.

“In some embodiments, the store can be further configured with tracking sensors, said tracking sensors being in communication with at least one said processing entity for monitoring the location of the user in the store, wherein said processing entity provides the user’s in-store location to a robot. The robot has coupled thereto a package containing said one or more items ordered and gathered from the store, and the robot delivers the items to the current location of the user in the store.”

“In some embodiments, the store has tracking sensors. These sensors are in communication with at least one of the processing entities to monitor the user’s changing location inside the store and to process interaction data as input features to a machine learning model. A processing entity can also access the user’s account to enable selection of additional items, the selection being based in part upon the user’s path through the store.

“In some instances, the selection of items to be pre-gathered is based upon the user’s path through the store, the time the user has allocated for the store, the user’s preferences, or a promotion for the store.”

“In some embodiments, the store can be further configured with tracking sensors, said tracking sensors being in communication with at least one said processing entity for monitoring a changing position of the user inside the store and providing navigation assistance to an asset or portable device of that user. The guidance assists the user in traversing the store along a navigation route through aisles that contain one or more items on the shopping list. One of said processing entities may further access the user account to select additional items, the selection being based in part upon the user’s movement within the store, the user’s location, an amount of time, or the location of other users.”

“In some embodiments, each of the one or more processing entities is defined by a computer, a server, a Lambda function, a process, a service, a container, a container process, a cloud processing system, or a smartphone.

“In some embodiments, each of said processing entities is coupled to or interconnected with a network that allows access to storage, data, and instructions.”

“In some embodiments, confirming that the current route of the portable device remains headed to the store includes receiving information from the user account, as provided by the user, that the user is currently headed toward the store or is heading there at a specific time.”

“In one embodiment, a method is provided to process requests for items to be pre-collected from a store. One or more processing entities are used to process the requests. The method includes receiving tracking data from a portable device that is associated with an online account. A shopping list associated with the online store account is used to identify one or more items. The tracking data is processed to determine the current route of the device to the store. Instructions are sent to create an order for pre-gathering one or more items from the store; the instructions are sent when the device’s current route to the store is confirmed. The method also involves receiving an indication that the one or more items have been gathered, and sending a notification to the user’s online account that the ordered package has been gathered and is ready for pick-up at the store.
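The route-confirmation step above can be sketched in code. This is a minimal, illustrative heuristic and not the claimed implementation: the device is treated as en route when its last few location fixes move monotonically closer to the store, and the hypothetical `send` callback stands in for whatever channel delivers the pre-gather instructions.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two coordinates, in kilometers.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_headed_to_store(track, store, min_fixes=3):
    """Heuristic: the device is 'en route' when its last few location
    fixes move monotonically closer to the store's coordinates."""
    if len(track) < min_fixes:
        return False
    dists = [haversine_km(lat, lon, *store) for lat, lon in track[-min_fixes:]]
    return all(d2 < d1 for d1, d2 in zip(dists, dists[1:]))

def maybe_dispatch_order(track, store, shopping_list, send):
    # Dispatch the pre-gather instructions only once the route is confirmed.
    if is_headed_to_store(track, store):
        send({"action": "pre_gather", "items": shopping_list})
        return True
    return False
```

A production system would use map-matched routes and ETA estimates rather than raw fixes; the monotone-distance test is only the simplest stand-in.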

“One embodiment of a method involves identifying a shopper at a store using sensors and associating that shopper with a shopping account. One or more sensors are used to monitor the shopper at the store. The output of these sensors is input to one or more deep learning models, which generate classification data for a given scenario. The method involves receiving natural language voice input from the shopper that contains words. The voice input is processed along with the classification data to produce a response to the words said by the shopper, where the response is in the context of the scenario.
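A toy sketch of how scenario classification and voice input might combine into a context-dependent response. The rule-based `classify_scenario` is only a stand-in for the deep learning models the text describes, and all feature names and reply strings here are hypothetical.

```python
def classify_scenario(sensor_features):
    """Stand-in for the deep-learning models: map sensor-derived features
    to a scenario label. A trivial rule here; a trained network in practice."""
    if sensor_features.get("near_shelf") and sensor_features.get("holding_item"):
        return "examining_item"
    if sensor_features.get("near_shelf"):
        return "browsing"
    return "walking"

def respond(voice_words, scenario, item=None):
    # The same words yield different answers depending on the scenario,
    # which is what "response in the context of the scenario" amounts to.
    text = " ".join(voice_words).lower()
    if "price" in text:
        if scenario == "examining_item" and item:
            return f"The {item} you are holding is $3.99."
        return "Which item would you like the price of?"
    return "How can I help you with your shopping?"
```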

“In some embodiments, the one or more sensors used to monitor the shopper are linked with the store and coordinated in order to identify the shopper’s movement while in the store and the movement of the shopper relative to one or more of the items in the store. The scenario is associated with actions taken by the shopper with respect to said items and said movements identified by the one or more sensors, and the response is within the context of the scenario.”

“In some embodiments, the one or more sensors used to monitor the shopper are linked with the store and coordinated in order to identify interaction of said shopper with said one or more items. The scenario is associated with said interactions by the shopper with one of said items, and said response is within the context of interactions that relate to the scenario.”

“In some embodiments, the response changes dynamically as the shopper interacts with said one or more items, because the shopper’s movement around the store places the shopper closer to or farther from specific items.

“In some embodiments, said response includes guidance information that directs the shopper to move toward a particular item or group of items. In some cases, beamforming is used to capture audio.”

“In some embodiments, the guidance includes one or more directions that the shopper should follow to move to another area within the store where the item or group of items is located.”

“In some embodiments, the classification data contains an indication that the shopper took an item from a store shelf or returned the item to the shelf.”

“In some embodiments, the devices include a wireless communication chip as well as an integrated power generating or power delivering device. In one embodiment, these devices are called wireless coded communication (WCC) devices. These devices can harness power to enable or cause activation of a communication system to transmit data. The devices can be pre-configured to log events and state, cause actions, send messages, or request data from end nodes. Some embodiments enable wireless communication, which allows access to the Internet, cloud processing of data received, and processing of data that is returned or transmitted.

A WCC device, as the term is used herein, is one that is equipped with wireless transmission capabilities (e.g., a transmitter or transceiver, a Wi-Fi chip, a Bluetooth chip, or a radio communication chip) and a power supply or power pump.

“In one embodiment, the method is for tracking items in a physical store for cashier-less purchase transactions. The method involves detecting a wireless coded communication (WCC) portable device in the physical store. The WCC device can be associated with an online account of a shopper. A server then receives sensor data about the WCC device’s location in the physical store and its proximity to items. The server receives interaction data for an interaction by the shopper with an item on a shelf of the physical store, using one or more sensors in the physical store and the WCC device, to determine if the item is one that can be purchased. The interaction data can be used to identify the type of item and to add it to the electronic shopping cart. The method also includes the processing by the server of an electronic charge to a payment service used by the shopper for said item. The WCC device, or any portable device, can run an application for cashier-less transactions.

A power pump device can be configured, in one configuration, to receive force or movement inputs from an object or user. A direct force can be an intentional input from the user, for example, pushing a button or moving a slider. An indirect force is a force that the user does not intend as an input; such a force is created when the user moves, lifts, shifts, or otherwise changes the orientation or position of a physical object.

“A physical object can be, for example, a door, where the intent of the user is simply to close the door. Some embodiments may use a physical object such as a shelf or a retail item. In one embodiment, the indirect force of closing a door is transferred to a WCC device. The WCC device receives the force even though the user did not intend it as an input; that is, input to the WCC device can be received indirectly, including by a WCC device that is powered by a battery. This input is used to activate the WCC device’s processing and to transmit or request data wirelessly via a network.

“One embodiment provides a method of tracking items in a store for cashier-less purchase transactions. The method involves receiving sensor data about items on shelves in the store and receiving interaction data for an interaction by a user with an item on a shelf in the store. Once a take event has been confirmed, the interaction data can be used to identify the type of the item and add it to an electronic shopping cart of a user account. The account is used to process the cashier-less transactions. In this embodiment, the server receives data indicating that said user left an area of the store, which is treated as evidence that the user purchased the item. After confirming that the user has left the store, the server can process or instruct processing of an electronic charge to the user’s account.
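The take-event/exit/charge flow described above can be illustrated with a small state sketch. This is a hedged simplification, not the claimed implementation; `charge_fn` is a hypothetical callback standing in for the payment service, and price lookup is reduced to a dictionary.

```python
class CashierlessSession:
    """Minimal cart/charge flow: confirmed take events add items to an
    electronic cart, returns remove them, and leaving the store area
    triggers a single electronic charge for whatever remains."""

    def __init__(self, user_account, charge_fn):
        self.user_account = user_account
        self.charge_fn = charge_fn   # payment-service callback (assumed)
        self.cart = []

    def on_interaction(self, item, event):
        # Only a confirmed 'take' adds the item; a 'return' removes it.
        if event == "take":
            self.cart.append(item)
        elif event == "return" and item in self.cart:
            self.cart.remove(item)

    def on_exit(self, prices):
        # The user leaving the store area is treated as intent to purchase.
        total = sum(prices[i] for i in self.cart)
        if self.cart:
            self.charge_fn(self.user_account, total)
        return total
```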

“In some embodiments, the interaction data is obtained by tracking the volume or area where the item is located. This interaction data can be used to track movement of the item, determine if it has been removed from the shelf, and add the item to an electronic shopping cart.

“In some embodiments, said movements are identified at least in part using image data from a camera. The camera produces image data which is part of the sensor data received by the server.

“In some embodiments, the confidence level in identifying said movement is increased using input from one or more other sensors. Each sensor is given a weighting that assigns a particular sensor more or less significance in said identification of the movement.”
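The per-sensor weighting could, for example, be realized as a weighted average of per-sensor confidences, as in this illustrative sketch. The sensor names, weights, and threshold are invented for the example; real weights would be tuned or learned.

```python
def fused_confidence(readings, weights):
    """Weighted average of per-sensor confidences that a movement
    (e.g., a take event) occurred. Higher-weight sensors dominate."""
    total_w = sum(weights[name] for name in readings)
    return sum(readings[name] * weights[name] for name in readings) / total_w

def detect_event(readings, weights, threshold=0.7):
    # Declare the event only when the fused confidence clears a threshold.
    return fused_confidence(readings, weights) >= threshold
```

Shifting weight between sensors changes the outcome for the same raw readings, which is exactly the dynamic-weighting behavior the text describes.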

“In certain embodiments, the method also includes receipt by a server of data indicating that a user device is present in the store. This data is used to identify the user account for the electronic shopping cart, which allows the user to purchase one or more items in the store using the cashier-less transactions.

“In certain embodiments, the method also includes receipt by a server of tracking data regarding the user’s movement in the store. The movement of the user is indicative of browsing said items on one or more shelves. Tracking includes tracking the hand of the user, which provides the interaction data for the user’s interaction with the item.

“In some embodiments, the method also includes sending, by the server, display information to render on a display screen that is proximate the shelf housing the item the user is interfacing with. Based on the profile information of the user, the display information is tailored for the user.”

“In some embodiments, the method also includes sending, by the server, audio data for output in a region of the store that is proximate the shelf housing the item. The audio data is customized for the user based upon information from the profile associated with the user account.

“In some embodiments, audio data can be output in a directional tracking format. The directional tracking format implements audio steering to direct audio data toward the user’s head. This audio steering concentrates sound for the user and excludes distribution of sound substantially beyond the user’s head.
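Audio steering of this kind is typically done with a phased speaker array: each speaker's output is delayed so the wavefronts arrive at the listener in phase. A minimal sketch of the delay computation, assuming idealized point speakers and a known target position (this is standard beamforming arithmetic, not the patent's specific method):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def steering_delays(speaker_positions, target):
    """Per-speaker delays (seconds) so that sound from a speaker array
    arrives at the target position in phase: the farthest speaker fires
    first (zero delay), the nearest last."""
    dists = [math.dist(p, target) for p in speaker_positions]
    farthest = max(dists)
    # Delay each speaker by its travel-time advantage over the farthest one.
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]
```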

“Some embodiments of the method include receiving, by the server, eye or head gaze information about the user. The user’s interactions with an item are indicated by the eye or head gaze information. The eye and head gaze information is collected to determine the product information that interests the user. The server collects product information relevant to the user, and uses this information to make recommendations for other items and/or offer discounts or promotional information.

“In some embodiments, the method also includes receiving, by the server, first data indicating that there was a take event for the item. Second data indicates that a tag of the item is in communication with the user device. The user’s device verifies that the item was taken and not returned before the electronic charge is processed. The store has at least one sensor that can monitor when the user is leaving the store.

“In some embodiments, the method also includes receiving, by the server, proximity data regarding the user and a second user. The user and the second user are determined to be close to each other near the shelf. The proximity data makes it possible to distinguish between the user and the second user. Cameras are placed in the store to generate part of the sensor data. The cameras produce image data from which the limbs of the user and the second user can be identified and tracked. This data can then be used to track the movement of said limbs and verify that it is the user who is actually interacting with the item.

“In some embodiments, the interaction data contains one or more features that describe the interaction. These features are used as inputs to a model to classify the interaction as either a take event or a return event of the item from the shelf or back to the shelf.
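One simple way to realize such a feature-based take/return classifier is a logistic score over interaction features. The feature names, weights, and bias below are invented for illustration; in practice they would be learned from labeled interactions rather than hand-set.

```python
import math

# Illustrative feature weights; in practice learned from labeled interactions.
WEIGHTS = {"hand_entered_shelf": 1.5, "weight_dropped": 2.0,
           "item_moved_away": 1.8, "item_moved_back": -2.5}
BIAS = -2.0

def score_interaction(features):
    # Logistic score in (0, 1): high means the interaction looks like a take.
    z = BIAS + sum(WEIGHTS[f] * v for f, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def classify_interaction(features):
    # Threshold the score to label the interaction as a take or return event.
    return "take" if score_interaction(features) > 0.5 else "return"
```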

“In some embodiments, classification is provided using the model to assist in predicting the actions of the user or other users. The model refines its classification based on previous interactions, and assists in deciding whether or not to add the item or additional items to the electronic shopping cart of the user or another user.

“In some embodiments, classifying uses the model to generate recommendations to the user (or other users) based on learned probable actions taken by users when they interact with the item(s). The continuous learning process uses inputs from the store and/or other stores that are connected to a shopping system that enables cashier-less transactions.

“In one embodiment, the invention provides a method to track items in a store and process a cashier-less purchase transaction. The method collects sensor data about items on shelves in the store and identifies a user who enters the store. The user has at least one device with wireless communication, and an application installed on the device is associated with a user account. The user’s movements in the store are tracked. Tracked movements include detection of the user’s proximity to a shelf containing an item, and interaction data for the user’s interaction with the item is also detected. The interaction data can be used to identify the type of the item and to add the item to the user’s electronic shopping cart. The method also includes receiving data that indicates when the user leaves an area of the store while the item is in the electronic shopping cart. The user leaving the area is treated as intent to buy the item. One or more sensors are used to confirm that the user has left the designated area. The method then processes an electronic charge to a payment system associated with the user account for the item based upon said confirming.”

“If the WCC device uses a power pump, the force, pressure, or movement received can be transmitted or imparted to a mechanically flexible element or device. One embodiment may use a flexible element that is a piezoelectric device, which produces a voltage when subjected to force or movement. The resulting voltage can be harvested and stored in a storage device. In one embodiment, the storage device is a capacitive storage device (e.g., a capacitor). Other embodiments allow the storage device to be a battery, a cell, or a combination of cells that can recharge. If the storage device is capacitive, it is used as a power source for an integrated circuit of the WCC device.
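Whether one actuation of the power pump can fund a transmission reduces to a capacitor-energy check, E = ½CV², compared against the radio's energy budget. A small sketch; the component values in the comments and test are illustrative only, not taken from the document.

```python
def stored_energy_joules(capacitance_f, voltage_v):
    # Energy stored in a charged capacitor: E = 1/2 * C * V^2.
    return 0.5 * capacitance_f * voltage_v ** 2

def can_transmit(capacitance_f, voltage_v, tx_power_w, tx_time_s):
    """True when one press of the power pump stores enough capacitor
    energy to cover a single radio transmission of the given power
    and duration (losses ignored for simplicity)."""
    return stored_energy_joules(capacitance_f, voltage_v) >= tx_power_w * tx_time_s
```

For example, a 100 µF capacitor charged to 3.3 V holds about 0.54 mJ, enough for a brief low-power burst but not a long transmission, which matches the text's point that higher-power methods run only when more power is available.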

“In one embodiment, an integrated circuit can communicate data, a code, or a message to an end node. The received data can be stored by the end node, or an action can be performed, e.g., based on preprogramming.

“In one embodiment, a wireless coded communication (WCC) device is used. In one configuration, the WCC device is passive in that it is not connected to any active power source, such as a battery or power cord. The WCC device can be activated for a period of time when a force, such as a mechanical force, is applied to its power pump. This allows it to process and transmit coded data to the appropriate end point or end node. A WCC device can be programmed with multiple functions or methods in some embodiments. A pre-activation setting can select the appropriate method or function; in other cases, the method is selected automatically based on available power, so that methods that use more power are performed when more power is available. A payload may be generated by certain embodiments if an input feature, such as a button, switch, slider, or selection control, is coupled to the WCC device. The payload can be sent remotely to an end node, processed locally by the WCC device, or both.

In some cases, functions can be selected and triggered based on the results of processing the payload prior to, during, or after transmission. The WCC device can detect or image an identity or attribute of a user, including a biometric signature or fingerprint. It also allows for the detection or imaging of a scene, object position, QR code, or RFID code, as well as status, temperature, pressure, presence or absence of conditions, vibration, and any signal or source that is coupled to, near, or within the sensing range of a sensor coupled to or integrated with the WCC device.

“In other embodiments, devices, systems, or methods are provided. One example of such a device is a wireless coded communication (WCC) device, which can communicate wirelessly with other devices, such as over a network. A WCC device is a type of internet of things (IOT) device. It can sense, process, send, respond to, and exchange data with other WCC devices, network devices, user devices, and/or other systems over the internet. A WCC device can include a power source to enable low-power usage, such as sending data, requesting data, and/or communicating wirelessly. WCC devices can be used as standalone devices or integrated into other devices. A WCC device can include power harvesting circuitry in some configurations. WCC devices can be pre-configured to log events and state, cause actions, send messages, or request data from end nodes. Some configurations enable wireless communication, which allows access to the Internet, cloud processing of data received, and processing of data that is returned or transmitted.

Retail embodiments are stores or outlets that have a network of sensors to track user interactions with the items for sale. One or more cameras, motion sensors, and ultrasonic sensors are used in some embodiments to track user interaction with items. Some embodiments include confirmation systems that identify when an item has been taken.

If the user walks out of the store, or moves to an area outside the store, the item is charged to the user’s account. An online account may be used by the user, and the charging event may result in a debit to the credit card or payment account linked with the account. In one embodiment, the retail store may have one or more local computing systems that can interface with the sensors or other systems used to track users or items, identify items, or detect interactions with them. Some embodiments may link the local computer with cloud computing that connects processing with one or more systems and/or servers. This allows processing instructions to determine when users interact with items in a store, when interactions are considered take events, and when an item should be charged.

“The systems can also process historical interactions from the user or other users to determine whether items will be bought and/or removed from a shelf. A variety of learning processes can be performed by cloud and local computing to make predictions and assumptions. This information can be useful to the shopper; for example, it may provide real-time information about items, customized for each user. For example, custom display screens can be placed near items or goods. In some embodiments, audio can also be provided to the user to convey custom information about an item or good. Another embodiment allows user tracking and item tracking to share sensors (e.g., sensor fusion) to verify that the determinations are true. Multiple sensors can be used, for example, to determine whether a user actually took an item off the shelf: some may be weight sensors, others may be image sensors (e.g., cameras), and others may be motion sensors. Many sensors can be fused. Some embodiments use weighting to determine which sensors are the most crucial for a particular interaction; other embodiments use weighting to determine which sensors provide the most reliable data and which may be less important. Because of the different ways users interact with shelves, weighting can be dynamic.

“For instance, users might differ in height and size, and their methods of holding items can differ. Users may also stand at different angles or distances relative to shelves, which can affect the way items are held and moved. Other situations may require multiple sensors to identify which user is actually handling the item or good. This is important, as it ensures that the item’s assignment and take event are verified to the right user to avoid false positives. An audio indicator can be provided to users when an item is added to their virtual shopping cart. The audio can be provided by a speaker at the store, or from the user’s device, such as a smartwatch, a phone, or another wearable device, like glasses.

“In the various embodiments herein, some devices can be powered devices and others may harvest energy. Many examples will be given regarding wireless coded communication (WCC) devices that may act as sensors. The data may be captured by WCC devices and sent to a network for processing.

“FIG. 1A shows an example of a user shopping in a store in accordance with one embodiment of this invention. Store shelves can be found in different locations within the store. A plurality of sensors are installed on store shelves and in areas around them. These sensors can generate sensor output that can then be processed to determine when a take event occurred for an item that was stored on the shelf. The user is holding an item that was previously placed on the shelf. The take event identifies when the user removed or interacted with an item on the shelf. The user appears to be holding the item in his hands and reading the label. In one optional configuration, an ID, code, RFID tag, or WCC device is integrated with the product.

This information can be found on the product’s label and used by one or more detectors to track the product in the store or when it leaves. In one embodiment, these codes can be used to track when products leave the store using RFID sensors or other tracking methods. Another embodiment uses codes to identify when products are placed in a shopping cart or shopping basket. Detectors can be installed on the shopping basket or physical shopping cart to identify which items are inside. This can be helpful when other shoppers have selected items from the store shelves. One embodiment allows the user to pair their electronic device with the physical shopping cart to track when items are added to or removed from the cart.

“In some embodiments, the interaction detected may be associated with the user touching, holding, or reading the product label. If the user returns the product to the shelf, a return event is also detected. A plurality of proximity sensors may be placed near different rows of shelves, as illustrated. Other embodiments allow proximity sensors to detect items in multiple rows. Further embodiments can include motion sensors that are located near the shelves or directed toward the shelves. These sensors detect when an item is being moved, or any interactions between the object and the user, such as by the hand. To determine the time that specific items were removed, weight sensors may also be embedded into shelf locations. In some embodiments, weight sensors are provided at each place where items are placed. Other embodiments place weight sensors in areas that have weight sensing capabilities for multiple items or products.

Further, the store floor can be fitted with different types of sensors. Sensors can detect weight, speed, walking velocity, common routes taken during shopping events, historical motion patterns, and so forth. Floor sensors can also be used for determining the number of users present at locations (e.g., in stores). Heat sensors can be used to detect when customers are within a certain range of detectors in the store. These sensors can be embedded into objects in the store, into the ceiling, into the floors, into shelves, or into other areas. Sensors can also be used for determining the time a user spends standing, walking, or interfacing with shelves or other items.

As illustrated, many cameras can be integrated into the store and placed within it, including side cameras. These cameras can track the user’s movement and proximity to shelves, and identify when the user interacts with products in the store. Overhead cameras can be placed at different locations within the store to track the movement of customers. Overhead cameras can also be used to determine the shape of the user’s body and detect when a limb extends from it toward the shelves.

This is useful to determine when a specific user is reaching for products, as distinct from other users that may be close by. User identification can be used to distinguish the person shopping from someone who is just accompanying them; it could be that a parent is shopping with their children, or that a friend is shopping with friends. Motion detection, skeletal and/or limb detection, and tracking are useful tools to determine a user’s actions, interactions, locations, and movements. Some embodiments use depth-sensing cameras to detect when the user’s hands touch specific areas or volumes, and/or to determine the user’s size relative to other users. The user may be shopping in the store and also have a smartphone in his pocket.

“The user could also be wearing smart glasses or a smartwatch. The user may log in to an application to shop at the store, which may allow the user to identify themselves to one or more sensors. Some embodiments add items to the user’s electronic shopping cart as they shop for items in the store; items that have been verified to have been removed from shelves can be added to the user’s electronic shopping cart. The system can also determine when the user leaves the store or moves out of an area, which confirms that the item was purchased. Sensors can detect when the smartphone or other smart device has left the store and determine if the item was taken while it was in the user’s shopping cart. The electronic transaction is then processed and the items are charged to the user’s account, which may be linked to a payment service.

“In certain configurations, sensors or cameras that can identify faces or detect gazes can be installed throughout the store. This embodiment allows the system to identify when users are looking at particular products and to provide back information relevant to those products. Audio output can be provided locally, for example, directed to the user’s head so that the user can listen. If the user is viewing a product that is food-related, information can be sent back in real time to indicate whether the item is suitable for his needs. Other embodiments allow augmented information to be presented to smart glasses. This information may relate to the product being handled or interacted with. Other embodiments use gaze detection and/or facial detection to detect where the user is looking in the store. This information can be used by the store’s processing logic to determine whether certain areas are getting enough traffic.

This information can be used to optimize the placement of goods in the store. Some embodiments of the store may be equipped with automatically moving shelves that can adjust themselves according to user viewing patterns, which can help optimize sales. Other embodiments allow the store to provide information to facilitate the movement of shelves and items, which can help increase sales. The store owner can use browse information to obtain various details to optimize placement, stocking, and/or removal of items from shelves. Processing can be used in some instances to determine when particular store shelves or shelf levels are receiving more viewing. Based on the amount of interaction that users have with the products, this information can be used to give priority to certain products. This information can be shared with regional processing systems, which can then share it with multiple stores within a chain.

“The multiple sensors that are scattered and integrated throughout the store may be collecting information about the user’s presence and interactions with specific items. Machine learning systems can be integrated to gather information about the sensors and user interactions. This can be used to optimize the data, prioritizing or selecting certain sensors in order to make assumptions. For example, the assumptions can be used to determine when an item was removed from a shelf or when it has been returned to the shelf. Another embodiment uses the assumptions to determine the types of interactions that are taking place with particular items on shelves.

“FIG. 1B shows examples of various processing entities and sensors, which can produce different types of data. They can be optimized to track items in a specific environment, such as a store environment, which can help to identify and purchase the right items. Different outputs can be filtered or identified based on the type and importance of the interaction being tracked. Deep learning can be used to determine which outputs of the sensors should be chosen and which should be ignored, depending on the type of interaction. Machine learning can also be used to give more significance to certain sensors and decrease the significance of others, in order to increase the confidence level in selecting the right combination of sensors to identify specific interactions within a store.

“The various examples shown in FIG. 1B are only meant to be examples. It should be understood that not all may be used, and that others not specifically identified can also be used. The outputs and communications can be processed via a network, which can be in communication with different types of compute entities. Compute entities may include local computing, which can include WCC devices, WCC groups, DLC devices, and other processing entities. The network can also provide access to cloud computing, which can be used to communicate between user accounts of different users who are accessing applications that enable shopping in real-world physical stores.

With this in mind, the following processes and functionality will now be described. The store can provide guidance to the customer by tracking their position in the store and giving them information. These data can be sent to the user’s smart device. Another embodiment uses audio steering to provide information back to the user, allowing the user to be tracked and followed as they move around the store.

Audio listening can be integrated into the features of the store. Audio listening can be used to identify when a user is near or proximate to specific items, or when a user has questions about a product. Customized responses can also be provided when the user is present. Artificial intelligence agents may be used to respond to user questions and audio. Camera processing is used in some instances to capture images from various locations within the store; images can be taken of items on shelves or of the user walking through the store. There are many uses for image processing, and different types of cameras can be used to perform different types of sensing. Motion processing can also take place, using the output of motion sensors. Motion sensors can be used to determine when items are being touched in a store, and to track the movements of the user in the store. Other embodiments use the motion sensors to track the movements of objects or items in the store, as well as multiple users who are passing through the store.

Eye gaze processing can be used to determine when customers are looking at particular items in the store. This data can be used to give additional information or personalized information to the user based on predicted or detected interests. Weight sensors can be used to determine when items are taken from store shelves, and calculations can then be made to determine when items have been returned to store shelves. If an item is not in its designated place, the weight sensor can be used to indicate that the item has been misplaced. The store manager or store owner can be notified regarding the misplaced item to help identify the item and return it to its correct location.
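The weight-sensor logic described here — detecting takes, returns, and misplaced items from shelf weight changes — can be sketched as a simple threshold rule. The unit weights and tolerance below are illustrative; a real shelf would calibrate them per product.

```python
def shelf_event(prev_weight_g, new_weight_g, unit_weight_g, tolerance_g=5.0):
    """Interpret a shelf weight change: a drop of roughly one unit weight
    is a take, a matching rise is a return, and any other significant
    change is flagged as a misplaced item."""
    delta = new_weight_g - prev_weight_g
    if abs(delta) <= tolerance_g:
        return "none"        # noise only; no event
    if abs(-delta - unit_weight_g) <= tolerance_g:
        return "take"        # weight fell by about one item
    if abs(delta - unit_weight_g) <= tolerance_g:
        return "return"      # weight rose by about one item
    return "misplaced"       # unexpected change, e.g. wrong item put back
```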

Many types of sensors can be used to track the user. Tracking can include movement of items within the store, purchase of multiple items in the store, and movement of shopping carts. Sensors can also detect entry to and exit from different areas of the store. Gesture processing can be integrated with various sensors in the store to detect interactions, or the lack thereof, with particular products. Certain gestures may indicate the user's liking or disliking of a product.

“For example, the gesture could be a hand waving toward a product, an eye gaze at the product combined with a gesture, or pointing at the product. Shopping carts may also be integrated with the network for communication. The shopping carts may include electronic shopping carts, which can be linked to the user's account, and in another embodiment may also include a physical shopping cart. Some embodiments allow for interactions between the electronic shopping cart and the physical shopping cart, for example to verify that items have been placed in or taken out of the physical cart. IoT devices can also be integrated into the network. IoT devices can take many forms and can be used to gather information inside or outside the store, or even on products. WCCs can also function as IoT devices and may be embedded on various surfaces, objects, or other items that the user interacts with in the store.
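The verification step between the electronic and physical carts described above could be sketched as a simple reconciliation of two item lists: what the system believes the shopper took versus what sensors physically detect in the cart. The function and item names below are hypothetical.

```python
# Sketch: reconciling the electronic shopping cart against items
# physically sensed in the cart, to flag discrepancies for correction.
from collections import Counter

def reconcile_carts(electronic_cart, physical_detections):
    """electronic_cart: item ids the system believes the shopper took.
    physical_detections: item ids sensed inside the physical cart.
    Returns (missing_from_cart, unexpected_in_cart)."""
    e, p = Counter(electronic_cart), Counter(physical_detections)
    missing_from_cart = list((e - p).elements())   # tracked but not sensed
    unexpected_in_cart = list((p - e).elements())  # sensed but not tracked
    return missing_from_cart, unexpected_in_cart
```

Either discrepancy list could then trigger a re-check by other sensors before the cashier-less charge is finalized.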

“Machine learning can also be integrated. It can be used to communicate with the different sensors, to classify data, and to generate models, for example to determine when specific sensor data should not be used and to promote certain actions, motions, movements, detections, or tracking. This document contains many examples of machine learning. It should be understood that machine learning can be used to improve the detection of products on store shelves and to predict when the user will want to buy a new item. A home refrigerator item process is also shown, which may include data from a refrigerator at the user's home. Home refrigerators may have sensors or image capture devices that can identify which items are missing or need to be bought. This information can be used to populate a user's shopping list or to suggest shopping lists, and can also be passed to the user while they are in the store. Shelf screens can be integrated into different shelves in the store.
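To make the classification idea above concrete, here is a minimal stand-in for such a model: a nearest-centroid classifier trained on labeled feature vectors. The features shown (e.g., weight delta and a hand-motion score) and the labels are purely illustrative assumptions; a production system would use far richer features and models.

```python
# Sketch: a tiny nearest-centroid classifier standing in for the
# machine learning model that labels sensor events (e.g., take/return).

def train_centroids(samples):
    """samples: list of (feature_vector, label).
    Returns a dict mapping each label to its mean feature vector."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(lab):
        return sum((a - b) ** 2 for a, b in zip(centroids[lab], features))
    return min(centroids, key=dist)
```

A nearest-centroid rule is about the simplest trainable classifier there is; it serves here only to show the train-then-classify shape the patent describes.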

Displays can be customized for each user standing in front of the shelves. This information can be personalized with data from the cloud, from other digital accounts of the user, or from accounts linked through the account used for shopping. Proximity sensing devices can be used and integrated with the other sensors explained above. Audio steering is a method of directing sound to specific users. Biometric processing can be used to identify specific users entering the store, or to determine where they are located in the store, without the need for a code or phone number. Biometric processing can also identify the user by asking the user to place a finger, an eye, or another biometric feature in front of a sensor.

“Some embodiments of biometric sensing systems may be passive, meaning that the user can be identified without physically going to a kiosk or sensor. Payment systems can be connected to the network to allow payments to be made, using services such as PayPal, Visa, banking applications, or other electronic payment systems. The network can be linked to user accounts and user profiles, which grant the user access. A user account can be linked to the application used by the user while shopping in the store. User profile information can include historical data, references, and explicit preferences, as well as shopping lists, social friends, activities, and any other information the user consents to including in the profile.

The user's calendar can be accessed to view information such as activities, which may contain information about goods or products that the user might need. This information can be filled in or provided to the user while they are in the store. It is important to understand that a store need not be a grocery store; it can provide any type of goods or services. The network can also be used to pair smart devices with each other, such that they can detect when the user is present in the store. Smart devices include smart phones, smart watches, and smart glasses, as well as wearable devices such as IoT and WCC devices. In some stores, ultrasonic sensors can be embedded to detect when items or users are moved, patterns of motion, and other information.
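The calendar-to-shopping-list idea above could be sketched as a keyword lookup from calendar activities to suggested goods. The mapping, event strings, and function name below are entirely illustrative assumptions.

```python
# Sketch: suggesting shopping-list items from calendar activities.
# The activity-to-goods mapping is a toy example.

ACTIVITY_GOODS = {
    "birthday": ["cake", "candles"],
    "camping": ["batteries", "water"],
    "bbq": ["charcoal", "buns"],
}

def suggest_items(calendar_events, existing_list):
    """Scan event titles for known activity keywords and suggest goods
    not already on the user's shopping list."""
    suggestions = []
    for event in calendar_events:
        for keyword, goods in ACTIVITY_GOODS.items():
            if keyword in event.lower():
                suggestions += [g for g in goods
                                if g not in existing_list and g not in suggestions]
    return suggestions
```

A real system would presumably use a learned model rather than a fixed keyword table, but the data flow (calendar in, suggested items out) is the same.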

“FIG. 1C is an example of machine learning. A machine learning model can be trained with data from many sources, including shopping accounts or shoppers. Data may be sourced from the real world, or may be manipulated to represent the desired features for a specific label classification. FIG. 1B shows how the model can be trained or retrained using feedback from actual shopping activity. Here, corrections and confirmation labels can be fed back to the model. These can be obtained from users through their implied or explicit acceptance (confirmation) of take events or other classifications, or through flagging of incorrect system classifications (e.g., a correction that an incorrect item was found in the basket).”

“In other words, when a human or another system finds a state other than the one determined by the machine learning algorithm, that source can override the output classification provided by the machine learning model, and the data label for the feature set is corrected. In one configuration, the model is retrained with the correct classification for the feature set that produced the incorrect output classification. This helps the model learn, over time, how to classify its tracking elements to the correct state. This document includes information about how to classify take and return events for particular items from shelves. It also provides guidance and feedback on how to link the classification change to events, rewards, incentives, and sales.
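The correction feedback loop above amounts to folding confirmed and corrected samples back into the training set before retraining. A minimal sketch, with illustrative record shapes:

```python
# Sketch of the feedback loop: confirmed predictions keep their label,
# flagged predictions are re-labeled, and both are appended to the
# training data used for the next retraining pass.

def apply_feedback(training_set, feedback):
    """training_set: list of {'features': ..., 'label': ...} records.
    feedback: list of {'features', 'predicted', 'corrected'} records,
    where 'corrected' is None when the prediction was confirmed."""
    updated = list(training_set)
    for fb in feedback:
        label = fb["corrected"] if fb["corrected"] is not None else fb["predicted"]
        updated.append({"features": fb["features"], "label": label})
    return updated
```

The retraining step itself (re-fitting the model on the updated set) is omitted; the point is that the incorrectly classified feature set re-enters training under its corrected label.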

“FIG. 1D shows an example flowchart that can be used to track the purchase of an item selected by a user in a cashier-less transaction. This is an example process; different elements and steps can be added or removed depending on the particular configuration. Operation 10 identifies the presence of multiple items on one or more shelves. As explained above, sensors can be used to identify the presence of items.

The sensors can be either image or non-image sensors, and can vary depending on the item configuration. Operation 12 identifies the presence of a customer in the store. A user's presence can be interpreted as the user entering the store and being identified. The user can check in via a smart device, which can identify the user. In another embodiment, the user need not check in at all and can simply check out; if the transaction is cashier-less, checking out can be as simple as walking out of the store. Additionally, multiple users can be identified within the store at once.

“In operation 14, an interaction between the user and an item is identified. Multiple sensors can be used to identify the interaction, for example by tracking the user's eye gaze or by tracking the item itself. In operation 18, the item is added to the user's shopping list, which in turn adds the item to the user's shopping cart. In one embodiment, the item remains in the shopping cart until the user decides to leave the store, at which point the item is verified and added to the payment or payment confirmation.

“For example, in operation 20 it is determined whether the user has left the store, or an area of the store in which the user can purchase the item. This can be done by following the user's movements throughout the store, or by tracking the user from entry through exit. Another way to track the user leaving a store or area is to use signals from, or tracking of, the user's device, such as a smart phone, smartwatch, wearable device, or IoT device. In some embodiments, RFID tags can be attached to items to track when they are moved around the store, taken out of the store, or moved to other areas.

These determinations can then be analyzed using confidence scores, machine learning, artificial intelligence, and neural networks. In operation 22, once operation 20 determines that the item has been removed from the area or store, the user is charged for the item in their shopping cart. The user is charged in cashier-less form, meaning that they do not need to go to a cashier or use an automated cashier system.
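The FIG. 1D flow (operations 10-22) can be sketched as a single pass over a time-ordered event stream: identify the shopper, record item interactions, and charge the cart on exit. The event kinds and payload shapes below are assumptions for illustration only.

```python
# Sketch of the FIG. 1D flow: enter -> take/return interactions -> exit,
# with the charge assembled when the exit event is observed.

def process_session(events):
    """events: list of (kind, payload) tuples in time order, where kind
    is 'enter', 'take', 'return', or 'exit'. Returns the charged cart."""
    cart, user, charged = [], None, None
    for kind, payload in events:
        if kind == "enter":        # operation 12: shopper identified
            user = payload
        elif kind == "take":       # operations 14/18: item added to cart
            cart.append(payload)
        elif kind == "return":     # item put back on the shelf
            if payload in cart:
                cart.remove(payload)
        elif kind == "exit":       # operations 20/22: charge on exit
            charged = {"user": user, "items": list(cart)}
    return charged
```

In a real deployment the take/return events would themselves be classified outputs of the sensor and machine learning pipeline described earlier, with confidence scoring before any charge is issued.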

“FIG. 1E shows another example flowchart. In operation 40, a server receives sensor data about items on the store shelves. The server could be located near the store or far away. The server may be a process executed by one application or by multiple applications. The cloud processing system might use a data center with one or more computers to process transactions related to tracking items on store shelves.

“In operation 42, the server can receive interaction information for an item on a store shelf from the user. An interaction database is used to identify the type of item, which could be a cereal of a specific brand, a package of a particular type, or any other good, product, or item available for sale in a store. As mentioned, the item need not be a grocery item; any retail item can be tracked using one or more sensors. The interaction data can identify the item type. Certain items can be associated with specific locations in the store or on the shelf; milk, for example, may be kept on refrigerated shelves.

Dry goods can be stored on other types of shelves. Produce can be stored in different areas or shelves. Pants and shirts can be stored in different kinds of shelves or compartments in a store, and electronic devices can be stored or organized in different locations within the store. Different types of sensors can be used to track interaction data and identify the items associated with them. In one embodiment, the user can add an item to an electronic shopping cart, which is associated with a user account that allows the user to process a cashier-less purchase transaction. When the user interacts with the item, it may be removed from the shelf.

The item can be carried in a visible shopping cart, a bag, or any other means of removal from the shelf. An electronic shopping cart is an electronic list that can be tallied and calculated to determine the items selected and taken by the user, and the associated prices can also be identified. The electronic shopping cart can be associated with an application that the user executes on a mobile device; alternatively, it can simply be tracked and processed by a server application for the user's account. In operation 44, the server receives data indicating that the user has left an area of the store and that the user has purchased the item. This indicates that the user has left the area in which he or she could still return the item and has simply left the retail location.

The user can be tracked in a variety of ways to determine whether the user has left the store or gone elsewhere. Tracking may include following the user as they move around the store or interact with items. Tags or codes on the items themselves can also be tracked to determine when the user leaves the store, which can help identify whether the user is leaving or returning. After confirming that the user has exited the store or left the area, operation 46 processes an electronic charge with the payment service. An electronic charge may include sending or signaling information to the payment service in order to debit the user's billing account.
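Operations 44-46 can be sketched as a gate on exit confirmation: only once the shopper's departure is confirmed is the electronic cart totaled and a charge request built for the payment service. The price table and record shapes below are illustrative assumptions.

```python
# Sketch of operations 44-46: total the electronic cart and build the
# debit request for the payment service once the exit is confirmed.

PRICES = {"milk": 3.50, "eggs": 2.25}  # illustrative price lookup

def build_charge(user_id, cart, exited):
    """Return a charge record for the payment service, or None if the
    shopper's exit has not yet been confirmed."""
    if not exited:
        return None  # exit not confirmed: do not debit the account
    total = round(sum(PRICES[item] for item in cart), 2)
    return {"account": user_id, "amount": total, "items": list(cart)}
```

The returned record stands in for the "sending or signaling information to the payment service" step; the actual payment API integration is outside this sketch.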

“In FIG. 1E, it is important to understand that processing can be performed by any number of computers. Other embodiments allow processing to occur on, or be handled by, WCC devices, which can be local to the store, remote from it, or personal to the user.

“In various embodiments, a smart retail store can classify shopping activity for one or several shoppers into one or multiple categories. If a user is determined to fit into one of these categories, or to change category, an action can be taken to respond dynamically to that state. The store can track the items taken into the shopper's possession, detect returns to the shelves, and identify items misplaced on the wrong shelves. One embodiment uses machine learning algorithms to predict a customer's shopping list. One configuration allows a retail store to use a combination of input data and machine learning models to classify shopping behavior, which allows a family or group to take and return items against a single shopping account. The present invention also allows for the identification of each shopper who takes or returns items from a shelf, even when multiple shoppers are nearby and interfacing with the shelf simultaneously.
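The category-based response above could be sketched as a simple rule over counts of the tracked events discussed throughout this section. The category names and thresholds are hypothetical; a deployed system would learn these boundaries rather than hard-code them.

```python
# Sketch: bucketing a shopper's session into an activity category from
# counts of take, return, and misplaced-item events.

def classify_shopper(takes, returns, misplaced):
    """Return an illustrative activity category for the session."""
    if misplaced > 0:
        return "needs-attention"  # an item was left on the wrong shelf
    if takes == 0:
        return "browsing"         # no items taken yet
    if returns >= takes:
        return "undecided"        # putting back as much as taking
    return "buying"
```

A category change (say, from "browsing" to "buying") could then trigger the dynamic responses the paragraph describes, such as personalized offers on a shelf screen.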
