Metaverse – Patrick Soon-Shiong, Nant Holdings IP LLC

Abstract for “Interference-based augmented reality hosting platform”

These platforms host interference-based augmented realities. A hosting platform can include networking nodes that analyze a digital representation of a scene to determine interference among elements of the scene. The hosting platform uses that interference to adjust the presence and quality of augmented reality objects within an augmented reality experience. Elements of a scene may constructively interfere, enhancing the presence of augmented reality objects, or destructively interfere, suppressing their presence.

Background for “Interference-based augmented reality hosting platform”

Augmented reality is a combination of real-world and virtual objects. Augmented reality designers set the rules governing how individuals interact with and experience augmented realities. Individuals can access augmented reality content through their cell phones, mobile computing platforms, or other AR-capable devices. As augmented reality continues to permeate everyday life, the amount of augmented reality content grows at an ever-increasing pace, and the sheer volume of available content threatens to overwhelm users.

BUMP.com is one such augmented reality service. BUMP.com gives users access to annotations tied to individual license plates, as described in the Wall Street Journal web article titled “License to Pray?”, published Mar. 10, 2011. Users upload images of license plates to BUMP.com; the service then attempts to identify the license plate and returns annotations left by others. To interact with the content, users must have a dedicated application, as BUMP.com only supports access to its content through that application.

Layar of Amsterdam, The Netherlands, makes further progress toward presenting augmented reality. Layar offers access to multiple content layers, where each layer is separate and distinct from the others. Users can choose which layer to view, including layers published by third-party developers. Although Layar allows users to select content from many third-party developers, the Layar application still requires the user to choose a layer; the user is presented with only single-purpose content and does not experience the augmented reality in the same seamless way they experience the real world. In a future world of ever-present augmented reality, users should be able to seamlessly access and interact with augmented reality content.

U.S. Pat. No. 7,899,915 to Reisman, titled “Methods and Apparatus For Browsing Using Multiple Coordinated Device Sets”, filed May 8, 2003, makes some progress toward presenting augmented reality content in context. Reisman recognizes that users may work with multiple devices, and his method allows users to switch among display and presentation devices while interacting with hypermedia. However, Reisman's approach is focused purely on the user; it fails to recognize that the underlying augmented reality infrastructure, and interference among elements in a scene, can also shape the user's experience.

U.S. Pat. No. 7,904,577 to Taylor, titled “Data Transmission Protocol and Visual Display For a Networked Computer System”, filed Mar. 31, 2008, provides support for virtual reality gaming via a protocol that supports multiple players. U.S. Pat. No. 7,908,462 to Sung, titled “Virtual World Simulators and Methods Utilizing Parallel Processors and Computer Program Products Thereof”, filed in 2010, envisions hosting a virtual world on parallel processing arrays of field-programmable gate arrays or graphics processors. These systems, however, merely provide infrastructure with which a user must interact to experience an augmented reality.

These and all other extrinsic materials discussed herein are incorporated by reference in their entirety. Where a definition or use of a term in an incorporated reference is inconsistent with the definition of that term provided herein, the definition provided herein applies and the definition in the reference does not.

Unless the context indicates otherwise, all ranges set forth herein should be interpreted as being inclusive of their endpoints. Open-ended ranges should be interpreted to include practical commercial values. Similarly, all lists of values should be considered to include intermediate values unless the context indicates otherwise.

Oddly, existing approaches to providing augmented reality content treat augmented realities as isolated silos. Each company builds its own hosting infrastructure to offer augmented reality services to its users. These approaches do not allow users to move from one augmented world to the next as seamlessly as they would move from room to room within a building, nor do existing infrastructures treat augmented reality objects as distinct, manageable objects. An augmented reality infrastructure could instead be pervasive, much as electricity or, more aptly, internet connectivity is pervasive throughout the developed world. Augmented reality would benefit from similar treatment.

In a world of ubiquitous augmented realities and associated augmented reality objects, individuals should be able to interact with those realities in a seamless fashion. Individuals still need to be presented with relevant augmented reality content, especially when features of the environment, virtual or real, can interfere with one another. The known references that present information based on a context fail to address interference among augmented realities or among real and virtual elements. At best, the known art simply forces individuals to choose which elements they wish to experience, as a way of avoiding interference with other elements of the augmented reality. The known art fails to appreciate that elements can interfere with each other based on their properties and attributes. Interference can be more than a filtering mechanism: interference is the ambient interaction among present or relevant elements, whether constructive or destructive, that gives rise to an augmented reality experience.

One or more augmented realities can be hosted by a common hosting infrastructure, such as a networking infrastructure, or augmented reality objects can exist independently of the hosting platform. The Applicant has appreciated, for example, that networking nodes within a network fabric can provide augmented reality objects or other virtual constructs to edge AR-capable devices (e.g., cell phones, smartphones, kiosks, tablet computers, laptops, vehicles, etc.). The fabric can use exchanged data to determine which augmented reality objects, or which augmented reality, is most relevant to a device. That determination is derived from real-world elements in the environment and from interactions among edge devices or other devices. An augmented reality context can thus be used to determine how elements within a scene (or a location) interact with one another to create relevant augmented reality experiences.

Thus, there remains a need for interference-based augmented reality hosting platforms.

The inventive subject matter includes apparatuses, systems, and methods by which an AR hosting platform can create an augmented reality experience based on interference among elements of a digital representation of a scene. One aspect of the inventive subject matter is an AR hosting platform comprising a mobile device interface through which the platform obtains a digital representation of a scene local to a mobile device (e.g., a cell phone, vehicle, or tablet computer). The digital representation may include data representing one or several elements of the scene; the data may include sensor data from the mobile device, from other devices proximate to the scene, or from devices otherwise capable of recording data related to the scene. The platform may also include an object recognition engine in communication with the mobile device interface, which can analyze the digital representation and recognize elements of the scene as target objects. Based on the digital representation, the object recognition engine can also determine a context of the scene pertaining to the target objects. The engine can further identify relevant AR objects from a set of available AR objects based on a combination of elements (real-world elements, virtual elements, etc.). In preferred embodiments, the engine identifies relevant AR objects using a derived interference that establishes criteria for how an AR experience is presented to the user via their mobile device. The object recognition engine can then configure the remote device to allow interaction with a member object of the relevant AR object set according to the derived interference. In especially preferred embodiments, the interaction involves participating in a transaction with a commerce engine, through which an individual could purchase the member object or a real-world object participating in the augmented reality.

Various objects, features, and aspects of the inventive subject matter will become more apparent from the following detailed description, together with the accompanying drawing figures, in which like numerals represent like components.

It should be noted that while the following description is drawn to a computer/server-based augmented reality platform, various alternative configurations are also suitable and may employ computing devices such as servers, interfaces, systems, databases, agents, peers, engines, controllers, clients, platforms, or other types of computing devices operating individually or collectively. The computing devices comprise a processor configured to execute software instructions stored on a tangible, non-transitory computer-readable storage medium (e.g., hard drive, solid state drive, RAM, flash, etc.). The software instructions preferably configure the computing device to provide the roles, responsibilities, and other functionality discussed below. In especially preferred embodiments, the various servers, systems, and databases exchange data using standardized protocols or algorithms, possibly based on HTTPS, AES, public-private key exchanges, web service APIs, financial transaction protocols, or other electronic information exchange methods. Data exchanges are preferably conducted over a packet-switched network, such as the Internet, a LAN, a WAN, a VPN, or another type of packet-switched network.

The disclosed techniques provide many technical benefits, including provision of an AR hosting infrastructure that allows remote devices to interact with AR objects as a function of the context in which the AR-capable device operates.

The following discussion provides many example embodiments of the inventive subject matter. Although each embodiment represents a single combination of inventive elements, the inventive subject matter is considered to include all possible combinations of the disclosed inventive elements. Thus, if one embodiment comprises inventive elements A, B, and C, and a second embodiment comprises inventive elements B and D, then the inventive subject matter is also considered to include the remaining combinations of A, B, C, or D, even if not explicitly disclosed.

As used in this document, and unless the context dictates otherwise, the term “coupled to” is intended to include both direct coupling, in which two elements that are coupled to each other contact each other, and indirect coupling, in which at least one additional element is located between the two elements. Therefore, the terms “coupled to” and “coupled with” are used synonymously.

“Overview”

“AR object interference” can be considered to mirror, or otherwise simulate, interference among electromagnetic waves, such as light. Interference among waves occurs when two or more waves interact at a location or time in a manner that enhances their presence (i.e., constructive interference) or suppresses their presence (i.e., destructive interference) at that location. Interference among waves arises from interrelated properties of the waves, such as amplitude and phase. This interference metaphor can be applied to augmented realities, where elements participating in an AR experience can have properties that cause them to interfere with one another to enhance or suppress the presence of AR objects.
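
For reference, the wave version of the metaphor can be stated concretely with the standard superposition relation for two waves of equal frequency (a textbook physics result, not a formula taken from this disclosure):

```latex
% Two waves with amplitudes A_1, A_2 and phase difference \Delta\phi superpose to
% a combined amplitude A:
A^2 = A_1^2 + A_2^2 + 2 A_1 A_2 \cos(\Delta\phi)
% \Delta\phi = 0    -> constructive interference: presence enhanced (A = A_1 + A_2)
% \Delta\phi = \pi  -> destructive interference: presence suppressed (A = |A_1 - A_2|)
```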

The following discussion presents the inventive subject matter in the context of networking nodes and a network fabric operating as an AR hosting platform. However, the interference-based techniques can also apply to more traditional server implementations, where servers operate on dedicated hardware or within a network cloud.

In FIG. 1, AR ecosystem 100 comprises networking fabric 115 made up of several interconnected networking nodes 120 that form a communication fabric, allowing edge devices 180 to exchange information across the fabric. Fabric 115 may also include AR object repositories 140 that store AR objects 142, preferably as network-addressable objects. AR-capable devices 110 interact with fabric 115 by exchanging device data, which may include data representative of a local environment or scene. The device data can include a digital representation of a real-world scene comprising sensor data, raw or pre-processed, acquired locally by sensors 130 or acquired from other sensor-enabled devices. For example, the digital representation can include sensor data obtained by a mobile device (e.g., cell phone, tablet computer, etc.), or even by multiple mobile devices. The infrastructure can include an AR device interface (e.g., port, API, networking node) through which fabric 115 exchanges device information with AR-capable devices 110. Based on exchanged device data or other environment data, networking nodes 120 can derive an address for an AR object 142 and present one or several AR objects 142 to AR-capable device 110. AR-capable devices 110 are preferably able to interact with AR objects 142, including conducting a transaction with commerce engine 190 (e.g., banking system, credit card processing, frequent flyer miles exchange, reward program, etc.).

AR-capable device 110 typically represents one or more types of edge devices 180 relative to networking fabric 115. Example AR-capable devices 110 include mobile devices, cell phones, gaming consoles, kiosks, vehicles (e.g., car, plane, bus, etc.), set top boxes, portable computers, and other computing devices capable of presenting augmented content to a user. Augmented content is preferably presented according to a user's sense modalities (e.g., visual, audio, tactile, taste, olfactory). To compensate for a disability, the augmented content may be converted into the sense modalities available to the user; visual AR objects 142, for example, can be presented to visually impaired people via a tactile presentation interface.

One or more sensors 130 can be used to acquire environment data within a reasonable proximity of the user or of AR-capable device 110. Sensors 130 may include optical sensors, microphones, magnetometers, GPS receivers, thermometers, weather sensors, biosensors, and other types of sensors. As shown, sensors 130 can be integrated with AR-capable device 110, or they could be located far from the device; a satellite, for example, could include sensors 130 that capture data relevant to AR-capable device 110.

In certain embodiments, sensors 130 collect data local to the user within a personal area network (PAN), where AR-capable device 110 acts as a sensor hub that consolidates the sensor data and shares it with networking fabric 115. Sensors 130 could be worn as part of the user's clothing, in their shoes, or even on their head to collect brain signals. Sensors 130 can communicate with other elements of the PAN via wired or wireless connections (e.g., Bluetooth, WiGIG, Wi-Fi, Zigbee, etc.). Examples of sensor data include medical data, position data, orientation data, biometric data, image data, audio data, video data, temperature data, proximity data, acceleration data, and other data that can be captured by a sensor. The digital representation of the scene may also include data from biosensors or other health-related sensors. Regardless of which data is collected, the data can be used to form a digital representation of the scene, which can include raw data, processed data, and metadata.
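
As a loose illustration of the hub role described above, the following Python sketch (with hypothetical field and sensor names; the disclosure does not define a concrete data format) shows an AR-capable device consolidating readings from several PAN sensors into a single digital-representation structure before sharing it with the fabric:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List
import time

@dataclass
class SensorReading:
    sensor_id: str      # e.g., "shoe-accelerometer-01" (hypothetical identifier)
    kind: str           # "image", "position", "biometric", ...
    value: Any          # raw or pre-processed payload
    timestamp: float

@dataclass
class DigitalRepresentation:
    device_id: str
    readings: List[SensorReading] = field(default_factory=list)
    metadata: Dict[str, Any] = field(default_factory=dict)

class SensorHub:
    """AR-capable device acting as a PAN sensor hub."""
    def __init__(self, device_id: str):
        self.representation = DigitalRepresentation(device_id=device_id)

    def collect(self, sensor_id: str, kind: str, value: Any) -> None:
        # Append one reading from a PAN sensor to the consolidated representation.
        self.representation.readings.append(
            SensorReading(sensor_id, kind, value, time.time()))

    def share_with_fabric(self) -> DigitalRepresentation:
        # In a real system this would be serialized and transmitted to the fabric;
        # here we simply return the consolidated structure.
        return self.representation

hub = SensorHub("cellphone-110")
hub.collect("camera-01", "image", b"...jpeg bytes...")
hub.collect("gps-01", "position", (34.05, -118.25))
scene = hub.share_with_fabric()
```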

Networking nodes 120 preferably obtain environment data relevant to a real-world scene of AR-capable device 110. The environment data can span a wide range of data reflecting the real-world environment, including sensor data forming a digital representation of the scene in which AR-capable device 110 acquired the data. The environment data may also include data obtained from sources other than AR-capable device 110; for example, it could include data from a weather station, a surveillance camera, another mobile phone, a web server, a radio station, a satellite, or any other source able to provide data about the environment.

Digital representations of scenes can include environment data extending beyond sensor data. The environment data can also include AR data reflecting a currently presented augmented reality, such as information about the locations of AR-capable devices 110 and virtual objects. The environment data could further include information about the operation of AR-capable device 110 itself, including network metrics, user identity or demographics, installed firmware or software, and other types of environment information. Thus, the digital representation of the scene can be considered to cover many aspects of the scene, including the participants, the physical environment, and aspects beyond what the human eye can see (e.g., networking metrics).

Networking fabric 115 represents a cloud of interconnected networking nodes 120, which could be the Internet or a cloud computing infrastructure such as Amazon EC2, Google, Rackspace, or the like. What matters is that fabric 115 is a communication infrastructure allowing edge devices 180 to exchange information in a general-purpose fashion, while also providing a platform through which AR-capable devices 110 can be presented with one or more AR objects. The networking nodes 120 composing fabric 115 preferably comprise computing devices able to direct data traffic from one port to another. Examples of networking nodes 120 include routers, hubs, switches, gateways, firewalls, access points, and other devices able to forward or route traffic. The fabric can comprise a homogeneous mix of node types or a heterogeneous mix of different node types. In some instances the fabric may extend into AR-capable device 110 itself, for example when AR-capable device 110 operates as a sensor hub or otherwise provides node functionality. Fabric 115 may also include one or more networks, such as the Internet, a LAN, a WAN, a VPN, a WLAN, peer-to-peer networks, cloud-based systems, ad hoc networks, mesh networks, or other types of networks.

Fabric 115 preferably includes one or more AR object repositories 140 storing AR objects 142. AR objects 142 are preferably stored as distinct manageable objects that can be addressed by networking nodes 120, edge devices 180, commerce engine 190, AR-capable devices 110, or even other AR objects. AR objects 142 preferably have at least one or more object attributes, which are metadata representing information about the AR object 142. The object attributes may include information about object properties that could interfere with other properties within an AR experience context.

Object attributes can be bound to AR objects 142 as desired. In some embodiments, the object attributes conform to one or several normalized namespaces, allowing the various networking nodes 120, agents, AR-capable devices 110, or other components to compare one AR object with other objects in the system (e.g., contexts, AR objects, elements, etc.). A normalized namespace can be a global namespace applying to all elements, including AR objects 142. Object attributes can also be defined with respect to specific contexts; a gaming context, for example, might have its own namespace distinct from a shopping context or a traveling context. Each type of context may have its own namespace or sub-namespaces, possibly organized by AR content publisher. A first publisher might assign attributes to its AR objects 142 according to its own proprietary namespace, while a second publisher might use a common, normalized gaming-context namespace.
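
A minimal sketch of what such namespaced attributes might look like in practice (all attribute names here are invented for illustration; the disclosure does not define a concrete schema):

```python
# Hypothetical normalized attribute names, prefixed by namespace.
ar_object_attributes = {
    "global:object_type": "coupon",
    "global:location": (34.05, -118.25),
    "shopping:discount_percent": 15,
    "gaming:team": "A",
    "publisherX:internal_sku": "SKU-1234",   # one publisher's proprietary namespace
}

def in_namespace(attrs: dict, namespace: str) -> dict:
    """Select attributes belonging to a given context namespace (plus globals)."""
    prefix = namespace + ":"
    return {k: v for k, v in attrs.items()
            if k.startswith(prefix) or k.startswith("global:")}

shopping_view = in_namespace(ar_object_attributes, "shopping")
# -> only the global and shopping-context attributes are compared in a shopping context
```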

Contexts come in many forms and can be as specific or as generalized as desired. AR ecosystem 100 can treat contexts as manageable objects; a context object can be copied or moved from one networking node 120 to another so that each node 120 has access to the most relevant contexts. Contexts can be given names, identifiers, or other context attributes representing metadata about the context or its use. Examples of context attributes include context name, identifiers (e.g., URLs), context owner, context publisher, context author, context revision, and other information. A context object can also include an attribute signature that quantifies the context's relevance to a scene or to elements within the scene; the signature can be represented by criteria or rules that operate on attributes within the normalized attribute namespaces. Contexts may further be classified into one or more types, such as a gaming context, a shopping context, a traveling context, a working context (e.g., job, occupation, activity), an entertainment context, or other categories.

AR objects 142 can remain resident in their repositories 140 without being transferred in response to every query. Repositories 140 can instead distribute the attributes of AR objects 142 among networking nodes 120. In such embodiments, networking nodes 120 can compare attributes derived from the digital representation of the scene with the known AR object attributes to determine whether an AR object 142 is relevant to a particular context. When AR object 142 is to be located, its address can either be obtained directly or derived from the digital representation of the scene. Different methods of addressing AR objects are discussed below.

AR repositories 140 are shown as separate databases within networking fabric 115, and it is indeed possible to store AR objects 142 in distinct databases; a publisher or vendor of AR objects 142 may wish to retain control over its objects and, for a fee, grant access to its repository. It is also possible to mix AR objects within a general-purpose repository, which allows AR objects 142 to migrate from one repository 140 or networking node 120 to another based on aggregated contexts derived from multiple scenes local to AR-capable devices 110. One or more AR repositories 140 can further be combined into a distributed repository, where AR objects 142 are spread across multiple components of the system. A single AR repository 140 could, for example, span the memories of multiple networking nodes 120, with each portion of the repository addressed within a common address space regardless of where it resides.

When an AR object 142 is identified, a networking node 120 can retrieve the object from AR repository 140 and make it available to AR-capable device 110. AR-capable devices 110 can be configured to interact with AR object 142 according to their design, the context, or the derived interference. In the example shown, one AR object (“+”) is presented on two AR-capable devices 110, illustrating that AR objects can be shared and presented concurrently when the devices have similar contexts; perhaps the AR object is shared by AR players participating in the same game or AR experience. Another AR object is displayed on only a single AR-capable device 110 to illustrate that it could be tied to its own context, which could be based on user identity, preferences, authorization, authentication, interference among other elements of AR ecosystem 100, or other attributes.

Networking nodes 120 enable AR-capable device 110 to interact with AR object 142. The interaction may include a presentation of AR object 142 via a display, speaker, tactile interface, or other interface, depending on the nature of AR object 142. AR objects 142 may also contain executable instructions, which can be executed by AR-capable device 110 or by networking nodes 120, and which can represent functionality of AR object 142. For example, a person could be near a vending machine; a corresponding AR object 142 is presented to the person as a product that can be purchased, and the networking node houses the functionality for conducting transactions associated with the vending machine. Alternatively, AR object 142 itself could carry the transaction functionality, which can be associated with AR-capable device 110, remote servers or services, or another suitably configured device. Once the person has left the area, the networking node can remove the code or AR object 142 based on new contexts or changes to existing contexts.

AR objects 142 are shown in a visual format, but AR objects can also take audio formats or other formats compatible with the human senses. AR object 142 may even represent objects that are not directly accessible to the human senses, provided the object's features are converted into something the human senses can perceive. For example, AR object 142 could instruct AR-capable device 110 to display an otherwise non-visible temperature gradient overlaid on a real-world image of a landscape, where the temperature contours are derived from an array of sensors 130 located within AR-capable device 110 or near the landscape.

“Hosting Platform”

In FIG. 2, an example hosting platform 200 is shown. Hosting platform 200 is presented as a networking switch, but other types of infrastructure could serve as hosting platform 200. Servers, routers, hubs, name servers, proxy servers, access points, hot spots, and other devices capable of operating as a computing device in a network environment can all be adapted to the inventive subject matter. In preferred embodiments, hosting platform 200 is a networking device able to receive packets and forward them to their destination, regardless of whether the packets are associated with an augmented reality. The inventive subject matter is also applicable to more traditional computing devices, such as servers, clients, peers, handheld gaming devices, and other types of computing devices.

Hosting platform 200 includes device interface 215, through which hosting platform 200 (or the fabric) interfaces with AR-capable devices. In embodiments where hosting platform 200 is a networking switch, device interface 215 may include one or more ports on the switch. These ports may include wired ports (e.g., Ethernet, optical fiber, serial, USB, Firewire, HiGig, SerDes, PCI, XAUI, etc.) or other ports requiring a physical connection; a port can be considered wired even when it does not couple directly with an AR-capable device. The ports may also include one or more wireless ports (e.g., WUSB, 802.11, WiGIG, WiMAX, GSM, CDMA, LTE, UWB, near field radio, laser, Zigbee, etc.). Device interface 215 could further include one or more logical ports, for example AR-related URLs or APIs hosted within the networking node or in a cloud, through which hosting platform 200 allows AR-capable devices to access AR features of the platform.

Memory 230 can store one or more contexts 232 representing known scenarios relevant to AR objects 242. Contexts 232 can also be treated as manageable objects; they carry context attribute signatures describing the criteria that must be satisfied for a particular context 232 to apply. Hosting platform 200 can analyze the digital representation of the scene and generate attributes corresponding to recognized elements within the scene. Suitable context-handling techniques are described in U.S. Patent Application Publication 2010/0257252 to Dougherty, titled “Augmented Reality Cloud Computing”, filed Apr. 1, 2009, and in the other context-based references previously cited.

Hosting platform 200 can also include object recognition engine 260, which can function as an Object Recognition-by-Context Service (ORCS) capable of recognizing real-world elements of a scene as target objects based on the digital representation of the scene. An AR-capable device or other sensing devices can provide the digital data forming a digital representation of a scene comprising one or more real-world elements. The digital data representing the scene can include image data, medical data, position data, orientation data, haptic data, biometric data, and other types of data.

Object recognition engine 260 can use one or more algorithms to recognize elements in a scene. When operating as an ORCS, engine 260 recognizes elements as target objects, where the elements may be real-world or virtual. The elements may have attributes derived through analysis of the digital representation, or attributes obtained from known target objects. In some instances, object attributes 244 include target attributes associated with known target objects (e.g., buildings, plants, people, etc.). Example attributes include features of objects in the scene, such as color, shape, size, speech modulations, words, iris features, audio frequencies, image resolution, and other types of features. Acceptable recognition algorithms include SIFT, SURF, and ViPR. Acceptable techniques for processing data to identify target objects are described in U.S. Pat. Nos. 7,016,532; 7,477,780; 7,680,324; 7,565,008; and 7,564,469.

Object recognition engine 260 can use environment attributes (e.g., known target object attributes, derived attributes, etc.) to recognize one or more objects in a real-world scene. Once a target object has been recognized, the object information associated with it can be used to determine whether any contexts 232 pertain to the target. To make this determination, the context attribute signature can be compared against the attributes of the target object and other environment attributes. Comparing across the different types of objects (e.g., AR objects, contexts, elements, target objects, interferences, etc.) is straightforward when the attribute namespaces are aligned, in which case the system can perform a simple lookup. The comparison can also verify that the context attribute signature has been satisfied, where satisfaction is based on the values of the attributes relative to the requirements or optional conditions of the signature. For example, a gaming context may require that at least one player be recognized within the scene before the context is considered to pertain to the scene.
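
The comparison step described above can be pictured as a rule check over normalized attributes. The sketch below is illustrative only (the disclosure does not specify a rule language); it treats a context attribute signature as a set of named conditions that must all be met by the scene's attributes:

```python
from typing import Any, Callable, Dict

# A context attribute signature: attribute name -> predicate over its value.
Signature = Dict[str, Callable[[Any], bool]]

def satisfies(signature: Signature, scene_attributes: Dict[str, Any]) -> bool:
    """True when every condition in the context's signature is met by the scene."""
    for name, condition in signature.items():
        if name not in scene_attributes or not condition(scene_attributes[name]):
            return False
    return True

# Example: a gaming context that requires at least one recognized player in the scene
# (hypothetical attribute names).
gaming_signature: Signature = {
    "gaming:recognized_players": lambda n: n >= 1,
    "global:location_known": lambda known: known is True,
}

scene = {"gaming:recognized_players": 2, "global:location_known": True}
assert satisfies(gaming_signature, scene)   # the gaming context pertains to this scene
```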

When the attributes of a target object match a context 232, the target object could be a real-world element such as a person, a vending machine, a kiosk, a sign, or, in a gaming scenario, a game goal. When the game goal is captured or sensed, object recognition engine 260 recognizes it, causing platform 200 to instruct the AR-capable device to present an AR object 242 as a reward. Recognized real-world elements can thus be associated with corresponding contexts 232 using derived environment attributes and AR object attributes. Once one or more contexts 232 are determined to pertain to the recognized elements, object recognition engine 260 can scan through the available AR objects 242 to determine which of them relate to those contexts. Here, the available AR objects are intended to encompass all AR objects, or augmented realities, currently accessible in the AR reality.

Several points are worth noting. AR repository 240 may contain a large number of AR objects 242 that could be accessed via different contexts 232. The available AR objects 242 represent only the subset of all AR objects 242 in the AR reality that are accessible through authorized access to contexts 232, assuming proper authentication. Further, the relevant AR objects 242 represent a fraction of the available AR objects 242: the context-related objects make up the set of relevant AR objects 242. It should also be appreciated that member objects of the relevant AR object set may or may not be presented individually; the set's member objects are presented according to a derived interference among elements of the scene.

One should also note that AR objects 242 need not be stored in memory 230; memory 230 could instead store AR object attributes 244. Platform 200 can then determine which AR objects 242, if any, are relevant to a context 232 pertaining to the AR-capable device's current environment, and can consult AR object addressing agent (AOAA) 220 to determine the addresses of the relevant AR objects, which might reside on remote nodes.

AOAA 220 can derive an AR object address through many methods. In embodiments where platform 200 has greater memory capacity, the AR object attributes 244 may simply include the address from which AR object 242 can be retrieved, subject to authorization or authentication. Although AOAA 220 is shown as part of platform 200, the functionality of AOAA 220 or of object recognition engine 260 can also reside in other devices, networking nodes, AR-capable devices (e.g., mobile phones, vehicles, etc.), or other components of the networking fabric.

Another method by which AOAA 220 can derive an AR object address involves converting at least some of the attributes (e.g., environment attributes, derived attributes, target object attributes, context attributes, etc.) directly into an address within an address space. The attributes derived from the environment data can be quantified and converted into an attribute vector, possibly based on the standardized namespaces discussed previously. The vector is then run through a deterministic operation, such as a hash function, to generate a hash value, where the hash space represents an address space. Networking nodes, AR objects 242, contexts 232, and other items within the ecosystem can each be assigned an address within the hash space, and AR objects 242 can be stored on nodes whose addresses are close to the AR objects' addresses. When a networking node generates an address, it forwards a request for the corresponding AR object to a neighboring node whose address is closer to the AR object's address. These addressing methods are similar to those used in peer-to-peer file sharing protocols; applying such distributed addressing techniques to AR objects within a networking infrastructure environment is considered part of the inventive subject matter.
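
To make the hash-based addressing concrete, the following sketch (a simplification; the disclosure does not mandate SHA-256 or any particular hash or routing rule) converts a normalized attribute vector into a deterministic address and picks the node whose own address is numerically closest, in the spirit of distributed hash tables:

```python
import hashlib
import json
from typing import Dict, List

def attribute_address(attributes: Dict[str, object]) -> int:
    """Deterministically map an attribute vector into a shared hash address space."""
    canonical = json.dumps(attributes, sort_keys=True, default=str)  # stable ordering
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return int(digest, 16)

def closest_node(object_address: int, node_addresses: List[int]) -> int:
    """Pick the networking node whose address is nearest the AR object's address."""
    return min(node_addresses, key=lambda node: abs(node - object_address))

# Hypothetical attribute vector for an AR object in a shopping context.
attrs = {"global:object_type": "coupon", "shopping:store_id": "store-42"}
addr = attribute_address(attrs)

# Toy node addresses derived the same way; a request would be forwarded toward
# the node whose address is closest to the object's address.
nodes = [attribute_address({"node": i}) for i in range(5)]
target_node = closest_node(addr, nodes)
```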

Platform 200 can use different functions to create addresses that distinguish among augmented realities. A first function, such as a hash or another type of function, could derive a first portion of an address representing an augmented reality, while a second function generates a second portion representing the AR object address within that reality. In some embodiments the first portion could be a prefix (e.g., a domain name, a DOI prefix, etc.) and the second portion a suffix (e.g., a URL path, a DOI suffix, etc.). A context can likewise be represented within the addressing scheme, prefix, suffix, or another address extension. An address for an AR object may thus take a multi-part form such as “www.<…>.com/<…>/<…>”, where each set of angle brackets (<>) indicates one portion of the multi-part AR object address.

Yet another addressing approach could convert environment attributes or other attributes into a network address where AR object 242 resides, for example by using the attributes to index into a lookup table, shared among the nodes, that maps AR objects 242 to their corresponding addresses. The network address can be a domain name, a URL, or an address within a hash space.

Regardless of the addressing scheme used, AR object addresses generated by AOAA 220 preferably point to the location of a corresponding AR object 242 within one of AR repositories 240, even when the repositories or objects are not located on hosting platform 200. The AR object address can be obtained directly or derived indirectly from real-world objects recognized as target objects, from contexts 232, or from other elements carrying attribute information, as discussed above. Examples of AR object addresses include a domain name, a URL, an IP address, a UUID, a GUID, or another type of address. In some embodiments each AR object 242 is assigned its own IP address (e.g., IPv4, IPv6, etc.) and is directly addressable via one or more protocols, such as DNS, HTTP, FTP, SSL, SSH, etc. In an IPv6 environment, for example, each AR object 242 could have its own IP address, and could even have its own domain name and corresponding URLs. In such embodiments AR objects 242 can be found using known techniques, including DNS, name servers, or other address resolution methods.

Hosting platform 200 can be viewed as a networking node or server, but the functionality of hosting platform 200, as represented by its components, can also be integrated into AR-capable devices (e.g., mobile devices, tablets, cell phones, etc.). For example, a cell phone can be configured with one or more modules (e.g., software instructions, hardware, etc.) offering the capabilities of AOAA 220, object recognition engine 260, device interface 215, or other components. In such a case, device interface 215 can take the form of a set of APIs through which the phone exchanges data with hosting platform 200 and its components. In other embodiments, AR-capable devices share the roles and responsibilities of hosting platform 200 with other devices: a cell phone could use a local object recognition engine 260 to recognize common objects (e.g., faces, money, bar codes, etc.) while transmitting the digital representation of the scene to a more powerful remote object recognition engine 260 able to recognize specific objects, such as a particular person's face within a context.

“Element Interference”

FIG. 3 illustrates element interference in more detail. An AR hosting platform obtains digital representation 334, preferably via a device interface. Digital representation 334 comprises data representing at least a portion of a real-world scene, where the scene can include real-world elements, virtual objects, or AR objects. As described previously, one or more remote sensors or devices can capture the scene-related data forming digital representation 334. Digital representation 334 may include raw sensor data, pre-processed or post-processed information, and other types of data, depending on the processing capabilities of the originating devices.

The hosting platform analyzes digital representation 334 to recognize one or more elements 390 of the scene as target objects. In the example shown, elements 390A through 390B are recognized as identified elements. Recognizing elements 390 can include distinguishing among target objects, identifying an element as a specific object (e.g., a car versus a specific vehicle), interpreting an element (e.g., optical character recognition, logos, bar codes, symbols, etc.), or otherwise determining that an element 390 corresponds to a target object, at least to within a desired confidence level.

The target object need not correspond to a specific element in the scene. Element 390A could be a particular person's face, while the target object is a generic object representing a face. In some cases, element 390A may be identified as more than one target object; continuing the example, element 390A could be recognized as a hierarchy of multiple objects, such as a human object, a male object, a face object, an eye object, an iris object, and an iris-based identification object.

Preferably, the hosting platform recognizes at least one element in the scene, such as element 390A, as a target object. Other elements 390 that are not real-world elements can also be recognized as target objects. Element 390B, for example, could be a virtual object whose image was captured within digital representation 334, or even an object beyond human perception, such as radio waves or network traffic congestion.

The hosting platform analyzes the one or more recognized elements 390 to determine a context 332 pertaining to the identified target objects. Multiple factors bear on determining which of contexts 332A or 332B is most relevant to a scene. Context 332A, for example, carries an attribute signature indicating when context 332A is likely to apply to a scene. The signature can be compared against the attributes of recognized element 390A; if the attributes of recognized element 390A sufficiently match the signature, then context 332A can be considered to pertain at least to the identified target objects and to the scene represented by digital representation 334. Although context 332A is the only context shown as pertaining to recognized element 390A in FIG. 3, context 332A could be one of many contexts 332 that pertain to the recognized target objects or the scene, based on all of the digitally available information.

Contexts 332 may be defined a priori or generated automatically. An entity (e.g., an AR object publisher) can define appropriate contexts through a context definition interface, entering context attributes, signatures, or other information to create a context object. A context can also be generated by the hosting platform itself; for example, when elements 390A are identified within digital representation 334, an individual can instruct the platform to convert the attributes of those elements into a context signature.

Context 332A may also include an attribute vector, labeled PV, representing attributes or properties of scene elements 390 that are relevant to the context for determining interference among elements 390, whether or not those elements are recognized. The attribute vector is a subtle concept and should not be confused with the context attribute signature: the attribute vector is a data structure listing the attributes relevant to deriving interference, and it can operate as element selection criteria. It should be noted that a recognized element 390A might satisfy a context signature and yet not align with the context's attribute vector. For example, a person could be recognized as a target object and identified as an AR game player; the person's presence might help determine that a gaming context applies to them or to the scene, yet the player might not contribute to the interference among other gaming objects in the scene.

Context attribute vectors may also be aligned within one or more normalized namespaces. Each member of the vector may include an attribute name and possibly values, and attributes can be multi-valued objects. Using a common namespace is advantageous because it eases mapping between elements and contexts.

Context 332A may also include an interference function, labeled F1, describing how elements 390 in a scene interact with one another, with respect to one or more element attributes, to increase or decrease the presence of AR objects. The interference function is preferably a function of AR objects 342 and of the attribute vector. AR objects 342 represent the AR objects available to participate in the scene and associated with context 332A; as discussed above, AR objects 342 are identified by comparing the object attributes of AR objects 342 with the context attributes of contexts 332.

The element attributes, AR object attributes, and other attributes on which interference is based can span a wide variety of attributes. Extending the electromagnetic-wave metaphor, interference between waves depends on many factors (e.g., frequency, phase, amplitude, etc.) at a particular point in space; likewise, in more preferred embodiments, elements 390 give rise to interference based on their locations. Digital representation 334 may include location information (e.g., triangulation, GPS coordinates, positions relative to other elements 390 in the scene, etc.), and the interference function can depend on that location information, which can refer to or reflect the actual physical locations of elements in the real world. The interference among elements 390 can also depend on time, in which case digital representation 334 includes time information related to elements 390 or to the scene in general.

The interference function can be used to generate derived interference 350. The auto-generated interference criteria of derived interference 350 determine which AR objects 342 are relevant to the context (context-relevant AR objects 346) and how those context-relevant AR objects should be presented within an augmented reality experience based on interference among elements 390. The interference criteria of derived interference 350 are criteria derived from the properties of elements 390 (i.e., attribute values); the AR objects relevant to context 346 are those AR objects 342 that satisfy the interference criteria. The interference function can be considered analogous to electromagnetic wave interference, where a combined effect is calculated by summing amplitudes while taking various properties into account (e.g., phase, time, frequency, location, etc.). In the simplistic example presented, the interference criteria are derived on an attribute-by-attribute basis by summing the values of corresponding attributes across scene elements 390: summing a given attribute over all elements yields a first criterion for that attribute. A member of the context-relevant AR objects 346 is an AR object 342 whose attribute values satisfy the interference criteria.
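
A minimal numerical sketch of the attribute-by-attribute summation described above (illustrative only; the disclosure allows arbitrarily complex interference functions, and the attribute names and weights below are invented): scene elements contribute signed values per attribute, the sums become the interference criteria, and each candidate AR object receives a satisfaction level against those criteria.

```python
from collections import defaultdict
from typing import Dict, List

Attributes = Dict[str, float]

def derive_interference(elements: List[Attributes]) -> Attributes:
    """Sum each attribute over all scene elements to form interference criteria.
    Positive sums act constructively, negative sums destructively."""
    criteria: Attributes = defaultdict(float)
    for element in elements:
        for name, value in element.items():
            criteria[name] += value
    return dict(criteria)

def satisfaction(ar_object: Attributes, criteria: Attributes) -> float:
    """Crude satisfaction level: alignment of the object's attributes with the criteria.
    (A real platform could use thresholds, weights, or any other algorithm.)"""
    return sum(ar_object.get(name, 0.0) * required
               for name, required in criteria.items())

scene_elements = [
    {"shopping:promo_weight": 1.0, "global:crowding": 0.2},
    {"shopping:promo_weight": 1.0, "global:crowding": 0.3},
    {"shopping:promo_weight": -0.5, "global:crowding": 0.4},  # destructively interfering element
]
criteria = derive_interference(scene_elements)   # promo_weight sums to 1.5 (constructive)
coupon = {"shopping:promo_weight": 1.0, "global:crowding": 0.0}
level = satisfaction(coupon, criteria)           # positive -> enhanced presence for the coupon
```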

Although the example of derived interference 350 is based on a simple summation, the interference function can be arbitrarily complex. The resulting interference should yield, for each object, a satisfaction level with respect to the interference criteria. The satisfaction level is a measure of the presence each relevant AR object 346 should have within the augmented reality. A satisfaction level can be calculated by whatever algorithm best reflects the utility of context 332A; it might, for example, be based on how many optional conditions or requirements of the interference criteria are met, on a normalized measure of the extent to which object attributes exceed or fall below criteria thresholds, or on other algorithms. The satisfaction level can then be used to instruct a remote AR-capable device how to interact with the relevant AR objects 346.

The above description assumes that elements in a scene can interfere with one another to affect interactions with relevant AR objects 346. The interference can be calculated based on all elements 390 identified in the scene, on a portion of those elements, or even on a single element. The interfering elements can also include the relevant AR objects 346 with which an individual interacts; thus relevant AR objects 346 can themselves contribute to the interference and affect their own presence, much as two interfering electromagnetic waves produce a combined effect (e.g., interference patterns, amplitudes, etc.).

Digital representation 334 can reflect changing circumstances within a scene; it can change with time, even in real time, which means that contexts 332 can change as well. Changes in contexts can cause derived interference 350 to change (time-dependent interference), which in turn can change the set of relevant AR objects 346, and those changes can be propagated back to remote AR-capable devices. In some embodiments, relevant AR objects 346 have a degree of temporal persistence to allow smooth transitions between context states: a relevant AR object 346 could remain present, and remain relevant, for a period of time even after its supporting context 332 is no longer present, or a context 332 might have to remain relevant for a period of time before relevant AR objects 346 are presented. Such an approach is considered advantageous from a usability perspective.

While the discussion above describes generating derived interference 350 based on contexts 332, it should be noted that derived interference 350 can also be affected by changes among contexts 332. Contexts 332 can change with the scene and can even shift focus from a first context to a second context. One aspect of the inventive subject matter therefore includes configuring AR-capable devices to allow interactions with context-relevant AR objects 346 as a function of changes among contexts 332, preferably as determined via derived interference 350. For example, an individual may participate in an augmented reality experience associated with a gaming context; if the individual chooses to purchase a product, the experience may also take on a shopping context. The shift in focus between the gaming context and the shopping context can adjust derived interference 350 to bring forth additional AR content. Alternatively, the individual's AR experience might be unaffected by a shift in focus from a gaming context to a traveling context; a shift in focus can mean that a context 332 is retained rather than discarded in favor of a new context 332. The degree to which a context 332 relates to a scene can be used as a measure of context focus.

“Interference-Based Presentation”

FIG. 4 illustrates interference-based presentation. Mobile devices 410A and 410B both capture digital representations of a scene comprising real-world elements 490. An AR hosting platform recognizes elements 490, determines a pertinent context, and identifies the relevant AR objects related to that context. Relevant AR objects 446A and 446B are members of the relevant AR object set.

The AR hosting platform derives an interference based on elements 490 to determine whether they destructively or constructively interfere with respect to a particular context. On mobile device 410A, relevant AR object 446A is presented with an enhanced presence due to constructive interference among elements 490, indicating that it likely has a high satisfaction level with respect to the interference criteria. Mobile device 410B captures a digital representation of the same scene with elements 490, but for its context, relevant AR object 446B suffers a suppressed presence because of destructive interference among elements 490: relevant AR object 446B has been negatively or weakly affected by elements 490 and likely has a low or negative satisfaction level with respect to the interference criteria.

One should also note that relevant AR object 446B could be the same object as relevant AR object 446A; based on relative satisfaction levels, however, the two mobile devices provide different augmented reality experiences. One possible reason for the difference could be user identification, where information specific to each user is incorporated into the digital representation of the scene and thereby alters its contexts.

Enhanced presence and suppressed presence can take many forms, depending on the nature of relevant AR objects 446A and 446B and on other factors related to the scene. In a simple case, presence could be whether the relevant AR object is presented at all (enhanced) or not (suppressed), but presence can cover a broad spectrum of experiences. Consider a visual image of relevant AR object 446A: to indicate enhanced presence, the visual image can be overlaid on an image of the scene as opaque, covering elements 490, while the visual image of relevant AR object 446B might be rendered transparent to indicate a suppressed presence. Similarly, audio content of relevant AR objects 446A or 446B can be played at volume levels determined by each object's satisfaction level with respect to the interference criteria. Presence can be enhanced or suppressed for any human sense modality, subject to the presentation capabilities of the AR-capable devices.
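
One plausible (not prescribed) way to translate a satisfaction level into presentation behavior is to normalize it into a 0–1 "presence" value and drive overlay opacity, audio volume, and interaction availability from it, as in the following sketch with invented ranges and thresholds:

```python
def presence_from_satisfaction(level: float, low: float = -1.0, high: float = 1.0) -> float:
    """Clamp a satisfaction level into [low, high] and normalize it to 0..1."""
    clamped = max(low, min(high, level))
    return (clamped - low) / (high - low)

def render_hint(level: float) -> dict:
    """Presentation hints an AR-capable device could apply to one relevant AR object."""
    presence = presence_from_satisfaction(level)
    return {
        "overlay_opacity": presence,          # 1.0 = fully opaque (enhanced presence)
        "audio_volume": presence,             # 0.0 = muted (suppressed presence)
        "interactions_enabled": presence > 0.5,
    }

print(render_hint(0.9))    # strongly enhanced AR object (e.g., object 446A)
print(render_hint(-0.7))   # strongly suppressed AR object (e.g., object 446B)
```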

Presence can also extend beyond the human sense modalities. Functionality associated with relevant AR objects 446A and 446B can be affected by enhanced or suppressed presence: interactions between mobile devices 410A or 410B and relevant AR objects 446A and 446B can be controlled based on satisfaction levels. Certain features of relevant AR object 446A might be turned on or made available, while features of relevant AR object 446B could be disabled or turned off.

“Use Case: Gaming and Promotional Contexts”

FIG. 5 illustrates a use case of the disclosed inventive subject matter involving gaming and promotional contexts. In this example, multiple people use their cell phones as mobile devices 510 to compete for promotions on products within a store. Each mobile device 510 captures data associated with scene 595, which includes the products in the store, and submits its own digital representation 534 of the scene to hosting platform 500, possibly captured via its own sensors 530. Digital representations 534 can include images of the products as well as information about the mobile devices (e.g., device identification, location, orientation, position, etc.) and other information, such as user identification. Object recognition engine 560 uses digital representations 534 to identify the products in the store and recognize them as target objects represented by elements 590, where elements 590 correspond to known target objects matching the store's products. Object recognition engine 560 uses the contexts 532 that pertain to scene 595 to derive interference 550; contexts 532 can include a shopping context or even a store-specific context. Object recognition engine 560 then uses interference 550 to identify relevant AR objects 546 from among the AR objects available in AR object repository 542. Platform 500 configures mobile device 510A to allow interactions with relevant AR objects 546A.

FIG. 5 presents an example of an augmented reality in which people work together to affect the augmented environment around them. When a group of friends, acquaintances, or otherwise related people converges on a store, their presence can unlock a number of promotions (e.g., sales, coupons, prizes, incentives, discounts, etc.), while members of opposing teams can block the availability of promotions to the other team and increase the availability of promotions for their own team. In the example shown, the owner of mobile device 510A is a member of a winning team and has won a coupon for a product. The team still lacks sufficient members present to unlock another promotion, so the team members are asked to invite more friends. Meanwhile, the AR object message bubble for a second product has a suppressed presence (i.e., the promotion is not available) because of destructive interference from members of the opposing team.

In the simplified example of FIG. 5, interference 550 can be described by the relative number of team members present: the number of Team A members minus the number of Team B members. When the value is high, Team A wins; when the value is low, Team B wins. Platform 500 analyzes the AR object attributes with respect to the team members present, then instructs mobile device 510A to allow interactions with relevant AR objects 546A based on interference 550, as reflected in each object's satisfaction level. The presence of relevant AR objects 546A can change as team members enter or leave the scene.
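
The team-based interference of FIG. 5 reduces to simple arithmetic. The sketch below (with a made-up unlock threshold and team sizes) shows how a promotion's availability could flip as members arrive or leave:

```python
def team_interference(team_a_count: int, team_b_count: int) -> int:
    """Interference value for the promotion: Team A members minus Team B members."""
    return team_a_count - team_b_count

def promotion_available(interference: int, threshold: int = 3) -> bool:
    """Hypothetical rule: the promotion unlocks once interference reaches a threshold."""
    return interference >= threshold

print(promotion_available(team_interference(5, 1)))  # True: Team A dominates the scene
print(promotion_available(team_interference(2, 4)))  # False: the opposing team suppresses it
```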

The promotion and gaming use case is just one example of context derivation and interference. Another, similar example could involve a medical context in which medical equipment is present (e.g., X-ray machines, MRIs, dentist chairs, etc.), along with a doctor, a patient, and AR-based medical objects. When a patient enters a doctor's office and the doctor is present, the doctor can use a tablet or pad-based computing device (e.g., iPad, Xoom, PlayBook, etc.). The presence of the doctor and patient together constructively interferes to bring forth the patient's AR-based medical records on the pad; if the patient and doctor are in the presence of others, those other individuals can destructively interfere to suppress the AR-based medical records.

“Use Case: Object-Based Message Boards”

FIG. 6 illustrates another use case, in which hosting platform 600 allows annotation of specific objects in scene 695. Multiple people use mobile devices 610 to capture digital representations of scene 695 via sensors 630. As indicated by the geometric shapes, the scene contains multiple objects. Platform 600 attempts to recognize the objects and the context pertaining to scene 695, then identifies the AR objects 646 relevant to the recognized objects. In this example, relevant AR object 646A is a message board with messages, where the messages and the board are bound to a specific recognized object. It should be noted that messages can also be bound to objects other than that particular object.

One should appreciate that the messages displayed on the message board are made available based on the context of scene 695 and on interference among elements of the scene. In the example, the owner of mobile device 610A previously attached a personal message to the real-world object; because the device owner and the object are present together, the personal message can be presented. The message board also lists messages directed to the general public. AR object 646A can be accessed through message exchanges from mobile devices and other AR-capable devices, even though AR object 646A represents a message bound to a real-world item. AR objects 646A can be presented as individual messages or as a message board bound to a particular element recognized as a target object. As previously discussed, AR objects 646 can also include purchasable products, promotions (e.g., coupons, prizes, incentives, sales, discounts, etc.), content (e.g., images, video, audio, etc.), rewards for achievements, tokens, clues to a game or puzzle, unlocked augmented reality experiences, application data, reviews, or other types of content.

Interactions

Once the AR hosting platform has identified the AR objects relevant to a particular context, the platform instructs the mobile device, or other AR-capable device, to allow interaction with those AR objects. In some embodiments the AR objects are hyperlinked, so that a user can click or otherwise select an AR object and the device then accesses additional information over the network. AR objects can also include instructions that are copied to the tangible memory of the device, where the instructions direct the mobile device in how to interact with the AR object or with remote computing devices. Contemplated interactions include many possible interactions between and among AR-capable devices and AR objects.

One interaction of particular interest includes allowing an AR-capable device to participate in a transaction with a commerce engine. A commercial transaction can include selecting, purchasing, or otherwise monetizing interactions with member objects of the set of relevant AR objects, and can involve interacting with one or more online accounts. The AR objects of interest can carry instructions or other information that allow the device and remote servers to communicate with each other, which removes the need for the device itself to understand financial protocols in order to complete the transaction. Other examples of commercial transactions include interacting with frequent flyer mile programs, exchanging virtual currencies in an online world, transferring funds from one account to another, paying tolls, interacting with a point-of-sale device, paying utilities or taxes, and other interactions having monetary value.

Another contemplated type of interaction includes managing AR objects. In more common embodiments, AR content consumers are people with cell phones, and such consumers have many different needs with respect to managing AR objects. For example, a cell phone can operate as an AR object hub through which a user interacts with other nearby phones to create an augmented reality experience. An AR-capable phone, or another device, operating as an AR object hub can also store or protect permanent AR objects that are bound to the person or to their cell phone. Management interactions can include monitoring AR objects, granting authorization, and establishing alerts or notifications.

Management interactions can also apply to publishers or creators of augmented reality content, with AR-capable devices operating as content-creation utilities. Interactions can therefore also include allowing an AR-capable device to manage AR objects on behalf of a publisher. Publishers can use AR-capable devices, including their own mobile phones, to create and interact with AR objects, including creating AR objects from elements in a scene, populating scenes with AR objects, managing revisions or versions of AR objects, debugging AR object creations, publishing AR objects for consumption, binding AR objects to real-world elements or to other AR objects, and establishing interference functions for contexts or scenes.

It should be apparent to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except within the scope of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms "comprises" and "comprising" should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C . . . and N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc.

Summary for “Interference-based augmented reality hosting platform”

All extrinsic materials discussed herein, including the references identified above, are incorporated by reference in their entirety. Where a definition or use of a term in an incorporated reference is inconsistent with the definition of that term provided herein, the definition provided herein applies and the definition of that term in the reference does not apply.

Unless the context indicates otherwise, all ranges set forth herein should be interpreted as being inclusive of their endpoints, and open-ended ranges should be interpreted to include only commercially practical values. Similarly, all lists of values should be considered as inclusive of intermediate values unless the context indicates the contrary.

Interestingly, existing approaches for providing augmented reality content treat augmented reality platforms as silos of objects or capabilities: each company builds its own hosting infrastructure to offer augmented reality services to its users. Such approaches fail to allow individuals to move seamlessly from one augmented reality to another as naturally as a person would move from one room of a building to another. Existing infrastructures also fail to treat augmented reality objects as distinct manageable objects, even though an augmented reality infrastructure could be made as pervasive as electricity, or more aptly as ubiquitous as internet connectivity, in the developed world. A similar treatment of augmented reality would be beneficial.

Even in a world of ubiquitous augmented realities, or of their associated augmented reality objects, with which individuals interact seamlessly, individuals still need to be presented with relevant augmented reality content, especially when features of the environment, virtual or real, can interfere with one another. The references mentioned above that present information based on a context fail to address interference among augmented realities or among elements, real or virtual, of an environment. At best, the known art simply forces individuals to select which elements they wish to experience, thereby avoiding interference with other elements of the augmented reality. The known art fails to appreciate that elements can interfere with one another based on their properties or attributes. Interference can be more than a mere filtering mechanism: interference is an ambient interaction among present or relevant elements, whether constructive or destructive, that gives rise to an augmented reality experience.

One or more augmented realities, or the augmented reality objects themselves, can be hosted by a common hosting infrastructure, such as the networking infrastructure itself, rather than being tied to a single dedicated hosting platform. The Applicant has realized, for example, that networking nodes within a network fabric can provide augmented reality objects or other virtual constructs to edge AR-capable devices (e.g., cell phones, kiosks, tablet computers, vehicles, smart phones, laptops, etc.). The fabric can determine, based on exchanged data, which augmented reality objects are most relevant for a device, or even which augmented reality itself is relevant, where the determination is derived from observed real-world elements and from interactions among edge devices or other devices. An augmented reality context can thus be used to determine how elements within a scene, or at a location, interact with one another to give rise to relevant augmented reality experiences.

Thus, there remains a need for interference-based augmented reality hosting platforms.

The inventive subject matter provides apparatus, systems, and methods in which an AR hosting platform can give rise to an augmented reality experience based on interference among elements of a digital representation of a scene. One aspect of the inventive subject matter includes an AR hosting platform comprising a mobile device interface through which the platform obtains a digital representation of a scene local to a mobile device (e.g., a cell phone, vehicle, tablet computer, etc.). The digital representation can include data representing one or more elements of the scene, and the data can include sensor data acquired by the mobile device, by other devices proximate to the scene, or by any device capable of capturing data related to the scene. The platform can further include an object recognition engine in communication with the mobile device interface that is able to analyze the digital representation, recognize elements of the scene as target objects, and determine a context pertaining to the scene and the target objects based on the digital representation. The engine can further identify relevant AR objects from a set of available AR objects based on an interference derived among the elements (real-world elements, virtual elements, etc.). In preferred embodiments, the derived interference forms criteria by which an AR experience is presented to a user via their mobile device. The object recognition engine can then configure the remote device to allow interaction with a member object of the relevant AR objects in accordance with the derived interference. In particularly preferred embodiments, the interaction involves participating in a commercial transaction with a commerce engine, for example purchasing the member object or a real-world object participating in the augmented reality.
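
As a rough, non-authoritative sketch of the data flow just described (the function names, the dictionary-based types, and the simple matching rules are assumptions made only for illustration), one possible organization is:

```python
# Hypothetical sketch of the described platform pipeline; all names and the
# simple matching rules are illustrative assumptions, not the implementation.
from typing import Any, Callable

Element = dict[str, Any]       # attributes of one recognized scene element
Context = dict[str, Any]       # {"signature": set[str], "interference": fn}
ARObject = dict[str, Any]      # {"attributes": {...}, ...}

def determine_context(targets: list[Element],
                      contexts: list[Context]) -> Context | None:
    """Pick the first context whose attribute signature the scene satisfies."""
    present = {key for target in targets for key in target}
    for context in contexts:
        if context["signature"] <= present:
            return context
    return None

def relevant_ar_objects(targets: list[Element], context: Context,
                        available: list[ARObject]) -> list[ARObject]:
    """Apply the context's interference function to filter available objects."""
    interfere: Callable[[list[Element], ARObject], float] = context["interference"]
    return [obj for obj in available if interfere(targets, obj) > 0.0]

def handle_digital_representation(targets: list[Element],
                                  contexts: list[Context],
                                  available: list[ARObject]) -> list[ARObject]:
    context = determine_context(targets, contexts)
    return [] if context is None else relevant_ar_objects(targets, context, available)
```

The essential ordering is the one described above: recognize target elements, determine a pertinent context, derive interference under that context, and only then select the relevant AR objects for presentation.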

Various objects, features, and aspects of the inventive subject matter will become more apparent from the following detailed description, along with the accompanying drawing figures in which like numerals represent like components.

It should be noted that while the following description is drawn to a computer/server-based augmented reality platform, various alternative configurations are also deemed suitable and may employ various computing devices including servers, interfaces, systems, databases, agents, peers, engines, controllers, or other types of computing devices operating individually or collectively. Such computing devices comprise a processor configured to execute software instructions stored on a tangible, non-transitory computer-readable storage medium (e.g., hard drive, solid state drive, RAM, flash, ROM, etc.). The software instructions preferably configure the computing device to provide the roles, responsibilities, and other functionality discussed below. In especially preferred embodiments, the various servers, systems, databases, and interfaces exchange data using standardized protocols or algorithms, possibly based on HTTPS, AES, public-private key exchanges, web service APIs, known financial transaction protocols, or other electronic information exchanging methods. Data exchanges are preferably conducted over a packet-switched network such as the Internet, a LAN, a WAN, a VPN, or another type of packet-switched network.

The disclosed techniques provide many advantageous technical effects, including providing an AR hosting infrastructure that allows remote devices to interact with AR objects, and that can determine the context in which an AR-capable device interacts with those AR objects.

The following discussion provides many example embodiments of the inventive subject matter. Although each embodiment represents a single combination of inventive elements, the inventive subject matter is considered to include all possible combinations of the disclosed elements. Thus, if one embodiment comprises elements A, B, and C, and a second embodiment comprises elements B and D, then the inventive subject matter is also considered to include the other remaining combinations of A, B, C, or D, even if not explicitly disclosed.

As used in this document, and unless the context requires otherwise, the term "coupled to" is intended to include both direct coupling, in which two coupled elements contact each other, and indirect coupling, in which at least one additional element is located between the two elements. Therefore, the terms "coupled to" and "coupled with" are used synonymously.

Overview

"AR object interference" can be considered to mirror, or otherwise simulate, interference among electromagnetic waves, such as light. Interference among waves occurs when two or more waves interact at a location or time in a manner that enhances their presence (i.e., constructive interference) or suppresses their presence (i.e., destructive interference) at that location. Interference among waves arises from the waves' interrelated properties, such as amplitude and phase. This metaphor of interference can be extended to augmented realities, where elements participating in an augmented reality can have properties that cause them to interfere with one another, enhancing or suppressing their presence.
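
For reference, the physics behind the metaphor can be stated concretely: for two waves of equal frequency with amplitudes A1 and A2 and phases φ1 and φ2, the combined amplitude satisfies

\[
A_{\text{total}}^{2} = A_{1}^{2} + A_{2}^{2} + 2 A_{1} A_{2}\cos(\varphi_{1}-\varphi_{2}),
\]

so that in-phase waves reinforce one another (constructive interference, A_total = A1 + A2) while waves half a cycle out of phase cancel (destructive interference, A_total = |A1 - A2|). Reading the metaphor onto the platform, element attribute values play a role analogous to amplitudes and their contextual alignment plays a role analogous to phase; that mapping is an interpretive gloss rather than a formula given by the disclosure.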

The following discussion presents the inventive subject matter within the context of networking nodes, or a networking fabric as a whole, operating as an AR hosting platform. Still, it should be kept in mind that the interference-based approach can also be applied to more traditional server implementations, where the servers operate on dedicated hardware or within a network cloud.

In FIG. 1, AR ecosystem 100 comprises a networking fabric 115 made up of multiple interconnected networking nodes 120 that form a communication fabric allowing edge devices 180 to exchange information across the fabric. Fabric 115 can also include one or more AR object repositories 140 storing AR objects 142, preferably as network-addressable databases. AR-capable devices 110 interact with fabric 115 by exchanging device data, which can include data representative of a local environment or scene; in particular, the device data can include a digital representation of a real-world scene. The digital representation can comprise sensor data, raw or preprocessed, acquired locally via sensors 130 or acquired from other sensor-enabled devices. For example, the digital representation can include sensor data obtained by a mobile device (e.g., cell phone, tablet computer, etc.), or even by multiple mobile devices. The infrastructure can include an AR device interface (e.g., a port, an API, a networking node) through which fabric 115 exchanges device data with AR-capable devices 110. Based on the exchanged device data or other environment data, networking nodes 120 can derive an address for an AR object 142 and cause one or more AR objects 142 to be presented at AR-capable device 110. AR-capable devices 110 are preferably able to interact with AR objects 142, including conducting transactions with commerce engine 190 (e.g., a banking system, credit card processing, frequent flyer mile exchange, reward programs, etc.).

AR-capable device 110 typically represents one of many types of edge devices 180 relative to networking fabric 115. Example AR-capable devices 110 include mobile devices, cell phones, gaming consoles, kiosks, vehicles (e.g., car, plane, bus, etc.), set-top boxes, portable computers, and other computing devices capable of presenting augmented content to a user. Augmented content is preferably presented according to a user's sense modalities (e.g., visual, audio, tactile, taste, smell). The augmented content can be converted among modalities to compensate for a user's disability; for example, visual AR objects 142 can be presented to a visually impaired person via a tactile presentation interface.

One or more sensors 130 can acquire environment data within a reasonable proximity of the user or of AR-capable device 110. Sensors 130 can include optical sensors, microphones, magnetometers, GPS receivers, thermometers, weather sensors, bio sensors, and other types of sensors. As shown, sensors 130 can be integrated with AR-capable device 110, but they can also be located remote from the device; for example, a satellite can carry sensors 130 that capture data relevant to the environment of AR-capable device 110.

In certain embodiments, sensors 130 collect data local to the user within a personal area network (PAN), where AR-capable device 110 operates as a sensor hub that consolidates the sensor data and shares it with networking fabric 115. For example, sensors 130 can be worn as part of the user's clothing, incorporated into their shoes, or placed on their head to collect brain signals. Sensors 130 can communicate with other elements of the PAN via wired or wireless connections (e.g., Bluetooth, WiGig, Wi-Fi, Zigbee, etc.). Example sensor data includes medical data, position data, orientation data, biometric data, image data, audio data, video data, temperature data, proximity data, acceleration data, or other data capable of being captured by a sensor. The digital representation of the scene can likewise include data from bio-sensors or other health-related sensors. Regardless of the type of data collected, the data can be used to form a digital representation of the scene comprising raw data, processed data, metadata, or other types of data.

Networking nodes 120 preferably obtain environment data relevant to a real-world scene associated with AR-capable device 110. The environment data can include a broad spectrum of data reflecting the real-world environment, including sensor data forming a digital representation of the scene within which AR-capable device 110 acquired the data. The environment data can also include external data obtained from sources other than AR-capable device 110, for example data from a weather station, a surveillance camera, another mobile phone, a web server, a radio station, a satellite, or any other source able to provide data about the environment.

The digital representation of a scene can include environment data that extends beyond sensor data. For example, the environment data can include AR data reflecting a currently presented augmented reality, such as the locations of AR-capable devices 110 relative to virtual objects. The environment data can also include information about the operation of AR-capable device 110 itself, including network metrics, user identification or demographics, installed software or firmware, and other types of environment information. Thus, the digital representation of the scene can be considered to encompass many aspects of the scene: its participants, its physical environment, and aspects beyond what the human eye can see (e.g., networking metrics).
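
A minimal sketch of what such a digital representation might carry is shown below; the field names and example values are purely hypothetical:

```python
# Hypothetical sketch of a digital representation spanning sensor data,
# current AR state, and device/operational information; all fields invented.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class DigitalRepresentation:
    # Raw or pre-processed sensor data from the device or nearby sensors.
    sensor_data: dict[str, Any] = field(default_factory=dict)
    # State of the currently presented augmented reality.
    ar_state: dict[str, Any] = field(default_factory=dict)
    # Operational information about the capturing device itself.
    device_info: dict[str, Any] = field(default_factory=dict)

rep = DigitalRepresentation(
    sensor_data={"image": "frame_0001.jpg", "gps": (34.05, -118.24),
                 "heart_rate_bpm": 72},
    ar_state={"visible_ar_objects": ["+"], "device_pose": (1.0, 2.0, 0.0)},
    device_info={"user_id": "user-17", "network_latency_ms": 42,
                 "installed_app_version": "3.2.1"},
)
print(rep.device_info["network_latency_ms"])
```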

Networking fabric 115 represents a cloud of interconnected networking nodes 120, which could be the Internet or a cloud-based computing infrastructure (e.g., Amazon EC2™, Google™, RackSpace™, etc.). What matters is that fabric 115 provides a general-purpose communication infrastructure through which edge devices 180 exchange data, while also operating as a platform through which one or more AR objects can be presented to AR-capable devices 110. Networking nodes 120 composing fabric 115 preferably comprise computing devices able to direct data traffic from one port to another; example nodes include routers, hubs, switches, gateways, firewalls, access points, and other devices capable of forwarding or routing traffic. The fabric can comprise a homogeneous mix or a heterogeneous mix of node types, and in some embodiments can extend into AR-capable devices 110 themselves, for example when an AR-capable device 110 operates as a sensor hub within a personal area network. Fabric 115 can also comprise one or more networks, including the Internet, a LAN, a WAN, a VPN, a WLAN, peer-to-peer networks, cloud-based systems, ad hoc networks, mesh networks, or other types of networks.

Fabric 115 preferably includes one or more AR object repositories 140 storing AR objects 142. AR objects 142 are preferably stored as distinct manageable objects that can be addressed by networking nodes 120, edge devices 180, commerce engine 190, AR-capable devices 110, or even by other AR objects. AR objects 142 preferably carry one or more object attributes, which are metadata representing information about the corresponding AR object 142. The object attributes can include information about object properties that could interfere with other properties within the context of an AR experience.

Object attributes can be bound to AR objects 142 as desired. In some embodiments, the object attributes conform to one or more standardized, normalized namespaces, which allows networking nodes 120, agents, AR-capable devices 110, or other components to compare one AR object with other objects in the system (e.g., contexts, AR objects, scene elements, etc.). A normalized namespace can be a global namespace applying to all elements, including AR objects 142, or object attributes can be defined within specific contexts: a gaming context might have its own namespace distinct from a shopping context or a traveling context, and each type of context can have its own namespaces or sub-namespaces, possibly identifying the AR content publisher. Thus a first publisher might assign attributes to its AR objects 142 according to its own proprietary namespace, while a second publisher uses a common, normalized gaming namespace.
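
For illustration, namespaced attributes might be modeled as prefixed keys, as in the hypothetical sketch below (the namespace and attribute names are invented for the example):

```python
# Illustrative only: hypothetical attribute names in a normalized "game"
# namespace alongside a shopping namespace and a publisher-proprietary one.
ar_object_attributes = {
    "game.team": "A",                 # normalized gaming-context attribute
    "game.promotion_threshold": 5,    # normalized gaming-context attribute
    "shop.price_usd": 4.99,           # normalized shopping-context attribute
    "acme.internal_sku": "X-1138",    # publisher-proprietary namespace
}

def in_namespace(attrs: dict, namespace: str) -> dict:
    """Project an attribute dictionary onto a single namespace."""
    prefix = namespace + "."
    return {k[len(prefix):]: v for k, v in attrs.items() if k.startswith(prefix)}

print(in_namespace(ar_object_attributes, "game"))
# {'team': 'A', 'promotion_threshold': 5}
```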

Contexts can take many forms and can be as specific or as generalized as desired. AR ecosystem 100 preferably treats contexts as manageable objects: a context object can be copied or moved from one networking node 120 to another so that each node 120 has local access to the contexts most relevant to it. Contexts can be assigned names, identifiers, or other context attributes representing metadata about the context or its use. Example context attributes include a context name, an identifier, a URL, the context owner, the context publisher or author, a context revision, and other information. A context object also preferably includes an attribute signature quantifying how relevant the context is to a scene or to elements within the scene; the signature can be represented by criteria or rules operating on attributes within the normalized attribute namespaces. Contexts can further be classified into one or more types, for example gaming contexts, shopping contexts, traveling contexts, working contexts (e.g., job, occupation, activity), entertainment contexts, or other categories.

AR objects 142 can remain resident within their repositories 140 without requiring direct queries against them. In some embodiments, repositories 140 distribute the object attributes of AR objects 142 among networking nodes 120, allowing the nodes to compare attributes derived from the digital representation of a scene against known AR object attributes to determine whether a given AR object 142 is relevant to a current context. The relevant AR object 142 can then be located by obtaining its address or by deriving an address from the digital representation of the scene; the various approaches to addressing AR objects are discussed further below.

AR repositories 140 are illustrated as separate databases within networking fabric 115. AR objects 142 can indeed be stored in separate databases, for example where a publisher or vendor of AR objects 142 wishes to retain control over its objects and grant access to its repository for a fee. AR objects can also be mixed together within general-purpose repositories, and AR objects 142 can migrate from one repository 140, or one networking node 120, to another based on aggregated contexts derived from multiple scenes. Furthermore, one or more AR repositories 140 can form a distributed repository in which AR objects 142 are spread across multiple components of the system; a single AR repository 140 could, for example, be spread across the memories of multiple networking nodes 120, with each portion of the repository addressable within a common address space regardless of where it is located.

Once AR object 142 has been identified, a networking node 120 can retrieve the object from AR repository 140 and make it available to AR-capable devices 110, which can then be configured to interact with AR object 142 according to its design, the context, or the derived interference. In the example shown, the AR object labeled "+" is presented on two different AR-capable devices 110, illustrating that AR objects can be commonly shared and concurrently presented when the devices share similar contexts, perhaps because the AR players share a common game or AR experience. Another AR object, labeled "k", is presented on only one AR-capable device 110, illustrating that its presentation can depend on its own context, possibly based on user identity, preferences, authorization, authentication, interference among other elements of AR ecosystem 100, or other attributes.

Networking nodes 120 preferably enable AR-capable devices 110 to interact with AR objects 142. The interaction can include presenting AR object 142 via a display, speaker, tactile interface, or other interface, depending on the nature of AR object 142. AR objects 142 can also carry executable instructions that can be executed on AR-capable device 110 or on networking nodes 120, where the instructions represent functionality of AR object 142. For example, when a person is near a vending machine, a corresponding AR object 142 can be presented to the user as a purchasable product, and the networking node can house the functionality for conducting a transaction with the vending machine; alternatively, AR object 142 itself can contain the transaction functionality. Such functionality can be associated with AR-capable device 110, remote servers or services, or any other suitably configured device. Once the person leaves the area, the networking node can remove the installed code or AR object 142 based on new contexts or on changes to existing contexts.

Although AR objects 142 are illustrated in a visual format, AR objects can also comprise audio or other formats compatible with the human senses. Moreover, AR objects 142 can represent objects that are not directly accessible to the human senses, with their features converted into a form that is. For example, an AR object 142 could instruct AR-capable device 110 to display an otherwise invisible temperature gradient overlaid on a real-world image of a landscape, where the temperature contours are derived from an array of sensors 130 located within AR-capable device 110 or near the landscape.

Hosting Platform

FIG. 2 presents an example hosting platform 200. Hosting platform 200 is illustrated as a networking switch, but other types of infrastructure can also serve as the hosting platform; servers, routers, hubs, name servers, proxy servers, access points, hot spots, and other devices capable of operating as a computing device within a network environment can all be adapted to the inventive subject matter. In preferred embodiments, hosting platform 200 is a networking device capable of receiving packets and forwarding them to their destination, regardless of whether the packets are associated with an augmented reality. The inventive subject matter is also applicable to more traditional computing devices, including servers, clients, peers, handheld gaming devices, and other types of computing devices.

Hosting platform 200 includes a device interface 215 through which hosting platform 200, or the fabric as a whole, interfaces with AR-capable devices. In embodiments where hosting platform 200 is a networking switch, device interface 215 can comprise one or more ports on the switch. The ports can include wired ports (e.g., Ethernet, optical fiber, serial, USB, Firewire, HiGig, SerDes, PCI, XAUI, or other ports requiring a physical connection), although a wired port need not be directly connected to the AR-capable device itself. The ports can also include one or more wireless ports (e.g., WUSB, 802.11, WiGig, WiMAX, GSM, CDMA, LTE, UWB, near field radio, laser, Zigbee, etc.). Device interface 215 can further comprise one or more logical ports, for example AR-related URLs or APIs hosted within the networking node or within the cloud, through which hosting platform 200 allows AR-capable devices to access AR features.

Memory 230 can store one or more contexts 232 representing known scenarios relevant to AR objects 242. Contexts 232 can also be treated as manageable objects having context attribute signatures that describe the criteria to be satisfied for a particular context 232 to pertain to a scene. Hosting platform 200 can analyze the digital representation of the scene and generate attributes corresponding to recognized elements within the scene, which can then be compared against the signatures. Techniques for working with contexts that can be suitably adapted include those described in U.S. Patent Application Publication 2010/0257252 to Dougherty, titled "Augmented Reality Cloud Computing", filed Apr. 1, 2009, and in the other context-based references previously cited.

Hosting platform 200 can also include object recognition engine 260, which can operate as an Object Recognition-by-Context Service (ORCS) capable of recognizing real-world elements of a scene as target objects based on the scene's digital representation. An AR-capable device, or other sensing devices, can provide the digital data forming the digital representation of a scene that includes one or more real-world elements; the digital data can include image data, medical data, position data, orientation data, haptic data, biometric data, and other types of data representing the scene.

Object recognition engine 260 can employ one or more algorithms to recognize elements of a scene, and through ORCS can recognize those elements as target objects. The elements of a scene can be real-world elements or virtual elements, and can carry attributes derived from analysis of the digital representation as well as attributes obtained from known target objects. In some instances, object attributes 244 include target attributes associated with known target objects (e.g., buildings, plants, people, etc.). Example attributes include features of objects in the scene such as color, shape, size, speech modulations, words, iris features, audio frequencies, image resolution, or other types of features. Acceptable recognition algorithms include SIFT, SURF, and ViPR, and acceptable techniques for processing data to identify target objects are described in U.S. Pat. Nos. 7,016,532; 7,477,780; 7,680,324; 7,565,008; and 7,564,469.

Object recognition engine 260 can use environment attributes (e.g., known target object attributes, derived attributes, etc.) to recognize one or more objects within a real-world scene. Once a target object has been identified, the object information associated with it can be used to determine whether any contexts 232 pertain to that target. The determination can be made by comparing the attributes of the target object, and other environment attributes, against the context attribute signatures. Because the various types of objects in the system (e.g., AR objects, contexts, elements, target objects, interferences, etc.) can carry attributes within aligned namespaces, the comparison can be performed as a simple lookup. The comparison can also verify that a context attribute signature is satisfied, where satisfaction is based on how the attribute values relate to the requirements or optional conditions of the signature. For example, a gaming context might require that at least one player be recognized within the scene for the context to be considered to pertain to that scene.
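
A minimal sketch of such a signature check, under the assumption that a signature simply lists the minimum number of each target-object type that must be recognized (an assumption made here for illustration, not a requirement of the disclosure), could look like:

```python
# Hypothetical sketch of context-signature satisfaction; the requirement
# format and the example "gaming" signature are illustrative assumptions.
def signature_satisfied(signature: dict, recognized: list[dict]) -> bool:
    """A signature maps a target-object type to the minimum count required."""
    counts: dict[str, int] = {}
    for target in recognized:
        counts[target["type"]] = counts.get(target["type"], 0) + 1
    return all(counts.get(obj_type, 0) >= minimum
               for obj_type, minimum in signature.items())

gaming_signature = {"player": 1}             # at least one recognized player
recognized_targets = [{"type": "player", "id": "user-17"},
                      {"type": "vending_machine", "id": "vm-3"}]
print(signature_satisfied(gaming_signature, recognized_targets))  # True
```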

The attributes of a target object can match those of a context 232. The target object can be a real-world element such as a person, a vending machine, a kiosk, a sign, or, for example, a game goal. When object recognition engine 260 recognizes the game goal as captured or sensed in the digital representation, platform 200 can instruct the AR-capable device to present an AR object 242 as a reward. Real-world elements can thus be associated with corresponding contexts 232 through derived environment attributes and AR object attributes. Once one or more contexts 232 are determined to pertain to the recognized elements, object recognition engine 260 can search through AR objects 242 to determine which of them pertain to the currently available AR reality, where "AR reality" is intended to convey the collection of AR objects, or augmented realities, currently available.

Several points deserve attention. AR repository 240 can contain a very large number of AR objects 242 accessible through different contexts 232. The available AR objects 242 represent only the subset of all AR objects 242 in the AR reality that can be reached through authorized, properly authenticated use of contexts 232. Further, the relevant AR objects 242 represent a still smaller fraction of the available AR objects 242: the objects pertaining to the current contexts form the set of relevant AR objects 242. It should also be appreciated that member objects of the relevant AR objects 242 may or may not be presented individually; rather, the set's member objects are presented according to a derived interference among elements of the scene.

One should keep in mind that AR objects 242 need not be stored in memory 230; memory 230 can instead store AR object attributes 244. Platform 200 can determine which AR objects 242, if any, are relevant to the context 232 pertaining to the AR-capable device's current environment, and can then use AR object addressing agent (AOAA) 220 to determine the addresses of those AR objects, which may be resident on remote nodes.

AOAA 220 can derive an AR object address through numerous methods. In some embodiments, assuming proper authorization or authentication, AR object attributes 244 themselves include the address from which the corresponding AR object 242 can be retrieved, which is a practical approach when platform 200 can devote more memory to storing the attributes. Although AOAA 220 is illustrated as part of platform 200, the functionality of AOAA 220 or of object recognition engine 260 can also reside in other devices, in networking nodes, in AR-capable devices (e.g., mobile phones, vehicles, etc.), or in other components of the networking fabric.

Another approach by which AOAA 220 can obtain an AR object address involves converting at least some of the attributes (e.g., environment attributes, derived attributes, target object attributes, context attributes, etc.) directly into an address within an address space. For example, attributes derived from the environment data can be quantified and assembled into an attribute vector, possibly according to the standardized namespaces discussed previously. The vector is then run through a deterministic function, such as a hash function, to generate a hash value, where the hash space represents the address space. Networking nodes, AR objects 242, contexts 232, and other items in the ecosystem can each be assigned an address within the hash space, and AR objects 242 can be stored on the nodes whose addresses are closest to the objects' own addresses. When a networking node generates an address for a desired AR object, it forwards a request for the object to a neighboring node whose address is closer to the AR object's address. Such addressing techniques are similar to those used in peer-to-peer file sharing protocols, and applying these distributed addressing techniques to AR objects within a networking infrastructure is considered part of the inventive subject matter.
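
The following sketch illustrates the general idea of hashing a normalized attribute vector into a shared address space and routing toward the node nearest the resulting address, in the spirit of peer-to-peer techniques; the hash choice, address width, and distance metric are illustrative assumptions only:

```python
# Hypothetical sketch of attribute-vector hashing into a shared address
# space with nearest-node routing; all details are illustrative.
import hashlib

def attribute_address(attributes: dict[str, object], bits: int = 32) -> int:
    """Deterministically map a normalized attribute vector to an address."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") % (1 << bits)

def next_hop(node_addresses: list[int], object_address: int) -> int:
    """Route toward the node whose address is closest to the object's."""
    return min(node_addresses, key=lambda n: abs(n - object_address))

addr = attribute_address({"context": "shopping", "store": "grocery-42"})
print(hex(addr))
print(hex(next_hop([0x1A2B3C4D, 0x90000000, 0x7FFFFFFF], addr)))
```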

Platform 200 can also distinguish among different augmented realities by applying different functions when creating an address. A first function (a hash or another type of function) can derive an address portion representing an augmented reality itself, while a second function generates a portion representing the AR object address. In some embodiments the first portion forms a prefix (e.g., a domain name, a DOI prefix, etc.) and the second portion forms a suffix (e.g., a URL path, a DOI suffix, etc.). A context can likewise be represented within the addressing scheme as a prefix, suffix, or other extension. For example, an AR object address could take a form similar to "www.<augmented reality>.com/<context>/<AR object>", where each set of angle brackets ("< >") indicates one portion of a multi-part AR object address.

Yet another addressing approach includes converting environment attributes, or other attributes, into a network address where the AR object 242 can be found, for example by using the attributes to index into a lookup table, shared among the nodes, that lists AR objects 242 and their corresponding addresses. The network address can be a domain name, a URL, an address within the hash space, or another form of address.

Regardless of the addressing scheme used, AR object addresses generated by AOAA 220 point to the location of a corresponding AR object 242 within one of the AR repositories 240, even when the repositories or objects are not local to hosting platform 200. As discussed above, the AR object address can be obtained directly or indirectly from real-world objects recognized as target objects, from contexts 232, or from other elements carrying attribute information. Examples of AR object addresses include domain names, URLs, IP addresses, GUIDs, hash values, or other types of addresses. In some embodiments each AR object 242 is assigned its own IP address (e.g., IPv4, IPv6, etc.) and is directly addressable via one or more protocols such as DNS, HTTP, FTP, SSL, SSH, etc. In an IPv6 environment, for example, each AR object 242 could have its own IP address, its own domain name, and corresponding URLs, and could be located using known techniques including DNS, name servers, or other address resolution methods.

Although hosting platform 200 is presented as a networking node or server, the functionality of hosting platform 200, as represented by its components, can also be integrated into AR-capable devices (e.g., mobile devices, tablets, cell phones, etc.). For example, a cell phone can be configured with one or more modules (software instructions, hardware, etc.) offering the capabilities of AOAA 220, object recognition engine 260, device interface 215, or other components, in which case device interface 215 can take the form of a set of APIs through which the phone exchanges data with the rest of hosting platform 200. In other embodiments, AR-capable devices share the roles and responsibilities of hosting platform 200 with other devices: a cell phone might use a local object recognition engine 260 to recognize common objects (faces, currency, bar codes, etc.) while transmitting the digital representation of the scene to a more powerful remote object recognition engine 260 capable of recognizing specific objects, such as a particular person's face, or of determining a specific context.

Element Interference

FIG. 3 illustrates how interference is derived among elements of a scene. An AR hosting platform obtains digital representation 334, preferably via a device interface, where digital representation 334 comprises data representing at least a portion of a real-world scene. The elements of the scene can include real-world objects, other real-world elements, or even AR objects. As described previously, the scene-related data forming digital representation 334 can be captured by one or more sensors or remote devices, and digital representation 334 can include raw sensor data, pre-processed or post-processed information, and other types of data, depending on the processing capabilities of the originating devices.

The hosting platform analyzes digital representation 334 to recognize one or more elements 390 of the scene as target objects; elements 390A through 390B represent such recognized elements. Recognizing elements 390 can include distinguishing among target objects, identifying an element as a particular object (e.g., as a car generally or as a specific vehicle), interpreting an element (e.g., optical character recognition, logos, bar codes, symbols, etc.), or otherwise determining, at least to within a desired confidence level, that an element 390 corresponds to a target object.

It should be appreciated that a target object need not correspond to a specific element in the scene. For example, element 390A might be a particular person's face while the target object is a generic object representing a face. In other cases, element 390A can be recognized as more than one target object; continuing the example, element 390A could be recognized as a hierarchy of multiple target objects, such as a human object, a male object, an eye object, an iris object, and an iris identification object.
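
Such a hierarchy might be represented as in the hypothetical sketch below, where a single recognized element carries several target-object entries, each with its own confidence level (the types, values, and threshold are invented for the example):

```python
# Illustrative only: one recognized element mapped to a hierarchy of target
# objects, each with a hypothetical confidence level.
recognized_element_390A = {
    "element_id": "390A",
    "target_objects": [
        {"type": "human", "confidence": 0.99},
        {"type": "male", "confidence": 0.97},
        {"type": "eye", "confidence": 0.95},
        {"type": "iris", "confidence": 0.90},
        {"type": "iris_identification", "confidence": 0.82, "identity": "user-17"},
    ],
}

def recognized_types(element: dict, threshold: float = 0.9) -> list[str]:
    """Keep only target objects recognized above the confidence threshold."""
    return [t["type"] for t in element["target_objects"]
            if t["confidence"] >= threshold]

print(recognized_types(recognized_element_390A))
# ['human', 'male', 'eye', 'iris']
```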

Preferably, the hosting platform recognizes at least one element in the scene, for example element 390A, as a target object. Other elements 390 that are not real-world elements can also be recognized as target objects; element 390B, for example, could be a virtual object whose image was captured within digital representation 334, or an object beyond human perception such as radio waves, network traffic, network congestion, or other imperceptible objects.

The hosting platform analyzes one or more of the recognized elements 390 to determine a context 332 that pertains to the identified target objects. Multiple factors can be involved in determining which of contexts 332A or 332B is most relevant to the scene. In the example shown, context 332A includes an attribute signature indicating when context 332A likely applies to a scene; the signature can be compared against the attributes of recognized element 390A, and if those attributes sufficiently satisfy the signature, context 332A is considered to pertain at least to the identified target objects and to the scene represented by digital representation 334. Although context 332A is shown in FIG. 3 as the only context pertaining to recognized element 390A, it should be appreciated that context 332A could be just one of many contexts 332 that pertain to the recognized target objects or to the scene, based on all digitally available information.

Contexts 332 can be defined a priori or generated automatically. For example, an entity (e.g., an AR object publisher) can define an appropriate context through a context definition interface, entering context attributes, signatures, or other information to create a context object. Alternatively, the hosting platform can generate a context itself: when elements 390A are recognized within digital representation 334, an individual can instruct the platform to convert those recognized elements into a context signature.

Context 332A can further include an attribute vector, labeled PV, representing the attributes or properties of scene elements 390 that are relevant to the context for the purpose of determining interference among elements 390, whether those elements are recognized or not. The distinction is subtle: the attribute vector should not be confused with the context attribute signature. The attribute vector is a data structure listing the attributes relevant to the context, and it can operate as element selection criteria when deriving interference. It should be kept in mind that a recognized element 390A might satisfy a context's signature and yet not align with the context's attribute vector. For example, a person recognized as a target object might be identified as an AR player and thereby help establish that a gaming context applies to them or to the scene, yet the player themselves might not contribute to the interference among the other gaming-related objects in the scene.

Context attribute vectors are preferably aligned with one or more of the normalized namespaces. Each member of the vector can include an attribute name and, possibly, one or more values; attributes can thus be considered multi-valued objects. Using a common namespace is advantageous because it eases mapping between elements and contexts.

Context 332A can further include an interference function, labeled F1, describing how elements 390 of the scene interact with one another, with respect to one or more element attributes, to increase or decrease the presence of AR objects. The interference function is preferably a function of the attribute vector and of available AR objects 342, where available AR objects 342 are those AR objects capable of participating in the scene and associated with context 332. As discussed above, available AR objects 342 can be identified by comparing the object attributes of AR objects 342 with the context attributes of contexts 332.

The element attributes, AR object attributes, and other attributes contributing to interference can span a broad spectrum. Extending the metaphor of electromagnetic wave interference, which depends on many factors (e.g., frequency, phase, amplitude, etc.) at a particular point in space, more preferred embodiments derive interference among elements 390 based at least in part on their locations. Digital representation 334 can include location information (e.g., triangulation results, GPS coordinates, positions relative to other elements 390 in the scene, etc.), and the interference function can depend on that location information, which can reflect the actual physical locations of elements in the real world. Interference among elements 390 can also depend on time, in which case digital representation 334 includes time information related to elements 390 or to the scene in general.

The interference function can be used to generate derived interference 350, whose auto-generated interference criteria determine which of AR objects 342 belong to context relevant AR objects 346 and how those context relevant AR objects should be presented within an augmented reality experience that leverages interference among elements 390. The interference criteria of derived interference 350 are criteria derived from the properties of elements 390 (i.e., from their attribute values), and the context relevant AR objects 346 are those AR objects satisfying the criteria. By analogy with electromagnetic wave interference, where a resultant amplitude is computed by summing amplitudes while taking other properties into account (e.g., phase, time, frequency, location, etc.), the interference function can take the form of a summation over element properties. In the simplistic example presented, the interference criteria are derived attribute-by-attribute by summing the values of corresponding attributes across scene elements 390; the sum over all elements yields a criterion for each attribute, and an AR object 342 becomes a member of context relevant AR objects 346 when its attribute values satisfy those criteria.

Although the example of derived interference 350 is based on a simple sum, the interference function can be arbitrarily complex provided it yields, for each object, a satisfaction level with respect to the interference criteria. The satisfaction level is a measure of how strongly each relevant AR object 346 should be present within the augmented reality. A satisfaction level can be calculated through any desired algorithm that properly reflects the utility of context 332A, for example based on how many optional conditions or requirements of the interference criteria are met, on a normalized measure of the extent to which object attributes exceed or fall below criteria thresholds, or on other algorithms. The satisfaction level can then be used when instructing a remote AR-capable device how to interact with relevant AR objects 346.
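
A minimal sketch of the simple-sum case and a normalized satisfaction level follows; the "fraction of criteria met" rule is one possible algorithm among many, chosen here only for illustration:

```python
# Hypothetical sketch of attribute-wise interference criteria and a
# normalized satisfaction level; the formulas are illustrative assumptions.
def interference_criteria(elements: list[dict]) -> dict[str, float]:
    """Sum each attribute over all scene elements (the 'simple sum' case)."""
    criteria: dict[str, float] = {}
    for element in elements:
        for attr, value in element.items():
            criteria[attr] = criteria.get(attr, 0.0) + float(value)
    return criteria

def satisfaction_level(ar_object: dict[str, float],
                       criteria: dict[str, float]) -> float:
    """Normalized 0..1 measure of how well the object meets the criteria."""
    if not criteria:
        return 0.0
    met = sum(1.0 for attr, threshold in criteria.items()
              if ar_object.get(attr, 0.0) >= threshold)
    return met / len(criteria)

elements = [{"team_a": 1.0}, {"team_a": 1.0}, {"team_b": 1.0}]
criteria = interference_criteria(elements)   # {'team_a': 2.0, 'team_b': 1.0}
print(satisfaction_level({"team_a": 3.0, "team_b": 0.0}, criteria))  # 0.5
```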

The discussion above describes elements of a scene interfering with one another to affect interactions with relevant AR objects 346. The interference can be calculated over all elements 390 in the scene, over a portion of the identified elements, or even over a single element. The interfering elements can also include the relevant AR objects 346 themselves: just as two interfering electromagnetic waves combine to produce an overall effect (e.g., interference patterns, resulting amplitudes, etc.), relevant AR objects 346 can contribute to the interference and thereby affect their own presence.

Digital representation 334 reflects the changing circumstances of a scene: it can change with time, even in real time, which means that the pertinent contexts 332 can change as well. Changes in contexts can alter derived interference 350 (i.e., time-dependent interference), which in turn can change the set of relevant AR objects 346, and such changes can be propagated back to the remote AR-capable devices. In some embodiments, relevant AR objects 346 are given a degree of temporal persistence to allow smooth transitions between context states; for example, a relevant AR object 346 can remain present even after its context 332 no longer pertains, or a context 332 may be required to remain pertinent for a minimum time before AR objects 346 are presented. Such an approach is considered advantageous for usability.
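
One way such temporal persistence could be handled is a simple hold-and-linger rule, sketched below with hypothetical durations (the disclosure does not prescribe specific times or this particular mechanism):

```python
# Hypothetical sketch of temporal persistence for context changes; the
# minimum-active and linger durations are illustrative assumptions.
import time

class PersistentPresence:
    """Present an AR object only after its context has pertained for
    min_active_s, and keep presenting it for linger_s after it stops."""

    def __init__(self, min_active_s: float = 2.0, linger_s: float = 5.0):
        self.min_active_s = min_active_s
        self.linger_s = linger_s
        self._since = None            # when the context started pertaining
        self._presented_until = 0.0   # present while now < this timestamp

    def update(self, pertains_now: bool, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        if pertains_now:
            self._since = self._since if self._since is not None else now
            if now - self._since >= self.min_active_s:
                # Context has held long enough: present and extend the linger.
                self._presented_until = now + self.linger_s
        else:
            self._since = None
        return now < self._presented_until

pp = PersistentPresence()
print(pp.update(True, now=0.0))   # False: context not yet held long enough
print(pp.update(True, now=2.0))   # True: held for min_active_s
print(pp.update(False, now=4.0))  # True: still lingering after context ends
print(pp.update(False, now=9.0))  # False: linger period has elapsed
```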

While the discussion above concerns generating derived interference 350 based on contexts 332, it should also be noted that derived interference 350 can be affected by changes among contexts 332. Contexts 332 can change along with the scene, even shifting focus from a first context to a second context. Thus, one aspect of the inventive subject matter includes configuring an AR-capable device to allow interaction with context relevant AR objects 346 as a function of changes in contexts 332, preferably as determined from derived interference 350. For example, an individual might participate in an augmented reality experience associated with a gaming context; if the individual chooses to purchase a product, the experience can also take on a shopping context, and the shift in focus from the gaming context toward the shopping context can adjust derived interference 350 to present additional AR content. Alternatively, a shift in focus from a gaming context to a traveling context might leave the individual's AR experience unaffected. A shift in focus can also mean that a context 332 is retained rather than discarded in favor of a new context 332, where the degree to which a context 332 relates to the scene can be used as a measure of context focus.

Interference-Based Presentation

FIG. 4 illustrates interference-based presentation of AR objects. Mobile devices 410A and 410B each capture a digital representation of a scene comprising real-world elements 490. An AR hosting platform recognizes elements 490 and determines which AR objects are relevant to the context associated with the scene; relevant AR objects 446A and 446B are members of that set of relevant AR objects.

The AR hosting platform derives interference from elements 490 to determine whether the elements constructively or destructively interfere with respect to a particular context. For mobile device 410A, relevant AR object 446A is presented with an enhanced presence due to constructive interference among elements 490, indicating that the object likely has a high satisfaction level with respect to the interference criteria. Mobile device 410B captures a digital representation of the same scene with the same elements 490, yet within its context relevant AR object 446B is presented with a suppressed presence due to destructive interference among elements 490; AR object 446B has been weakly or negatively reinforced by elements 490 and likely has a low, or even negative, satisfaction level with respect to the interference criteria.

One should also note that relevant AR object 446B could in fact be the same object as relevant AR object 446A; based on their relative satisfaction levels, however, the two mobile devices provide different augmented reality experiences. One possible reason for the difference is user identification, where information about each user forms part of the digital representation of the scene and thereby alters the pertinent contexts.

Enhanced or suppressed presence can take many forms depending on the nature of relevant AR objects 446A and 446B, their contexts, and other factors relating to the scene. In a simple case, presence merely means whether the relevant AR objects are presented at all; more generally, presence encompasses a broad spectrum of experiences. For example, the visual image of relevant AR object 446A can be overlaid opaquely on an image of the scene, covering elements 490, while the visual image of relevant AR object 446B is rendered partially transparent to indicate its suppressed presence. Similarly, audio content of relevant AR objects 446A or 446B can be played at volume levels determined from each object's satisfaction level with respect to the interference criteria. To the extent the presentation capabilities of the AR-capable devices permit, the presence of relevant AR objects 446A and 446B can be enhanced or suppressed for any human sense modality.
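
As a hedged illustration of how a satisfaction level might drive presentation, the sketch below maps a 0..1 satisfaction level onto opacity, volume, and feature availability; the thresholds and the linear mapping are assumptions for the example:

```python
# Hypothetical sketch mapping a satisfaction level (0..1) to presentation
# parameters; the specific mappings are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Presentation:
    visible: bool
    opacity: float      # 0.0 = fully transparent, 1.0 = fully opaque
    volume: float       # 0.0 = muted, 1.0 = full volume
    interactive: bool   # whether features of the AR object are enabled

def render_presence(satisfaction: float,
                    present_threshold: float = 0.2,
                    interact_threshold: float = 0.8) -> Presentation:
    s = max(0.0, min(1.0, satisfaction))
    return Presentation(
        visible=s >= present_threshold,
        opacity=s,
        volume=s,
        interactive=s >= interact_threshold,
    )

print(render_presence(0.9))   # strong constructive interference
print(render_presence(0.1))   # destructive interference: suppressed
```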

“Presence can also extend beyond the human sense modalities. Functionality associated with AR objects 446A and 446B can be affected by enhanced or suppressed presence, and the interactions between mobile devices 410A or 410B and relevant AR objects 446A and 446B can be controlled based on satisfaction levels. Certain features of relevant AR object 446A might be turned on or made available, while features of relevant AR object 446B could be disabled or turned off.
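A similarly hedged sketch of feature gating: an AR object's interactive features are made available only when its satisfaction level clears a threshold. The feature names and the threshold value are hypothetical.

```python
# Hypothetical feature gating based on satisfaction level; thresholds are illustrative.
def allowed_features(all_features, satisfaction, threshold=0.5):
    """Return the features turned on for this AR object, or none if suppressed."""
    return set(all_features) if satisfaction >= threshold else set()

features = {"purchase", "annotate", "share"}
print(allowed_features(features, satisfaction=0.8))  # features enabled (object 446A)
print(allowed_features(features, satisfaction=0.2))  # features disabled (object 446B)
```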

“Use Case: Gaming and Promotional Contexts”

“FIG. 5 illustrates an example use case of the inventive subject matter in relation to gaming and advertising. In this example, multiple people use their mobile phones as mobile devices 510 to compete for promotional offers on products in a store. Each mobile device 510 uses its own sensors 530 to capture data associated with scene 595, which includes the products in the store, and sends its own digital representation 534 to hosting platform 500. Digital representations 534 may include images of the products, information about the mobile device (e.g., device identification, location, orientation, or position), user identification, and other information. Object recognition engine 560 uses digital representation 534 to identify the products in the store and recognize them as known target objects, represented by elements 590. Object recognition engine 560 then uses the contexts 532 that relate to scene 595, which can include a shopping context or a store-specific context, to derive interference 550. Using interference 550, object recognition engine 560 identifies relevant AR objects 546 among the AR objects available in AR object repository 542, and platform 500 allows mobile device 510A to interact with AR objects 546A.
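The sketch below traces the flow just described (digital representation in, relevant AR objects out) in simplified form. All class, function, and field names are placeholders for illustration; the toy recognition and interference rules are not taken from the patent.

```python
# Rough end-to-end sketch of the hosting-platform flow described above.
# Every name here is a placeholder, not an API from the patent or any real library.
def hosting_platform_flow(digital_representation, ar_repository, contexts):
    elements = recognize_target_objects(digital_representation)   # elements 590
    interference = derive_interference(elements, contexts)        # interference 550
    relevant = [obj for obj in ar_repository
                if satisfaction(obj, interference) > 0]           # relevant AR objects 546
    return relevant

def recognize_target_objects(digital_representation):
    # Stand-in for the object recognition engine: treat labeled items as known elements.
    return list(digital_representation.get("image_labels", []))

def derive_interference(elements, contexts):
    # Toy rule: each recognized element contributes weight 1 to every active context.
    return {ctx: len(elements) for ctx in contexts}

def satisfaction(ar_object, interference):
    # Toy satisfaction: how strongly the object's context is supported by the interference.
    return interference.get(ar_object["context"], 0)

repository = [{"name": "coupon_offer", "context": "shopping"},
              {"name": "quest_clue", "context": "gaming"}]
rep = {"image_labels": ["cereal_box", "soda_can"], "device_id": "510A", "location": "store_42"}
print(hosting_platform_flow(rep, repository, contexts={"shopping", "store_42"}))
```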

FIG. 5 can also be viewed as an example of an augmented world where people work together to shape the environment around them. When a group of friends, acquaintances, or otherwise related people converges on a store, their combined presence can affect the availability of a number of promotions (e.g., sales, coupons, prizes, incentives, discounts). Members of one team can block the availability of promotions to the opposing team while increasing the availability of promotions for their own team. In the example shown, the owner of mobile device 510A is a member of a winning team and has won a coupon for a product. The team still lacks sufficient members to qualify for a second promotion, so its members are asked to invite more friends. The AR object message bubble for the second product has a suppressed presence (i.e., the promotion is not available) because of interference from opposing team members.

“For the simplified example of FIG. 5, interference 550 can be described by the relative number of team members: the number of Team A members minus the number of Team B members. Team A wins when the value is high; Team B wins when it is low. Platform 500 analyzes the AR object attributes in view of the team counts and then instructs mobile device 510A to allow interaction with AR objects 546A based on interference 550, as determined from each object’s satisfaction level. The presence of AR objects 546A may therefore change as team members enter or leave the scene.
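Because the interference function here reduces to simple arithmetic, it can be sketched directly. Only the Team A minus Team B formula comes from the example above; the margin thresholds below are illustrative assumptions.

```python
# Minimal sketch of the team-count interference described above.
def team_interference(team_a_members, team_b_members):
    """Interference 550: Team A members minus Team B members."""
    return team_a_members - team_b_members

def promotion_available(interference, required_margin):
    """A promotion's satisfaction level is met once Team A's margin reaches the requirement."""
    return interference >= required_margin

interference = team_interference(team_a_members=5, team_b_members=3)  # +2 favors Team A
print(promotion_available(interference, required_margin=2))  # True: coupon unlocked
print(promotion_available(interference, required_margin=4))  # False: invite more friends
```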

“The gaming and promotion use case is only one example of context derivation. A similar example is a medical context in which medical equipment is present (e.g., X-ray machines, MRI machines, dentist chairs, etc.) and AR medical objects should be available only when a doctor and patient are both present. A patient may enter a doctor’s office where the doctor is using a tablet or pad-based computing device (e.g., an iPad™, Xoom™, PlayBook™, etc.). When the doctor and patient are together, they constructively interfere, allowing presentation of the patient’s AR-based medical records on the pad. If others are present with the patient and doctor, they destructively interfere, suppressing the AR-based medical records.
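A minimal sketch of this medical-context rule, assuming a simple presence test: the records are visible only when exactly the doctor and the patient are detected in the scene. The names and data shapes are hypothetical.

```python
# Hedged sketch of the medical-context rule above: doctor plus patient alone constructively
# interfere (records shown); any additional person destructively interferes (records hidden).
def records_visible(people_in_scene, doctor, patient):
    present = set(people_in_scene)
    return present == {doctor, patient}  # any extra person suppresses the records

print(records_visible(["dr_lee", "pat_smith"], doctor="dr_lee", patient="pat_smith"))             # True
print(records_visible(["dr_lee", "pat_smith", "visitor"], doctor="dr_lee", patient="pat_smith"))  # False
```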

“Use Case: Object-Based Message Boards”

“FIG. 6 illustrates another use case in which hosting platform 600 allows annotation of specific objects in scene 695. Multiple people can use mobile devices 610 to capture digital representations of scene 695 via sensors 630. As indicated by the geometric shapes, there are multiple objects in the scene. Platform 600 attempts to recognize the objects and the context of scene 695, then identifies the AR objects 646 that are relevant to the recognized objects. In this example, AR object 646A is a message board carrying messages, where the board and its messages are bound to a particular real-world object. It is worth noting that messages can also be bound to objects other than the particular object shown.

“One should appreciate that the messages displayed on the message board are made available based upon the context of scene 695 and the interference among elements of the scene. For example, the owner of mobile device 610A attached a personal message to the real-world object; because the device owner and the object are present together, the personal message can be presented. The message board also lists messages directed at the general public. AR object 646A can be accessed via message exchanges from mobile devices and other AR-capable devices, even though AR object 646A represents a message bound to a real-world item. Thus AR objects 646A can be presented as individual messages or as a message board bound to a particular element recognized as a target object. As previously discussed, AR objects 646 may include products available for purchase, promotions (e.g., coupons, prizes, incentives, sales, discounts, etc.), content (e.g., images, video, audio, etc.), a reward for an achievement, a token or clue for a game or puzzle, unlocked augmented reality experiences, application data, reviews, or any other type of content.
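The sketch below shows one plausible filtering rule for such an object-bound message board: a message is presented only when its bound object is recognized in the scene and, for personal messages, when the author's device is also present. The message fields and audience labels are assumptions for the example.

```python
# Illustrative sketch of an object-bound message board; data shapes are assumptions.
def visible_messages(messages, elements_in_scene, devices_in_scene):
    shown = []
    for msg in messages:
        if msg["bound_to"] not in elements_in_scene:
            continue                              # bound object not recognized in the scene
        if msg["audience"] == "public":
            shown.append(msg)
        elif msg["audience"] == "personal" and msg["author_device"] in devices_in_scene:
            shown.append(msg)                     # owner and object present together
    return shown

messages = [
    {"text": "Back at 5pm", "bound_to": "statue", "audience": "personal", "author_device": "610A"},
    {"text": "Great landmark!", "bound_to": "statue", "audience": "public", "author_device": "610B"},
]
print(visible_messages(messages, elements_in_scene={"statue"}, devices_in_scene={"610A"}))
```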

“Interactions”

Once the AR hosting platform has identified the relevant AR objects for a particular context, the platform can instruct the mobile device or other AR-capable device to allow interactions with the AR objects. In some embodiments the AR objects are hyper-linked, so that a user can click or select an AR object and the device will access additional information over the network. The AR objects may also include instructions that can be copied into the tangible memory of the device; the instructions tell the mobile device how to interact with the AR object and with remote computing devices. Contemplated interactions include many possible interactions between and among AR-capable devices and AR objects.
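As a rough illustration of a hyper-linked AR object, the sketch below fetches additional information when the object is selected and copies the object's bundled instructions into device memory. The URL, field names, and use of Python's urllib are illustrative assumptions, not an API described in the patent.

```python
# Hypothetical hyper-linked AR object: selecting it follows its hyperlink for more
# information, and its bundled instructions are copied to the device's memory.
import json
from urllib import request

def on_select(ar_object):
    """Called when the user clicks or selects a presented AR object."""
    with request.urlopen(ar_object["href"]) as resp:         # follow the object's hyperlink
        details = json.loads(resp.read().decode("utf-8"))
    device_memory = dict(ar_object.get("instructions", {}))  # copy instructions to device memory
    return details, device_memory

ar_object = {
    "name": "store_coupon",
    "href": "https://example.com/ar/objects/store_coupon",   # placeholder endpoint
    "instructions": {"on_redeem": "POST /redeem", "expires": "2011-12-31"},
}
# details, memory = on_select(ar_object)  # would perform a live network request if run
```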

“One interaction of particular interest is allowing an AR-capable device to participate in a transaction with a commerce engine. A commercial transaction could include selecting, purchasing, or otherwise monetizing interactions with member objects of the set of relevant AR objects, and can involve one or more online accounts. The AR objects of interest may contain instructions or other information that allow the device and remote servers to conduct the transaction with each other, which eliminates the need for the device to be familiar with financial protocols in order to complete it. Other examples of commercial transactions include interacting with frequent flyer miles programs, exchanging virtual currencies in an online world, transferring funds from one account to another, paying tolls, interacting with a point-of-sale device, paying utilities or taxes, and other interactions that have monetary value.

“Managing AR objects is another contemplated type of interaction. In the more common embodiments, AR content consumers would be people with cell phones, and such consumers have many different needs with respect to managing AR objects. For example, a cell phone can operate as an AR object hub, allowing its user to interact with other nearby phones and create shared augmented reality experiences. An AR-capable phone or other device acting as an AR object hub can also store or protect permanent AR objects that are bound to the person or to their cell phone. Management interactions include monitoring AR objects, granting authorization, and establishing alerts or notifications.

Management interactions may also apply to publishers or creators of augmented reality content, in which case AR-capable devices can serve as content-creation utilities. Publishers can use AR-capable devices, including their own mobile phones, to interact with and create AR objects. Such interactions include creating AR objects from elements in a scene, populating scenes with AR objects, managing revisions or versions of AR objects, debugging AR object creations, publishing AR objects for consumption, binding AR objects to real-world elements or to other AR objects, and establishing interference functions for contexts or scenes.

“It should be apparent to those skilled in the art that many more modifications beyond those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except within the scope of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms “comprises” and “comprising” should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C . . . and N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc.

Click here to view the patent on Google Patents.

How to Search for Patents

A patent search is the first step to getting your patent. You can do a Google patent search or a USPTO search. “Patent pending” is the term for a product that is covered by a pending patent application; you can search Public PAIR to find the patent application. After the patent office approves your application, you will be able to do a patent number lookup to locate the issued patent, and your product is then patented. You can also use the USPTO search engine, as described below, and you can get help from a patent lawyer. Patents in the United States are granted by the United States Patent and Trademark Office (USPTO), which also reviews trademark applications.

Are you interested in similar patents? These are the steps to follow:

1. Brainstorm terms to describe your invention, based on its purpose, composition, or use.

Write down a brief but precise description of the invention. Don’t use generic terms such as “device,” “process,” or “system.” Consider synonyms for the terms you chose initially, and take note of important technical terms as well as keywords.

Use the questions below to help you identify keywords or concepts.

  • What is the purpose of the invention? Is it a utilitarian device or an ornamental design?
  • Is the invention a way to create something or perform a function, or is it a product?
  • What is the invention made of? What is the physical composition of the invention?
  • What is the invention used for?
  • What technical terms and keywords describe the invention’s nature? A technical dictionary can help you locate the right terms.

2. Use these terms to look for relevant Cooperative Patent Classifications with the Classification Search Tool. If you are unable to find the right classification for your invention, scan through the classification’s class schemas (class schedules) and try again. If you don’t get any results from the Classification Text Search, consider substituting synonyms for the words you used to describe your invention.

3. Check the CPC Classification Definition to confirm the relevance of the CPC classification you found. If the selected classification title has a blue box with a “D” to its left, the hyperlink will take you to a CPC classification definition. CPC classification definitions will help you determine the applicable classification’s scope so that you can choose the most relevant one. These definitions may also include search tips or other suggestions that could be helpful for further research.

4. Use the Patents Full-Text Database and the Image Database to retrieve patent documents that include the CPC classification. By focusing on the abstracts and representative drawings, you can narrow your list down to the most relevant patent publications.

5. Review this selection of patent publications in depth for any similarities to your invention, paying close attention to the claims and the specification. Refer to the references cited by the applicant and the patent examiner to find additional relevant patents.

6. Retrieve published patent applications that match the CPC classification you chose in Step 3. Use the same search strategy as in Step 4 to narrow the results to the most relevant applications by reviewing the abstracts and representative drawings on each page. Then examine the remaining published patent applications carefully, paying special attention to the claims and the other drawings.

7. Broaden your search by keyword searching in the AppFT and PatFT databases for additional US patent publications, by classification searching of non-U.S. patents as described below, and by using web search engines to find non-patent literature disclosures about inventions. Here are some examples:

  • Add keywords to your search. Keyword searches may turn up documents that were poorly categorized or missed during the classification search in Step 2. US patent examiners, for example, often supplement their classification searches with keyword searches. Consider using technical engineering terminology rather than everyday words.
  • Search for foreign patents using the CPC classification. Re-run the search using international patent office search engines such as Espacenet, the European Patent Office’s worldwide patent publication database of over 130 million patent publications, as well as other national databases.
  • Search non-patent literature. Inventions can be disclosed in many non-patent publications, so it is recommended that you search journals, books, websites, technical catalogs, conference proceedings, and other print and electronic publications.

To review your search, you can hire a registered patent attorney to assist. A preliminary search will help you better prepare to talk about your invention and related prior art with a patent attorney, so the attorney does not have to spend too much time, or your money, on patenting basics.

Download patent guide file – Click here