Software – David M. Hill, Andrew William Jean, Jeffrey J. Evertt, Alan M. Jones, Richard C. Roesler, Charles W. Carlson, Emiko V. Charbonneau, James Dack, Microsoft Technology Licensing LLC

Abstract for “Mixed Environment Display of Attached Control Elements”

“The technologies described herein allow for a mixed environment display with attached control elements. These techniques allow users of a first computing device to interact with remote computing devices that can control objects, such as light bulbs, appliances, and other appropriate objects. The configurations described herein allow the first computing device, through capture and analysis of input data, to perform one or more actions. Rendered graphical elements, displayed along with a real-world view of the object, can be used to control the object.

Background for “Mixed Environment Display of Attached Control Elements”

“Internet-connected devices are becoming more commonplace in everyday life. These devices allow users to communicate with and control almost anything remotely. Internet-connected devices can control lights, thermostats, automobiles, and more, and there are many creative uses of Internet-connected devices across many industries.

While current technologies offer many benefits, product designers of Internet-connected devices still face many challenges. To make such devices more affordable and ubiquitous, designers often omit expensive user interface components such as touch screens or display screens, favoring energy-efficient, low-cost options instead. Common designs may use Web server software to allow users to interact with the device remotely via a Web browser. These configurations allow users to view and receive status information, and to send commands to the device, via a Web page.

Web-based interfaces are more affordable than touch screens and display screens, but they do not always offer a pleasant user experience. When a user has to interact with many devices, managing the associated tasks can be difficult. A user who manages hundreds of Internet-connected devices must keep records of Web addresses and navigate to each device independently. Moreover, due to the low cost of many common designs, the user interfaces provided by Internet-connected devices often do not provide a good layout for status data and control buttons.

“It is with regard to these and other considerations, that the disclosure herein is presented.”

“The technologies described herein allow for a mixed environment display with attached control elements. Configurations allow users of a first computing device to interact with remote computing devices that can control objects, such as lights, appliances, or any other suitable object. Some configurations allow the first computing device, through the capture and analysis of input data defining the performance of one or more gestures, to cause one or more actions, including the selection of an object and the display of a user interface. For example, a user can look at an object controlled by the second computing device. The displayed user interface can include graphical elements for controlling the object, and in some cases the elements can be displayed along with a real-world view of the object. The graphical elements can be used to show associations between displayed content, such as status data or control buttons, and the real-world view of the object. Graphically attached user interface elements allow users to quickly identify the associations between observed objects and displayed content.

“In certain configurations, the first computing device, e.g., a head-mounted display (HMD), may include a transparent section that allows a user to see objects through the hardware display surface. The hardware display surface can also display rendered elements over and around objects that are viewed through it. Control data can be obtained by the first computing device; this data can define one or more commands for controlling a second computing device. For example, the second computing device could be configured to control an object such as an appliance, a lamp, or a garage door opener.

The first computing device may then display a graphical element containing the one or more commands on the hardware display surface. The graphical element can be displayed along with a real-world view of the object seen through a transparent portion of the display surface, and can also display status data received from the second computing device. The first computing device can interpret input or gestures from the user to generate data defining an input command, and can communicate the data defining the input command to the second computing device. In response to the input command, the second computing device can control and manipulate the object.

The first computing device may also allow users to select remote computing devices by using natural gestures or other inputs. A user can select the second computing device by looking, through the display surface of the first computing device, at the second computing device or at an object controlled by it. Upon selection, the first computing device can, among other actions, initiate communication with the second computing device and/or control an object by communicating with the second computing device.

“It is important to note that the subject matter described above may be implemented as a computer-controlled apparatus, a computer process, a computing system, or an article of manufacture, such as a computer-readable medium. These and other features will be apparent from the following detailed description and the accompanying drawings.

This Summary presents a selection of concepts in simplified form that are described in detail below. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all of the disadvantages noted in this disclosure.

“The technologies described herein allow for a mixed environment display with attached control elements. These techniques allow a user of a first computing device to interact with a remote computing device that controls an object, such as a light, a vehicle, a thermostat, or another suitable object. The configurations described herein allow the first computing device, through capture and analysis of input data, to perform one or more actions. The displayed user interface can include graphical elements for controlling the object, and in some cases the elements can be displayed along with a real-world view of the object. The graphical elements can be used to show associations between displayed content, such as status data or control buttons, and the real-world view of the object. Graphically attached user interface elements allow users to quickly identify the associations between observed objects and displayed content.

“In certain configurations, the first computing device, e.g., a head-mounted display (HMD), may include a transparent section that allows a user to see objects through the hardware display surface. The hardware display surface can also display rendered elements over and around objects that are viewed through it. Control data can be obtained by the first computing device; this data can define one or more commands for controlling a second computing device. The second computing device could be, for instance, a controller for a light, an appliance, or any other item that can be controlled by a computer.

The first computing device may then display a graphical element containing the one or more commands on the hardware display surface. The graphical element can be displayed along with a real-world view of the object seen through a transparent portion of the display surface, and can also display status data received from the second computing device. The first computing device is able to interpret input or gestures from the user and generate data defining an input command. The data defining the input command can be communicated by the first computing device to the second computing device for controlling and/or manipulating the object based, at least in part, on the gesture or input of the user.

“The first computing device may also allow users to select remote computing devices by using gestures or other input methods. A user can select a remote computing device by simply looking at it through the display surface of the first computing device. After selecting a remote computing device, the first computing device can initiate communication and/or perform any of a number of interactions, such as those described herein.

“Using the technologies described herein, a user can interact with many remote computing devices without having to navigate through large volumes of machine addresses or credentials. The disclosed technologies allow a user to selectively interact with a remote device by looking at the device or at an object controlled by it.

An interactive, mixed-environment display allows a user to see graphical elements containing status data and contextually relevant controls while also viewing the real-world object or device being worked with. The technologies described herein can improve a user's interaction and productivity with one or more devices, which may include reducing the number of accidental inputs, decreasing the consumption of processing resources, and minimizing the impact on network resources. Implementations of the technologies described herein may also provide other technical benefits.

It should be noted that the subject matter described above may be implemented as a computer-controlled apparatus, a computer processing system, or an article of manufacture, such as a computer-readable storage medium. These and other features will be apparent from the following Detailed Description and the accompanying drawings. The claimed subject matter is not limited to implementations that solve any or all of the disadvantages noted in this disclosure.

“The subject matter discussed herein is presented in the context of techniques for providing a mixed environment display with attached control elements. However, it can be appreciated that the techniques described herein could also apply to any scenario in which two or more people are communicating with each other.”

“As will become clearer herein, it can be appreciated that implementations of the techniques and technologies described herein may include the use of solid state circuits, digital logic circuits, computer components, and/or software executing on one or more devices. Signals described herein may include digital and analog signals that communicate a change in state, movement, or any data associated with the detection of motion. Gestures may be captured using any type of input device or sensor.

The subject matter described herein is presented in the general context of program modules that execute in conjunction with an operating system and application programs on a computer. However, those skilled in the art will recognize that other implementations may be performed in combination with other types of program modules. Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Those skilled in the art will also appreciate that the subject matter is applicable to other computer system configurations, including multiprocessor systems, hand-held devices, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like.

“In the following detailed description, reference is made to the accompanying drawings, which form a part of this document and in which specific configurations or examples are illustrated. In the drawings, like numerals refer to like elements throughout the various figures. The drawings illustrate aspects of a computing device, computer-readable storage media, and computer-implemented methods for providing a mixed environment display of attached control elements, which will be discussed in greater detail below. As shown in FIGS. 7-9, there are many services and applications that can implement the functionality and techniques described herein.

“FIG. 1 shows aspects of one example environment 100 for providing a mixed environment display with attached control elements. The example environment 100 may include one or more servers 110, one or more networks 150, and one or more user devices 102 associated with a user 101. The environment 100 can also include controller devices 103A through 103E (collectively, "controller devices 103"). For illustrative purposes, the user device 102 and the controller devices 103 can be referred to as "computing devices."

“In the example shown in FIG. 1, the first controller device 103A is configured to interact with and control a garage door opener 104A. The second controller device 103B is configured to interact with and control a range 104B. The third controller device 103C is configured to interact with and control a refrigerator 104C. The fourth controller device 103D is configured to interact with and control a first lamp 104D. The fifth controller device 103E is configured to interact with and control a second lamp 104E. The garage door opener 104A, range 104B, refrigerator 104C, first lamp 104D, and second lamp 104E are collectively referred to herein as "objects 104." The techniques described herein allow users to interact with the objects 104 using gestures and other inputs.

“The example shown in FIG. 1 is provided for illustrative purposes and is not to be construed as limiting. The example environment 100 could include any number of controller devices 103, any number of user devices 102, any number of users 101, any number of servers 110, and/or any number of objects 104. It can also be appreciated that the objects 104 could include other types of items than those shown in FIG. 1.”

“A controller device 103 can operate as a standalone device or in conjunction with other computers, such as the servers 110 and other controller devices 103. The controller device 103 may be a single-board portable computer configured to control other devices or objects. One example of a commercially available controller device is the RASPBERRY PI. Other examples include the PHOTON (WiFi) and the ELECTRON (2G/3G cellular), both produced by PARTICLE.IO. The controller device 103 could also be a personal computer or any other computing device having components for communicating with a network and for interacting with one or more objects.

The user device 102 can operate as a standalone device or in conjunction with other computers, such as the servers 110 and other user devices 102. The user device 102 may be a personal computer or a wearable device, such as an HMD, and can include components for communicating with a network and for interacting with the user 101. The user device 102 can also be configured to accept input commands from the user 101, including gestures captured by an input device such as a touchpad, camera, or keyboard.

“The user device 102, the controller devices 103, the servers 110, and/or any other computer can be interconnected through one or more local and/or wide area networks, such as the network 150. The computing devices can also communicate using any technology, such as BLUETOOTH, WIFI, WIFI DIRECT, NFC, or any other suitable technology, including wired, light-based, or wireless technologies. It should be appreciated that many more types of connections than those described herein may be utilized.

“The servers 110 can be a personal computer, a server farm, a large-scale system, or any other computing system having components for processing, coordinating, and collecting data. In some configurations, the servers 110 may be associated with one or more service providers. A service provider can be any company, person, or other entity that leases or shares computing resources to facilitate the disclosed techniques. The servers 110 may also include components and services, such as the application services shown in FIG. 8, for executing one or more of the techniques described herein.”

“Referring now to FIG. 2, aspects of the controller devices 103, the user device 102, and the servers 110 will be described in greater detail. A server 110 may include a local memory 180, also referred to herein as a computer-readable storage medium, that can store data such as input data 113, which can be generated by a device or a user. The local memory 180 of a server 110 can also store status data 114 related to one or more computing devices or objects. One or more servers 110 may store duplicates of the data stored on the controller devices 103 and/or the user devices 102, which allows a central service to coordinate aspects of the interaction between a number of client computers, such as the controller devices 103 and/or the user devices 102. It can be appreciated that the servers 110 could store other types of data than those shown in FIG. 2.”

“In certain configurations, a server 110 may include a server module 107 that can execute some or all of the techniques described herein. A server 110 can also include an interface 118, which could include a screen for displaying data, and an input device 119, which could include a keyboard, a mouse, a microphone, or any other device capable of generating signals and data defining a user interaction with the server 110. In certain configurations, the server 110 can be configured to allow remote access to the server 110.

“In certain configurations, an individual controller device 103 can contain a local memory 180, also referred to as a computer-readable storage medium, that stores data such as input data 113, which can be generated by a device or a user. The local memory 180 can also store status data 114 for one or more controller devices 103. A controller module 121, stored in the local memory 180 of a controller device 103, is configured to execute any or all of the techniques described herein. The controller module 121 can operate as a standalone module or in conjunction with other modules and computers, such as the servers 110 or the user devices 102.

“The local memory 180 of a controller device 103 can also store control data 115, which can contain commands, code, object code, scripts, or other executable instructions that can be executed by the controller device 103 to accomplish one or more tasks. The control data 115 can also define communication interfaces, such as an application programming interface (API), that allow remote computers to send data defining a command, also referred to herein as an "input command," to the controller device 103.
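For illustration only, the sketch below shows one way such control data 115 might look if a controller device published it as a JSON-style descriptor. The field names (device_id, commands, status_endpoint, and so on) and the network address format are assumptions for this sketch and are not defined by the disclosure.

```python
# Hypothetical control-data descriptor a controller device 103 might publish.
# Field names and values are illustrative assumptions, not from the disclosure.
CONTROL_DATA_115 = {
    "device_id": "controller-103D",
    "object": "Lamp 1: Family room",        # object 104 associated with the controller
    "address": "192.168.1.42:8080",          # network address used to reach the controller
    "commands": [                            # commands the controller's API accepts
        {"name": "power_on", "params": {}},
        {"name": "power_off", "params": {}},
        {"name": "set_brightness", "params": {"level": "int 0-100"}},
    ],
    "status_endpoint": "/status",            # where status data 114 can be requested
}
```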

“Control data 115 may be provided by a controller device 103 or by another resource, such as a service that publishes aspects of the control data 115. The control data 115 can be obtained by a computing device, such as the user device 102, allowing that device to issue commands in accordance with the control data 115. This allows the computing device to control aspects of the controller device 103 and to influence or control associated objects.

“A controller device 103 may also contain a control component 122 that allows for interaction with one or more objects 104. For example, a control component 122 can contain electrical relays that control power to one or more objects 104, actuators that control the movement of one or more objects 104 and/or their components, and any other device that allows control of, or communication with, an object 104. An individual controller device 103 can also include a sensor 120. For example, a sensor 120 can contain a camera, a touch sensor, a proximity sensor, a depth camera, or any other input device that generates status data 114 about an object 104 or a controller device 103.

“In some configurations, the user device 102 includes a local memory 180 that stores input data 113, which can be generated by a device or by the user. The input data 113 may be generated or received by any of the components of the user device 102, including a sensor 120 and an input device 119. The input data 113 can also be generated by an internal or external resource, such as a GPS component, a compass, or other suitable components, as shown in FIG. 9; the sensor 120 can, for example, operate as a location tracking device. One or more input devices, such as a camera or a microphone, can generate input data 113 defining any form of user activity, such as gestures, voice commands, and a gaze direction. The input data 113 may also be received from one or more systems, such as an email system, a search engine, or a social network, including the services and resources 814-824 shown in FIG. 8. Based on input data 113 from multiple sources, one or more actions can be performed, including the selection of, or interaction with, an object.

The memory 180 of the user device 102 can also store status data 112 related to one or more components or devices. In addition, the memory 180 can store a device module 111 configured to manage the techniques described herein and the interactions between the user 101 and the user device 102. As will be described below, the device module 111 can be configured to process and communicate the control data 115, the status data 114, the input data 113, and other data. The device module 111 may also be configured to execute surface reconstruction algorithms and other algorithms for locating objects 104 and capturing images of them. The device module 111 can be a productivity application, a game, an operating system component, or any other type of application. As will be described below, the device module 111 allows a user to interact in a virtual environment or an augmented reality environment.

“A user device 102 may include a hardware display surface 118 (also referred to herein as the "interface 118") configured to display renderings and other views as described herein. The hardware display surface 118 can include one or more components, such as a projector, a flat or curved screen, or any other component that allows a view of an object or displayed data to be presented to the user 101. In some configurations, the hardware display surface 118 can be configured to cover at least one eye of a user; in one illustrative example, the hardware display surface 118 can be configured to cover both eyes of a user 101. The hardware display surface 118 can render one or more images to create a monocular, binocular, or stereoscopic view of one or more objects.

“The hardware display surface 118 may be configured to allow the user 101 to view objects from different environments. In some configurations, the hardware display surface 118 can display a rendering or an image of an object. In other configurations, selected sections of the hardware display surface 118 are transparent, allowing the user 101 to view objects in his or her surrounding environment. For illustrative purposes, the user's perspective when looking at an object through the hardware display surface 118 is referred to herein as a "real-world view" of the object or a "real-world view" of a physical environment.

“For illustrative purposes, the hardware display surface 118 is described herein as having a "field of view" or a "field of vision," which can be defined as the observable area that is visible through the hardware display surface 118 at any given moment. The examples herein also refer to a direction of the field of view, or "gaze direction," which is the direction in which an object is viewed through the hardware display surface 118. Gaze direction data, which defines the direction of the field of view of the hardware display surface 118 and of the user device 102, can be generated by any number of devices, including a compass, a GPS component, a camera, and/or a combination of components for generating position and direction data. Gaze direction data can also be generated by analyzing image data of objects having a known location.

“As will be described in detail below, computer-generated renderings of objects or data can be displayed in, around, and near transparent sections of the hardware display surface 118, allowing a user to see the computer-generated renderings along with a real-world view of objects observed through selected portions of the hardware display surface 118.

“Some configurations described herein provide both a "see-through display" and an "augmented reality display." For illustrative purposes, the "see-through display" may include a transparent lens that can have content displayed on it. The "augmented reality display" may include an opaque display configured to display content over a rendering of an image, which may be from any source, such as a video feed from a camera that captures images of the environment around the user device. Some examples herein describe displaying rendered content over an image, and other examples describe techniques that display rendered content on a "see-through display," enabling the user to see a real-world view of an object along with the content. It can be appreciated that the techniques described herein can apply to a "see-through display," an "augmented reality display," or variations and combinations thereof. For illustrative purposes, devices that enable a "see-through display," an "augmented reality display," or combinations thereof are referred to herein as devices capable of providing a "mixed environment" display.

“A user device 102 can include an input device 119, such as a keyboard, a mouse, a microphone, a camera, a depth camera, a touch sensor, or any other device that allows the generation of data characterizing interactions with the user device 102. As shown in FIG. 2, an input device 119, such as a microphone or a camera, can be mounted on the front of the user device 102.

“The user device 102 can include one or more sensors 120, which could include a sonar sensor, an infrared sensor, a compass, or an accelerometer. A sensor 120 can be used to generate data characterizing user gestures and other interactions. One or more sensors 120, or an input device 119, can be used to generate input data 113 defining a position and other aspects of movement, e.g., speed, direction, and acceleration, of one or more objects, which can include controller devices 103, physical items located near the user device 102, or users 101. The input device 119 and/or the sensor 120 can also be used to generate input data 113 defining the presence and characteristics of an object, such as its color, size, shape, or other physical characteristics.

“Some configurations allow the user device 102 to capture and interpret image data of the field of view described above, and to generate data defining characteristics of an object, which can also include the object's location. Data defining the location of an object can be generated using a variety of components, such as a GPS component, a network resource, or one or more of the sensors described herein.

“In an illustrative example, the input device 119 and/or the sensor 120 can generate input data 113 identifying the object that the user 101 is looking at, also referred to herein as a "gaze target." In certain configurations, a gaze target can be identified using the sensor 120 and/or the input device 119 to generate data identifying the direction in which the user is looking, also referred to as the "gaze direction." For example, a sensor 120 can be mounted on the user device 102 and directed toward the user's field of view. Image data generated by the input device 119 or the sensor 120 can be analyzed to determine whether an object is located in a predetermined area of at least one image, such as the center of the image; if so, the device can identify the object as a gaze target.

Data from various input devices 119 and sensors 120 can be used to identify a gaze target and/or a gaze direction. A compass, a position tracking device (e.g., a GPS component), and/or an accelerometer can be used to produce data indicating a gaze direction, and data indicating the location of an object can be used to determine whether the object is a gaze target. Other data, such as speed and direction data, can also be used to identify a gaze direction and/or a gaze target. For example, if a user 101 observes a vehicle moving at a particular velocity and direction, data indicating that velocity and direction can be transmitted from the vehicle to the user device 102 and combined with other data, using known techniques, to determine whether the vehicle is a gaze target.
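As a rough illustration of that idea (not the patented implementation), the sketch below decides whether an object at a known location falls within the device's field of view. It assumes simplified 2-D geometry, a heading measured in degrees, and an arbitrary field-of-view angle; all of those are assumptions for the sketch.

```python
import math

def is_gaze_target(device_pos, gaze_heading_deg, object_pos, fov_deg=30.0):
    """Return True if an object at a known location lies within the device's
    field of view. Simplified 2-D geometry; the 30-degree field of view and
    the coordinate model are illustrative assumptions."""
    dx = object_pos[0] - device_pos[0]
    dy = object_pos[1] - device_pos[1]
    bearing_to_object = math.degrees(math.atan2(dy, dx)) % 360.0
    # Smallest angular difference between the gaze heading and the object bearing.
    diff = abs((bearing_to_object - gaze_heading_deg + 180.0) % 360.0 - 180.0)
    return diff <= fov_deg / 2.0

# Example: device at the origin looking along the +x axis, lamp slightly off-axis.
print(is_gaze_target((0.0, 0.0), 0.0, (5.0, 0.5)))  # True
```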

“In certain configurations, at least one sensor 120 can be directed toward at least one eye of the user. Data indicating the position and movement of at least one eye can be used to identify a gaze direction and/or a gaze target. Such configurations are useful when the user is viewing a rendering of an object on a surface, such as the hardware display surface 118. For example, an HMD can have two distinct objects rendered on the hardware display surface 118. One or more sensors 120 directed toward at least one eye of the user can generate eye position data indicating whether the user is viewing the first rendered object or the second rendered object. The eye position data can be combined with other data, such as gaze direction data, to identify a gaze target, which can be either an object rendered on the hardware display surface 118 or a real-world object viewed through the hardware display surface 118.

“A user device 102, such as an HMD, can allow the selection of an object 104 controlled by a controller device 103 by capturing and analyzing input data 113 that defines the performance of one or more gestures or other forms of input. For illustrative purposes, the selection of an object can also refer to the selection of the controller device 103 associated with the object, and the selection of a controller device 103 can also refer to the selection of the associated object 104. As will be explained in detail below, the selection of an object 104 causes the user device 102 to obtain address information for communicating with one or more controller devices 103.

Once an object 104 has been selected, the user device 102 is able to perform one or more actions. For example, the user device 102 can initiate communication with a remote computing device and/or cause the display of one or more graphical elements that display content, including commands for controlling the controller device 103. The user device 102 can also generate data defining an input command in response to a selection of a graphical element, and the data defining the input command may be sent to the controller device 103 for execution. Execution of such a command can cause the controller device 103 to control the object 104, obtain a status from the object 104, or perform other interactions with the object 104. Status data associated with the controller device 103 can also be communicated to the user device 102 for display to the user 101. FIGS. 3A through 3D provide an illustration of such techniques, in which a user device 102, such as an HMD, is used to interact with two controller devices 103 respectively associated with two lamps.

“FIG. 3A shows an example view 300 from the perspective of the user device 102, an HMD, with its field of view directed toward the first lamp 104D and the second lamp 104E, collectively referred to as "objects 104." The view 300 shows a real-world view of the objects 104 through the hardware display surface 118 of the user device 102. Although this example view 300 involves a real-world view of the objects 104, in other configurations image data generated by a camera directed toward the field of view can be used to render the objects 104 on the hardware display surface 118.

The user 101 can use the user device 102 to select an object 104 by directing the field of view of the user device 102 toward the object 104, for instance by turning the field of view toward an object of interest. In this example, the user 101 turns his or her field of view toward the first lamp 104D. The techniques described herein allow the selection of the first lamp 104D and/or the associated controller device, e.g., the fourth controller device 103D. Once an object has been selected, e.g., the first lamp 104D, the user device 102 may perform one or more actions. For example, the user device 102 can display a graphical element, generate an audio signal, and/or provide another notification of the object selection. As another example, in response to the selection of the object, the fourth controller device 103D can send control data to the user device 102; this data may include one or more commands for controlling the fourth controller device 103D and/or the first lamp 104D. The control data from the fourth controller device 103D can be used to communicate instructions for controlling a component, for instance to dim the lamp or to turn the lamp on or off. As yet another example, the user device 102 can be notified of a selected object by receiving status data defining the status of one or more components or devices.

“FIG. 3B illustrates an example of a view 300 that can be displayed to the user 101 when the first lamp 104D is selected. The selection can trigger the display of one or more graphical elements along with the real-world view of the first lamp 104D. In this example, a first graphical element 303A indicates that the first lamp 104D has been selected. The first graphical element 303A, a combination of rendered lines and circles, also indicates that the first lamp 104D has been turned on.

“Also shown in FIG. 3B, a second graphical element 303B contains status data and selectable control elements for controlling the fourth controller device 103D and the first lamp 104D. The status data in this example includes an identifier for the first lamp 104D, e.g., "Lamp 1: Family room," as well as the number of hours the lamp has been used. The second graphical element 303B can include a set of selectable control elements 305 that allow for the display of additional commands and/or status data, such as submenus (not illustrated). Based on the control data, the second graphical element 303B can also include one or more selectable control elements 306 for controlling the first lamp 104D and/or the fourth controller device 103D. Upon selection of one or more of the control elements 306, the user device 102 can process a command. In this example, the selectable control elements can be used to turn on the lamp or change its brightness.
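A minimal sketch of how such selectable control elements might be derived from control data follows, assuming the hypothetical CONTROL_DATA_115 descriptor sketched earlier. The ControlElement type and the label text are assumptions, not elements defined by the disclosure.

```python
# Build selectable control elements (a rough analog of elements 305/306)
# from the hypothetical control-data descriptor sketched earlier.
from dataclasses import dataclass

@dataclass
class ControlElement:
    label: str     # text rendered in the graphical element 303B
    command: str   # command name sent to the controller device when selected
    params: dict   # parameters the command accepts

def build_control_elements(control_data: dict) -> list[ControlElement]:
    elements = []
    for cmd in control_data["commands"]:
        label = cmd["name"].replace("_", " ").title()  # e.g. "Set Brightness"
        elements.append(ControlElement(label, cmd["name"], cmd["params"]))
    return elements
```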

“As shown in FIG. 3B, the second graphical element 303B indicates an association between its contents and the real-world view of the first lamp 104D. The shape of the second graphical element 303B can be used to indicate one or more associations; in this example, the second graphical element 303B is positioned around the first lamp 104D to show the association. An association between the displayed content and the fourth controller device 103D may also be shown, for example with respect to a rendering or a real-world view of the fourth controller device 103D. Although the shape of the graphical element is used in this example to indicate the association between an object and displayed content, any other suitable indicator, such as a color or an audio signal, can also be used.

“As summarized above, the sensor 120 and the input device 119 of the user device 102 can be used to capture gestures or other input to generate input data 113. The input data 113 and the control data 115 can be processed by the user device 102 to generate a command that executes computer-executable instructions at the controller device 103. FIG. 3C illustrates an example in which the user device 102 captures a gesture of the user 101 in order to generate and communicate a command for controlling the fourth controller device 103D and the first lamp 104D.

“FIG. 3C shows the example view 300, including the real-world view of the first lamp 104D and a user gesture interacting with the second graphical element 303B. In this example, the gesture of the user 101 is captured by the sensor 120, a camera directed toward the field of view of the user device 102. As shown in FIG. 3C, the user 101 performs a gesture indicating a selection of a control element configured to change the brightness of the first lamp 104D. Once such an input is detected, data defining a command can be communicated from the user device 102 to the fourth controller device 103D for controlling the first lamp 104D.
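The sketch below illustrates one plausible way the data defining such a command might be communicated to a controller device, building on the descriptor and ControlElement type sketched earlier. The HTTP/JSON transport, the "/commands" endpoint, and the function name are assumptions; the disclosure only requires that command data reach the controller device.

```python
import json
import urllib.request

def send_input_command(control_data: dict, element: "ControlElement", **params) -> None:
    """Send data defining an input command to the controller device 103.
    Transport and URL layout are illustrative assumptions."""
    payload = {"command": element.command, "params": params}
    url = f"http://{control_data['address']}/commands"
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # controller executes the command
        print("controller response:", resp.status)
```

For instance, a gesture handler on the user device might call send_input_command(CONTROL_DATA_115, brightness_element, level=40) when the hypothetical "Set Brightness" element is selected.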

“In certain configurations, the user device 102 can alter the display of the graphical elements related to the first lamp 104D based on one or more actions. In one example, the graphical elements can be removed when the user 101 turns away from the first lamp 104D. Other user actions, such as voice input, gestures, or any other type of input, can also result in the removal of the graphical elements. As will be explained in more detail below, other user actions can cause the user device 102 to take other actions.

In the present example, the user 101 interacts with the second lamp 104E by directing his or her field of view away from the first lamp 104D and toward the second lamp 104E. The techniques described herein can detect such activity, remove the display of the first and second graphical elements 303A and 303B, select the second lamp 104E, and display additional graphical elements related to the second lamp 104E and the fifth controller device 103E.

“FIG. 3D illustrates an example of a view 300 that can be displayed to the user 101 when the second lamp 104E is selected. In this example, a third graphical element 303C indicates that the second lamp 104E has been selected; in the illustrative case, the third graphical element 303C indicates that the second lamp 104E has been turned off. A fourth graphical element 303D is also displayed, containing status data and commands for controlling the fifth controller device 103E and the second lamp 104E. Like the example shown in FIG. 3B, the fourth graphical element 303D contains an identifier, status information, and selectable control elements.”

“The techniques described herein also allow users to interact with multiple objects and/or devices simultaneously. In the lamp example above, the user can control both lamps using the view shown in FIG. 3A. If both lamps are within the same viewing area, the user device 102 can select both lamps. In such configurations, one or more graphical elements may be displayed for controlling both lamps. Alternatively, or in addition, the user device 102 can generate a notification indicating that both lamps have been selected; such a notification can include an audio signal, e.g., a voice output indicating which lamps have been selected, or another signal suitable for providing such a notification. Once the lamps or other objects have been selected, the user device 102 is able to control them. The user can use gestures or other input commands, which could include a voice command, to control the lamps; for instance, the user might state, "turn all lights on," or "turn all lights off." This example is provided for illustrative purposes and is not to be construed as limiting. It can be appreciated that the configurations described herein can be used to control any number of objects and/or remote controller devices 103.

Consider the following scenario: a user stands outside a house. A number of objects can be selected by looking at the house from a distance. To control specific objects or groups of objects, the user can use gestures or other forms of input. For instance, the user can issue a voice command to "turn all lights off," or point a finger at a thermostat and say "turn the heat up to 71 degrees." One or more technologies can be used to interpret such input and to select and control individual objects or categories of objects by sending the appropriate commands to the controller devices 103.
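As a rough sketch of how a device might fan a category-level voice command out to several controllers, the code below uses a naive keyword match; the registry structure, category labels, and intent logic are assumptions, and dispatch would use something like the send_input_command helper sketched earlier.

```python
# Naive fan-out of a category-level voice command to several controller devices.
# The registry contents and keyword matching are illustrative assumptions.
DEVICE_REGISTRY = {
    "lights": ["controller-103D", "controller-103E"],
    "garage door": ["controller-103A"],
}

def handle_voice_command(utterance: str, control_data_by_id: dict) -> None:
    text = utterance.lower()
    for category, device_ids in DEVICE_REGISTRY.items():
        if category not in text:
            continue
        command = "power_off" if "off" in text else "power_on"
        for device_id in device_ids:
            descriptor = control_data_by_id.get(device_id)
            if descriptor is None:
                continue
            print(f"sending {command} to {device_id} at {descriptor['address']}")
            # send_input_command(descriptor, ...) would dispatch the command here.
```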

“FIGS. 4A through 4C show another example in which a user device 102 (in the form of an HMD) is used to interact with the garage door opener 104A. In this illustrative example, the user device 102 is directed toward the garage door opener 104A, creating a real-world view of it through the interface 118. FIG. 4A illustrates an example view 400 that can be displayed to the user 101 in this scenario. When such a scenario is detected using one or more of the techniques described herein, the user device 102 can select the garage door opener 104A and/or an associated computing device, e.g., the first controller device 103A. Upon selecting the garage door opener 104A, the user device 102 may perform one or more actions, which could include the display of a graphical element containing a selectable element for controlling the garage door opener 104A.

“FIG. 4B illustrates an example in which the user device 102 causes the display of a graphical element 402 along with the real-world view of the garage door opener 104A. The graphical element 402 in this example includes selectable control elements that allow the user device 102 to control the garage door opener 104A. One or more of the selectable control elements can be selected to cause the garage door to open or close. A selectable control element can also be selected to initiate a software update for the garage door opener 104A or the first controller device 103A. The graphical element 402 can also contain content such as status data. In the example of FIG. 4B, the status data indicates a maintenance due date as well as data defining recent user activity. In this example, the user activity identifies the last user of the garage door opener 104A and a timestamp of the last time the garage door opener 104A was used.

“As also illustrated in FIG. 4B, the graphical element 402 is configured to indicate an association between the graphical element 402 and the garage door opener 104A. In this example, the outline of the graphical element 402 contains lines that point toward the garage door opener 104A. While lines are used here to indicate an association between the contents of the graphical element 402 and the garage door opener 104A, it can be appreciated that any shape, color, or other indicator could be used to indicate such an association.

“FIG. 4C shows the example view 400, including the real-world view of the garage door opener 104A and a gesture interaction between the user and the graphical element 402. In this example, the gesture of the user 101 is captured by the sensor 120 of the user device 102. As shown in FIG. 4C, the user 101 performs a gesture indicating a selection of a control element for opening the garage door. Once such an input is detected, data defining a command can be communicated by the user device 102 to the first controller device 103A associated with the garage door opener 104A, which can then operate the garage door opener 104A.

“As summarized above, the user device 102 can interact with any computing device that is able to interact with an object. User interface elements can be displayed near, and attached to, any network-connected object. Graphical elements similar in appearance to those shown above can be used to interact with and control various objects, including the appliances illustrated in FIG. 1. For example, graphical elements can be associated with the range 104B or the refrigerator 104C, as described herein.

“In other instances, the techniques described herein can also be used to control, or communicate with, controller devices embedded in other objects, such as vehicles. One example of such an application is illustrated in FIGS. 5A through 5C.”

“FIG. 5A illustrates an example view 500 that can be shown to the user 101 as he or she approaches a vehicle 501. FIG. 5B illustrates the example view 500 showing two graphical elements that are displayed to the user 101 after the vehicle 501, or an associated computing device, has been selected using the techniques described herein. FIG. 5C illustrates the example view 500 that can be displayed to the user 101 when the user 101 performs a gesture causing the user device 102 to send commands to the computing device associated with the vehicle 501.”

“As shown in FIG. 5B, a first graphical element 502A is displayed on the interface 118 along with a real-world view of the vehicle 501. The first graphical element 502A can be used to show an association, e.g., an attachment, between displayed status data and a component of the vehicle. In this example, the displayed status data describes one tire, and the first graphical element 502A is used to indicate the attachment between that tire and the displayed status data.

“Also shown in FIG. 5B, a second graphical element 502B contains selectable control elements, including buttons for opening and closing the windows and for locking the doors. The selectable control elements of the second graphical element 502B are related to different components of the vehicle, e.g., the windows and the doors.

“FIG. 5C illustrates that an indication of one or more associations with a graphical element can be static or dynamic. For example, an arrow may be displayed continuously or in response to a specific action. FIGS. 5B and 5C illustrate how an association between content and an object, or a component of an object, can be displayed in response to a selection, such as a user selection of one or more selectable control elements. For example, the second graphical element 502B may be displayed as shown in FIG. 5B and then transition to the configuration shown in FIG. 5C when one or more actions, such as a user action, are taken.

“As shown in FIGS. 5B and 5C, the second graphical element 502B in this example indicates an association with a window through the use of an arrow. This example is provided for illustrative purposes and is not to be construed as limiting. It can be appreciated that any shape, color, or other indicator can be used to indicate an attachment between a control element and an object or a component of an object.

“In certain configurations, the techniques disclosed herein can also generate a graphical element to show the association between an object and the controller device 103 associated with that object. A controller device 103 may not be visible when an object is inspected visually. In such scenarios, the techniques described herein can generate one or more graphical elements representing the controller device 103. With reference to FIG. 1, if the fourth controller device 103D is embedded within the first lamp 104D, a graphical representation of the fourth controller device 103D can be generated near the real-world view of the first lamp 104D. The graphical representation can also include one or more lines or shapes to indicate an association between the object and the controller device 103; for example, a circle rendered with multiple lines may indicate an association between the rendering of the controller device 103 and the object.

“The graphical renderings described herein also allow a user to control objects that are not directly visible to the user device 102. For example, if the user is looking at a garage from the outside, the techniques disclosed herein can render an image of the garage door opener. As another example, a user can look toward multiple light switches in a room using the user device 102; renderings or real-world views of each switch can be displayed on the user device 102, and the user can control each switch via an input. In one example, the user can provide a universal input command, such as a gesture or a voice command to "turn all lights on" or "turn off all lights," to control multiple controller devices 103.”

“In certain configurations, the techniques disclosed herein allow a device to provide multiple interfaces, e.g., graphical elements or holograms, in different locations that are attached to one or more objects. For example, in a room having three-way lighting, a device may render one graphical element per switch. These configurations allow a user to control a single object, such as a light, from either of two switches. As another example, when the user looks at a house through the interface 118 of a user device 102, the user device 102 can show a virtual thermostat for each room. Each virtual thermostat can be configured to display status data and/or one or more selectable graphical elements that allow the user to control the thermostat. The user device 102 is also capable of interpreting voice input to send commands to individual thermostats, groups of thermostats, and/or all thermostats. In this example, a user can see multiple virtual thermostats attached to different rooms of a house and initiate an input command directed to a particular thermostat; for example, the user might say, "turn down the temperature in the attic by five degrees."
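A rough sketch of how such a room-targeted voice command might be routed to one of several virtual thermostats follows. The registry contents, the naive keyword/regex parse, and the command shape are all assumptions for illustration; they are not the disclosed implementation.

```python
import re

# Hypothetical registry of virtual thermostats attached to rooms of a house.
THERMOSTATS = {
    "attic":       {"controller": "controller-thermo-1", "setpoint_f": 68},
    "family room": {"controller": "controller-thermo-2", "setpoint_f": 70},
}

WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}

def route_thermostat_command(utterance: str) -> dict | None:
    """Pick the thermostat named in the utterance and compute its new setpoint."""
    text = utterance.lower()
    match = re.search(r"by (\w+) degrees", text)
    amount = match.group(1) if match else "0"
    delta = WORDS.get(amount, int(amount) if amount.isdigit() else 0)
    if "down" in text:
        delta = -delta
    for room, thermo in THERMOSTATS.items():
        if room in text:
            return {"controller": thermo["controller"],
                    "command": "set_temperature",
                    "setpoint_f": thermo["setpoint_f"] + delta}
    return None

print(route_thermostat_command("turn down the temperature in the attic by five degrees"))
```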

“Turning now to FIG. 6, aspects of a routine 600 for providing a mixed environment display with attached control elements are shown and described below. It should be understood that the operations of the methods disclosed herein are not necessarily presented in any particular order, and that it is possible to perform some or all of the operations in an alternative order. For ease of illustration and description, the operations are presented in the demonstrated order. Operations may be added, omitted, or performed simultaneously without departing from the scope of the appended claims.

It is also important to understand that the illustrated methods can be ended at any time and need not be performed in their entirety. Some or all of the operations of the methods, and/or substantially equivalent operations, can be performed by the execution of computer-readable instructions, as defined below. The term "computer-readable instructions," and variants thereof, as used in the description and claims, is used expansively to include routines, applications, program modules, programs, components, data structures, and the like. Computer-readable instructions can be implemented on various system configurations, including single-processor or multiprocessor systems, minicomputers, mainframe computers, personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, combinations thereof, and the like.”

“It should be noted that the logical operations described herein can be implemented (1) as a sequence of computer-implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system. The implementation is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations described herein are referred to variously as states, operations, structural devices, acts, or modules. These operations, acts, modules, and structural devices can be implemented in software, firmware, or any combination thereof.

“For example, the operations of the routine 600 can be implemented, at least in part, by an application, component, and/or circuit, such as the device module 111 and/or the server module 107. In some configurations, the device module 111 and/or the server module 107 can be a dynamically linked library (DLL), a statically linked library, functionality enabled by an application programming interface (API), a compiled program, an interpreted program, a script, or any other executable set of instructions. Data, such as the input data 113, can be stored in a data structure in one or more memory components, and can be retrieved from the data structure by addressing links or references to it.

“Although the following illustration refers to the components of FIG. 1 and FIG. 2, it can be appreciated that the operations of the routine 600 may also be implemented in many other ways. For example, the routine 600 may be implemented, at least in part, by a processor of a remote computer or a local circuit. In addition, one or more of the operations of the routine 600 may alternatively or additionally be implemented, at least in part, by a chipset working alone or in conjunction with other software modules. Any service, circuit, or application suitable for providing input data indicating the state or position of any device may be used in the operations described herein.

“With reference to FIG. 6, the routine 600 begins at operation 601, where the device module 111 receives identification data from one or more remote devices. The identification data may include data that links aspects of an object to aspects of a controller device 103. For example, the identification data could include an address of a controller device 103, such as an IP address, a MAC address, or any other network-based identifier. An object, or one or more characteristics of an object, can be linked to the address of the controller device 103. For instance, identification data can identify characteristics of an object, such as the first lamp 104D, including colors, shapes, and anomalies, and those characteristics can be linked to the address of an associated device, such as the fourth controller device 103D.

“In certain configurations, the identification data can include location data that associates a location with the address of a controller device 103. The location data can be used to identify a particular controller device 103 and/or an object controlled by the controller device 103. For example, the techniques described herein can use such information to look up a controller device's address when the user device 102 detects that it is within a predetermined range of the associated location.
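For illustration, a minimal sketch of an identification record linking an object's characteristics and location to a controller address might look like the following. The field names and record structure are assumptions for the sketch, not a format defined by the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class IdentificationRecord:
    """Links aspects of an object to aspects of a controller device 103.
    Field names and structure are illustrative assumptions."""
    object_name: str
    characteristics: dict = field(default_factory=dict)   # e.g. color, shape
    location: tuple[float, float] | None = None           # known object location
    controller_address: str = ""                          # IP/MAC or other identifier

# Hypothetical stored identification data for the first lamp 104D.
IDENTIFICATION_DATA = [
    IdentificationRecord(
        object_name="Lamp 1: Family room",
        characteristics={"color": "white", "shape": "cone shade"},
        location=(12.0, 4.5),
        controller_address="192.168.1.42:8080",
    ),
]
```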

The identification data can be provided by a remote resource or entered manually. In some configurations, the identification data can be generated by a device, such as the user device 102, operating in an initialization mode. In the initialization mode, the user can approach or look at an object and provide input directing the device to store data defining one or more characteristics of the object in a data structure for future reference. With reference to FIG. 5A, for example, the user 101 may be looking at the vehicle 501 through the interface 118 of the user device 102. A voice command, a hand gesture, or another form of input can direct the storage of identification data, which can include location data and/or metadata defining one or more characteristics of the object. The device or the user 101 can also provide and store an address for the computing device associated with the object, e.g., the vehicle 501.

“In certain configurations, while in the initialization mode, a sensor 120 of the user device 102 can acquire image data of an object, such as the vehicle 501. One or more characteristics of the object can be derived from the image data. The image data, location data, and/or data describing one or more characteristics of the object can be linked to the address of the controller device 103, stored in a data structure in memory, and accessed in the future. These examples are provided for illustrative purposes and are not to be construed as limiting. It can be appreciated that identification data, including data indicating an address of a controller device or otherwise identifying the controller device, can also be generated by other computers.
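Building on the hypothetical IdentificationRecord sketched above, an initialization-mode step might store a new record roughly as follows; the capture pipeline (image analysis, location fix) is assumed to happen elsewhere and the function name is an invention for this sketch.

```python
def initialize_object(object_name: str, captured_characteristics: dict,
                      device_location: tuple[float, float],
                      controller_address: str) -> IdentificationRecord:
    """Store a new identification record while the user device is in an
    assumed 'initialization mode'. Illustrative only."""
    record = IdentificationRecord(
        object_name=object_name,
        characteristics=captured_characteristics,
        location=device_location,
        controller_address=controller_address,
    )
    IDENTIFICATION_DATA.append(record)  # a real system would persist this
    return record
```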

“Once the identification data has been received or generated, the user device 102 can enter a regular operation mode, in which it is used to select one or more objects and interact with them via an associated controller device 103. The following process blocks describe how the user device 102 selects an object and initiates the communication of control data, status data, and/or command data to control a particular aspect of the object.

“Returning to FIG. 6, at operation 603, the device module 111 can select an object controlled by a controller device 103. An object can be selected in many ways. In one example, an object is selected based on an analysis of input data 113 defining user activity, such as voice commands, gestures, a gaze direction, or other types of input. In another example, an object can be selected based on input data 113 received from one or more systems, such as a social network, an email system, a phone system, an instant messaging system, or any other suitable platform.

In configurations using input data 113 that identifies a gaze target and/or a gaze direction, the input data 113 can be analyzed to determine whether an object, or a controller device 103 associated with an object, has been selected. For example, input data 113 can indicate that the user 101 is looking at an object. Such activity can be detected using image data captured by a sensor 120 directed toward the field of view. One or more image processing techniques can identify one or more characteristics of the object, and data defining those characteristics can be stored in a data structure. The data defining the characteristics can then be analyzed and compared with the identification data described above to identify the object and to obtain the address of the associated controller device 103.

“Referring again to the lamp example, it is assumed that the user device 102 stores identification data defining a set of characteristics of the first lamp 104D. The user device 102 can select the first lamp 104D if the set of characteristics in the identification data has at least a threshold level of similarity with the set of characteristics defined in the input data 113. Once the object is identified, the identification data can be used to identify the controller device 103 associated with the object.
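A rough sketch of such threshold matching follows, reusing the hypothetical IdentificationRecord and IDENTIFICATION_DATA from the earlier sketches. The Jaccard-style similarity score and the 0.6 threshold are illustrative assumptions; the disclosure only requires some threshold comparison of characteristics.

```python
def match_object(observed: dict, records: list[IdentificationRecord],
                 threshold: float = 0.6) -> IdentificationRecord | None:
    """Compare observed characteristics against stored identification data and
    return the best-matching record if it clears a similarity threshold."""
    best, best_score = None, 0.0
    observed_items = set(observed.items())
    for record in records:
        stored_items = set(record.characteristics.items())
        if not stored_items:
            continue
        score = len(observed_items & stored_items) / len(observed_items | stored_items)
        if score > best_score:
            best, best_score = record, score
    return best if best_score >= threshold else None

# Example: a camera-derived description of the lamp in the field of view.
record = match_object({"color": "white", "shape": "cone shade"}, IDENTIFICATION_DATA)
print(record.controller_address if record else "no match")
```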

Location data can also be used to select an object. For example, the techniques described herein can select an object when the user device 102 is within a predetermined distance of the object. Such techniques can also be combined with data defining the gaze direction; for instance, an object can be selected when the user device 102 is within a predetermined distance of the object and the field of view of the user device 102 is directed toward the object.
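Combining those two conditions might look like the sketch below, which reuses the math import and is_gaze_target helper from the earlier gaze sketch and the IdentificationRecord type; the 10-unit distance cutoff is an arbitrary assumption.

```python
def select_by_proximity_and_gaze(device_pos, gaze_heading_deg,
                                 records: list[IdentificationRecord],
                                 max_distance: float = 10.0) -> IdentificationRecord | None:
    """Select the first record whose object is both within a predetermined
    distance and inside the field of view (see is_gaze_target above)."""
    for record in records:
        if record.location is None:
            continue
        distance = math.dist(device_pos, record.location)
        if distance <= max_distance and is_gaze_target(device_pos, gaze_heading_deg,
                                                       record.location):
            return record
    return None
```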

The input data 113 can also include contextual data received from remote sources to assist in selecting an object. For example, if a user performs a number of searches, posts comments on a social network, or otherwise causes communication regarding a particular object, such as a vehicle or an appliance, that data can be used to identify and select the object. These examples are provided for illustration and are not exhaustive: input data 113 can be retrieved from any resource or service, including a personal storage resource such as one offered by a local or remote storage service. In one example, a user may point at an object or perform one or more gestures indicating an interest in the object, and data describing such user activity can be used to identify and select the object. In another example, a user's communication on a device, e.g., a phone call or an instant message, can be used to identify and select a particular object.

Moreover, a combination of different types of input data 113 can be used to identify an object. For instance, communication data and data identifying a gaze target can be used together to identify and select an object, and technologies for analyzing voice, text, image, or other data can be used to extract keywords and other information for identifying an object. These examples are provided for illustration and are not exhaustive: any type of input data 113 can be used to identify an object and/or a controller device 103. Once an object is selected, address information for the computing device associated with the object can be obtained from any number of resources, including the identification data.

Next, at operation 605, the device module 112 can receive control data 115 and status data 114 related to the selected object. The control data 115 can include commands, code, object code, scripts, or any other executable set of instructions that can be executed by the controller device 103 associated with the selected object. The control data 115 can also define a communication interface, such as an application programming interface (API), of the controller device 103 associated with the selected object.
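
As a purely illustrative sketch, control data 115 published by a controller device might resemble the following. The JSON layout, field names, and endpoint path are assumptions for this example only; the techniques described herein do not prescribe any particular format.

```python
# Illustrative sketch of control data 115 as it might be advertised by a controller device.
import json

control_data_115 = json.loads("""
{
  "device": "controller-103D",
  "api": {"base_url": "http://10.0.0.7:8080", "command_endpoint": "/command"},
  "commands": [
    {"name": "power",      "label": "On/Off",     "parameters": {"state": ["on", "off"]}},
    {"name": "brightness", "label": "Brightness", "parameters": {"level": "0-100"}}
  ],
  "status_fields": ["power_state", "hours_of_use"]
}
""")

# The user device can enumerate the advertised commands to build selectable control elements.
for command in control_data_115["commands"]:
    print(command["label"], command["parameters"])
```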

The status data 114 can include any information related to the selected object and/or the controller device 103 associated with the object. For example, if the object is a vehicle, the status data can describe the condition of the engine, the tires, or other components of the vehicle. The control data 115 and the status data 114 can be stored in a data structure in memory, or in another computer-readable storage medium, for later access. The control data 115 can be provided by the controller device 103 and communicated to the device module 112, or to another component of the user device 102, either as part of an initialization process or in response to a request issued by the user device 102.

Next, at operation 607, the device module 112 can cause the display of one or more graphical elements containing content, such as status data and/or one or more selectable control elements. As shown in the example views described herein, the user device 102 can display status data and one or more selectable control elements on the hardware display surface 118 along with a view of the selected object. The view of the selected object can include a rendering of the object or a real-world view of the object.

In some configurations, the selectable control elements are generated based on the control data 115 received at operation 605. A selectable control element can be configured so that its selection generates a command, also referred to herein as an “input command,” which can be communicated to the controller device 103 associated with the object.

As noted above, the graphical elements can be configured to indicate an association between an object and the displayed content. For example, a graphical element can indicate an association between displayed status data and a view of the object. In another example, a graphical element can indicate an association between a view of the object and the display of a control element. For illustrative purposes, displayed content that is associated in this manner with a view of an object is referred to herein as being “attached” to that view.

As summarized above, the size, shape, and/or color of a graphical element can be used to indicate an association between an object and displayed content. FIG. 3B shows one example, in which the shape of a graphical element indicates the association between a real-world view of an object and the contents of the graphical element. In FIG. 4B, lines extending from the graphical element illustrate the association between the real-world view of the garage door opener and the contents of the element. As illustrated in FIG. 5B and FIG. 5C, other indicators, such as an arrow, can also be used to show an association between a component of an object and displayed content.

Operation 607 can also include one or more other actions performed in response to the selection of an object. For example, communication can be established between two or more devices in response to the selection, or a notification can be generated in response to the selection. The notification can be an audio signal, a graphical display, or any other suitable mechanism.

Next, at operation 609, the device module 112 can receive a selection of a control element. As described in the examples above, the user device 102 can receive input from the user 101 indicating the selection of a selectable control element. For instance, the sensor 120, which can be a camera, and/or one of the input devices 119, such as a microphone or keypad, can generate input data 113 defining any form of user activity, such as a gesture, a voice command, or a gaze direction. The user device 102 can interpret the input data 113 to identify the selected control element and to trigger one or more actions, such as the communication of a command associated with the selected control element.

At operation 611, the device module 112 communicates one or more commands to the controller device 103 associated with the selected object. The command can be generated and communicated in response to the selection of at least one selectable control element, and it can be communicated using any suitable protocol and format. In one example, the command is sent to an API of the controller device 103. In another example, the command is communicated using a Web-based protocol or a message-based protocol.
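
As a concrete but hypothetical illustration of operation 611, the sketch below posts a command to the command endpoint advertised in the control data sketch above. The URL, payload shape, and use of HTTP are assumptions made for this example, since the techniques permit any protocol or format.

```python
import json
import urllib.request

def send_command(control_data: dict, name: str, parameters: dict) -> int:
    """POST an input command to the controller device's advertised API endpoint."""
    url = control_data["api"]["base_url"] + control_data["api"]["command_endpoint"]
    body = json.dumps({"command": name, "parameters": parameters}).encode("utf-8")
    request = urllib.request.Request(url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request, timeout=5) as response:
        return response.status   # e.g., 200 when the controller device accepts the command

# Example: a gesture selecting the brightness control element might result in a call such as:
# send_command(control_data_115, "brightness", {"level": 40})
```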

Summary for “Mixed Environment Display of Attached Control Elements”

“The technologies described herein allow for a mixed environment display with attached control elements. Configurations allow users of a first computing devices to interact with remote computing devices that can control objects, such as light, appliances, appliances, or any other suitable object. Some configurations allow the first computing device to cause one or more actions. This includes the selection of an object and the display of a User Interface. The input data is captured and analysed, which defines the performance of one or several gestures. For example, a user can look at the object controlled from the second computing device. The displayed user interface can include graphical elements that control the object. In some cases, the elements can also be displayed with a real world view of the object. You can use the graphical elements to show associations between the displayed content and the real-world object view, such as status data or control buttons. Graphically attached user interface elements allow users to quickly identify the associations between objects observed and content displayed.

“In certain configurations, the first computing device (e.g. a head-mounted display (HMD)), may include a transparent section that allows a user to see objects through the surface’s hardware display surface. The hardware display surface can also display rendered elements over and around objects that are viewed through it. The control data can be obtained by the first computing device. This data can define one or more commands to control a second computing device. For example, the second computing device could be configured to control an object such as an appliance or lamp or garage door opener.

The first computing device can then display, on the hardware display surface, a graphical element associated with the one or more commands. The graphical element can be displayed along with a real-world view of the object seen through a transparent portion of the display surface, and it can also display status data received from the second computing device. The first computing device can interpret inputs or gestures performed by the user to generate data defining an input command. The data defining the input command can be communicated from the first computing device to the second computing device, and in response to the input command, the second computing device can control or otherwise manipulate the object.

The first computing device can also allow the user to select a remote computing device by using natural gestures or other forms of input. For example, a user can select the second computing device by looking, through the display surface of the first computing device, at the second computing device or at an object controlled by the second computing device. After such a selection, the first computing device can, among other actions, initiate communication with the second computing device and/or control the object by communicating with the second computing device.

It should be appreciated that the above-described subject matter can be implemented as a computer-controlled apparatus, a computer-implemented process, a computing system, or an article of manufacture such as a computer-readable medium. These and various other features will be apparent from a reading of the following detailed description and a review of the associated drawings.

This Summary introduces a selection of concepts in a simplified form that are described further in the detailed description below. This Summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all of the disadvantages noted in this disclosure.

The technologies described herein provide a mixed environment display of attached control elements. The techniques allow a user of a first computing device to interact with a remote computing device that controls an object, such as a light, a vehicle, a thermostat, or any other suitable object. The configurations described herein allow the first computing device, through the capture and analysis of input data, to perform one or more actions. The displayed user interface can include graphical elements for controlling the object, and in some configurations the graphical elements are displayed with a real-world view of the object. The graphical elements can indicate associations between the displayed content, such as status data or control buttons, and the real-world view of the object. Graphically attaching user interface elements in this manner allows a user to readily identify the associations between observed objects and displayed content.

In certain configurations, the first computing device, e.g., a head-mounted display (HMD), includes a hardware display surface having a transparent section that allows the user to view objects through it. The hardware display surface can also display rendered graphical elements over and around the objects viewed through it. The first computing device can obtain control data defining one or more commands for controlling a second computing device. The second computing device can be, for instance, a controller for a light, an appliance, or any other object that can be controlled by a computer.

The first computing device can then display, on the hardware display surface, a graphical element associated with the one or more commands. The graphical element can be displayed along with a real-world view of the object seen through a transparent portion of the display surface, and it can also display status data received from the second computing device. The first computing device can interpret inputs or gestures of the user to generate data defining an input command, and the data defining the input command can be communicated by the first computing device to the second computing device for controlling and/or manipulating the object based, at least in part, on the gesture or input of the user.

The first computing device can also allow the user to select a remote computing device by using gestures or other forms of input. For example, a user can select a remote computing device simply by looking at it, or at an object it controls, through the display surface of the first computing device. After selecting a remote computing device, the first computing device can initiate communication with it and/or perform any of the interactions described herein.

The technologies described herein allow a user to interact with many remote computing devices without having to navigate through large volumes of machine addresses or credentials. Instead, the disclosed technologies allow a user to selectively interact with a remote device by looking at the device or at an object controlled by the device.

By providing an interactive, mixed-environment display, a user can view graphical elements containing status data and contextually relevant controls while also viewing a real-world view of the object or device being controlled. The technologies described herein can thereby improve a user's interaction with one or more devices, which may reduce the number of inadvertent inputs, reduce the consumption of processing resources, and mitigate the use of network resources. Other technical effects may also be realized from implementations of the technologies disclosed herein.

It should also be appreciated that the above-described subject matter can be implemented as a computer-controlled apparatus, a computer process, a computing system, or an article of manufacture such as a computer-readable storage medium. These and various other features will be apparent from a reading of the following Detailed Description and a review of the associated drawings. The claimed subject matter is not limited to implementations that solve any or all of the disadvantages noted in this disclosure.

The subject matter described herein is presented in the context of techniques for providing a mixed environment display of attached control elements. However, it can be appreciated that the techniques described herein may also apply to any scenario in which two or more people are communicating with each other.

As will become clearer herein, it can be appreciated that implementations of the techniques and technologies described herein may include the use of solid state circuits, digital logic circuits, computer components, and/or software executing on one or more devices. Signals described herein may include analog and/or digital signals for communicating a changed state, movement, or any data associated with the detection of motion. Gestures can be captured by any type of sensor or input device.

While the subject matter described herein is presented in the general context of program modules that execute in conjunction with an operating system and application programs on a computer system, those skilled in the art will recognize that other implementations may be performed in combination with other types of program modules. Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the subject matter described herein may be practiced with other computer system configurations, including multiprocessor systems, hand-held devices, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like.

In the following detailed description, references are made to the accompanying drawings, which form a part hereof and in which specific configurations or examples are shown by way of illustration. Like numerals represent like elements throughout the several figures. Aspects of a computing device, computer-readable storage media, and computer-implemented methods for providing a mixed environment display of attached control elements are described in greater detail below, and the services and applications that may implement the functionality and techniques described herein are discussed with reference to FIGS. 7-9.

FIG. 1 shows aspects of one example environment 100 for providing a mixed environment display of attached control elements. In one illustrative example, the example environment 100 can include one or more servers 110, one or more networks 150, and one or more user devices 102 associated with a user 101. The environment 100 can also include controller devices 103A-103E (collectively referred to herein as “controller devices 103”). For illustrative purposes, the user device 102 and the controller devices 103 are also referred to herein as “computing devices.”

In the example environment 100 shown in FIG. 1, the first controller device 103A is configured to interact with and control a garage door opener 104A. The second controller device 103B is configured to interact with and control a range 104B. The third controller device 103C is configured to interact with and control a refrigerator 104C. The fourth controller device 103D is configured to interact with and control a first lamp 104D, and the fifth controller device 103E is configured to interact with and control a second lamp 104E. For illustrative purposes, the garage door opener 104A, the range 104B, the refrigerator 104C, the first lamp 104D, and the second lamp 104E are collectively referred to herein as “objects 104.” As will be described in more detail below, the techniques described herein allow the user 101 to interact with the objects 104 by the use of gestures and other forms of input.

The example shown in FIG. 1 is provided for illustrative purposes and is not to be construed as limiting. It can be appreciated that the example environment 100 can include any number of controller devices 103, any number of user devices 102, any number of users 101, any number of servers 110, and/or any number of objects 104. It can also be appreciated that the objects 104 can include items other than those shown in FIG. 1.

A controller device 103 can operate as a stand-alone device, or it can operate in conjunction with other computers, such as the servers 110 or other controller devices 103. In some configurations, the controller device 103 is a single-board computer configured to control other devices or objects. Commercially available examples of such controller devices include the RASPBERRY PI, as well as the PHOTON (Wi-Fi) and ELECTRON (2G/3G cellular) devices produced by PARTICLE.IO. The controller device 103 can also be a personal computer or any other computing device having components for communicating with a network and for interacting with one or more objects.

The user device 102 can likewise operate as a stand-alone device, or it can operate in conjunction with other computers, such as the servers 110 or other user devices 102. The user device 102 can be a personal computer, a wearable device such as an HMD, or any other device having components for communicating with a network and for interacting with the user 101. The user device 102 can be configured to receive input commands from the user 101, including gestures captured by an input device such as a camera, touchpad, or keyboard.

The user device 102, the controller devices 103, the servers 110, and/or any other computer can be interconnected through one or more local and/or wide area networks, such as the network 150. In addition, the computing devices can communicate using any technology, such as BLUETOOTH, WI-FI, WI-FI DIRECT, NFC, or any other suitable wired, wireless, or light-based technology. It should be appreciated that many more types of connections may be utilized than those described herein.

The servers 110 can be a personal computer, a server farm, a large-scale system, or any other computing system having components for processing, coordinating, and collecting data. In some configurations, the servers 110 are associated with one or more service providers. A service provider can be any company, person, or other entity that leases or shares computing resources to facilitate aspects of the techniques disclosed herein. The servers 110 can also include components and services, such as the application services shown in FIG. 8, for executing one or more aspects of the techniques described herein.

Referring now to FIG. 2, aspects of the controller devices 103, the user device 102, and the servers 110 are described in more detail. In some configurations, a server 110 can include a local memory 180, also referred to herein as a “computer-readable storage medium,” that stores data such as input data 113 generated by a device or received from another resource. The local memory 180 of the servers 110 can also store status data 114 related to one or more computing devices or objects. In some configurations, the servers 110 can store duplicate copies of data stored on the controller devices 103 and/or the user devices 102, allowing a centralized service to coordinate aspects of a number of client computers, such as the controller devices 103 and/or the user devices 102. It can be appreciated that the servers 110 can store other types of data beyond those shown in FIG. 2.

In certain configurations, a server 110 can include a server module 107 configured to execute some or all of the techniques described herein. The server 110 can also include an interface 118, which can include a screen for displaying data, and an input device 119, which can include a keyboard, a mouse, a microphone, or any other device capable of generating signals or data defining a user's interaction with the server 110. In some configurations, the server 110 can also be configured to allow remote access.

In certain configurations, an individual controller device 103 can include a local memory 180, also referred to herein as a “computer-readable storage medium,” that stores data such as input data 113 generated by a device or received from another resource. The local memory 180 can also store status data 114 related to one or more controller devices 103 or objects. In addition, the local memory 180 of a controller device 103 can store a controller module 121 configured to execute some or all of the techniques described herein. The controller module 121 can operate as a stand-alone module, or it can operate in conjunction with other modules or computers, such as the servers 110 or the user devices 102.

The local memory 180 of the controller device 103 can also store control data 115, which can include commands, code, object code, scripts, or any other executable set of instructions that can be executed by the controller device 103 to accomplish one or more tasks. The control data 115 can also define a communication interface, such as an application programming interface (API), that allows remote computers to send data defining a command, also referred to herein as an “input command,” to the controller device 103.

The control data 115 can be provided by a controller device 103, or it can be provided by another resource, such as a service publishing aspects of the control data 115. Once the control data 115 has been obtained by a computing device, such as the user device 102, the computing device can issue commands in accordance with the control data 115 to control aspects of the controller device 103 and to influence or control the associated objects.
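
To make the publish-and-command pattern concrete, the sketch below shows one hypothetical way a controller device might expose its control data and accept input commands over HTTP. The routes, payload shapes, and use of Python's standard library HTTP server are assumptions for illustration only; the techniques described herein permit any protocol or interface.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CONTROL_DATA_115 = {
    "commands": [{"name": "power", "parameters": {"state": ["on", "off"]}}],
    "api": {"command_endpoint": "/command"},
}

class ControllerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Publish the control data so a user device can discover the available commands.
        if self.path == "/control-data":
            body = json.dumps(CONTROL_DATA_115).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self):
        # Accept an input command; a real device would drive the control component 122 here.
        if self.path == "/command":
            length = int(self.headers.get("Content-Length", 0))
            command = json.loads(self.rfile.read(length) or b"{}")
            print("Executing command:", command)   # placeholder for relay/actuator logic
            self.send_response(200)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ControllerHandler).serve_forever()
```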

A controller device 103 can also include a control component 122 for interacting with one or more objects 104. For example, the control component 122 can include an electrical relay for controlling power delivered to one or more objects 104, an actuator for controlling the movement of one or more objects 104 or components of the objects 104, or any other device enabling control of, or communication with, an object 104. An individual controller device 103 can also include a sensor 120, which can include a camera, a touch sensor, a proximity sensor, a depth field camera, or any other input device for generating status data 114 regarding an object 104 or a controller device 103.

In some configurations, the user device 102 includes a local memory 180 that stores input data 113, which can be generated by a device or a user. The input data 113 can be generated or received by any component of the user device 102, such as the sensor 120 or an input device 119. The input data 113 can also be generated by an internal or external resource, such as a GPS component, a compass, or another suitable component, examples of which are shown in FIG. 9. In some configurations, the sensor 120 can operate as a location-tracking device. One or more input devices, such as a camera or a microphone, can generate input data 113 defining any form of user activity, such as a gesture, a voice command, or a gaze direction. The input data 113 can also be received from one or more systems, such as an email system, a search engine, or a social network, including the services and resources 814-824 shown in FIG. 8. Based on input data 113 received from one or more sources, the user device 102 can perform one or more actions, such as the selection of, or an interaction with, an object.

The memory 180 of the user device 102 can also store status data 114 related to one or more components, devices, or objects. In addition, the memory 180 can store a device module 111 configured to manage the techniques described herein and the interactions between the user 101 and the user device 102. As will be described below, the device module 111 can be configured to process and communicate control data 115, status data 114, input data 113, and other data. The device module 111 can also be configured to execute surface reconstruction algorithms and other algorithms for locating objects 104 and capturing images of objects 104. The device module 111 can be in the form of a productivity application, a game application, an operating system component, or any other application. As will also be described below, the device module 111 can allow a user to interact in a virtual reality environment or an augmented reality environment.

The user device 102 can also include a hardware display surface 118, also referred to herein as an “interface 118,” configured to display renderings and other views as described herein. The hardware display surface 118 can include one or more components, such as a projector or a flat or curved screen, or any other component enabling the display of a rendered object, data, or a view of a real-world object to the user 101. In some configurations, the hardware display surface 118 can be configured to cover at least one eye of a user; in one illustrative example, it can be configured to cover both eyes of the user 101. The hardware display surface 118 can render one or more images to produce a monocular, binocular, or stereoscopic view of one or more objects.

The hardware display surface 118 can be configured to allow the user 101 to view objects from different environments. In some configurations, the hardware display surface 118 can display a rendering or an image of an object. In other configurations, selected sections of the hardware display surface 118 are transparent, allowing the user 101 to view objects in his or her surrounding environment. The user's perspective when looking at an object through the hardware display surface 118 is referred to herein as a “real-world view” of the object.

The hardware display surface 118 is described herein as having a “field of view” or “field of vision,” which is defined as the observable area visible through the hardware display surface 118 at any given moment. In the examples described herein, the direction of the field of view, also referred to as the “gaze direction,” indicates the direction in which an object is viewed through the hardware display surface 118. Gaze direction data, which defines the direction of the field of view of the hardware display surface 118 and of the user device 102, can be generated by any number of devices, such as a compass, a GPS component, a camera, and/or a combination thereof for generating position and direction data. Gaze direction data can also be generated by analyzing image data of an object having a known location.
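
As one hedged illustration of how gaze direction data might be combined with a known object location, the sketch below checks whether an object falls within the device's horizontal field of view. The angular math, coordinate choice, and field-of-view width are assumptions made for this example.

```python
import math

def bearing_to(device_lat: float, device_lon: float, obj_lat: float, obj_lon: float) -> float:
    """Approximate compass bearing (degrees) from the device position to the object position."""
    d_lon = math.radians(obj_lon - device_lon)
    lat1, lat2 = math.radians(device_lat), math.radians(obj_lat)
    x = math.sin(d_lon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon)
    return math.degrees(math.atan2(x, y)) % 360.0

def in_field_of_view(gaze_direction: float, bearing: float, fov_degrees: float = 60.0) -> bool:
    """True when the object's bearing lies within the device's horizontal field of view."""
    delta = abs((bearing - gaze_direction + 180.0) % 360.0 - 180.0)
    return delta <= fov_degrees / 2.0

# Example: the compass reports a gaze direction of 90 degrees (due east).
bearing = bearing_to(47.6205, -122.3493, 47.6205, -122.3480)
print(in_field_of_view(90.0, bearing))
```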

As will be described in more detail below, computer-generated renderings of objects and/or data can be displayed in, around, or near transparent sections of the hardware display surface 118, enabling the user to view the computer-generated renderings along with real-world views of objects observed through selected portions of the display surface.

Some configurations described herein provide both a “see-through display” and an “augmented reality display.” For illustrative purposes, the “see-through display” can include a transparent lens on which content can be displayed. The “augmented reality display” can include an opaque display configured to display content over a rendering of an image, which can be from any source, such as a video feed from a camera directed at the environment surrounding the user device. Some examples herein describe the display of rendered content over an image, and other examples describe techniques that display rendered content on a “see-through display,” enabling the user to see a real-world view of an object along with the rendered content. It can be appreciated that the techniques described herein can apply to a “see-through display,” an “augmented reality display,” or variations and combinations thereof. For illustrative purposes, devices configured to enable a “see-through display,” an “augmented reality display,” or a combination thereof are referred to herein as devices capable of providing a “mixed environment” display.

The user device 102 can also include an input device 119, such as a keyboard, a mouse, a microphone, a camera, a depth camera, a touch sensor, or any other device enabling the generation of data characterizing user interactions with the user device 102. As shown in FIG. 2, an input device 119, such as a microphone or a camera, can be mounted on the front of the user device 102.

The user device 102 can also include one or more sensors 120, such as a sonar sensor, an infrared sensor, a compass, or an accelerometer. The sensor 120 can generate data characterizing interactions, such as user gestures. The one or more sensors 120 and/or an input device 119 can generate input data 113 defining a position and other aspects of movement, e.g., speed, direction, and acceleration, of one or more objects, which can include the user device 102, physical items near the user device 102, and/or users 101. The input device 119 and/or the sensor 120 can also generate input data 113 defining the presence and characteristics of an object; for example, such components can generate data defining a characteristic such as the color, size, shape, or other physical property of an object.

Such configurations enable the user device 102 to capture and interpret image data of the field of view described above and to generate data defining characteristics of an object. Characteristic data can also include the location of the object. Data defining the location of an object can be generated by a number of components, such as a GPS component, a network-based service, and/or one or more of the sensors described herein.

In an illustrative example, the input device 119 and/or the sensor 120 can generate input data 113 identifying the object the user 101 is looking at, also referred to herein as a “gaze target.” In some configurations, a gaze target can be identified by the use of the sensor 120 and/or the input device 119 generating data that identifies the direction in which the user is looking, also referred to herein as the “gaze direction.” For example, a sensor 120, such as a camera mounted on the user device 102 and directed toward the user's field of view, or the input device 119, can generate image data that can be analyzed to determine whether an object is located in a predetermined area of the image data. For instance, a device can determine that an object is a gaze target when the object is located within a predetermined portion of at least one image, such as the center of the image.

Data from various input devices 119 and sensors 120 can be used to identify a gaze target and/or a gaze direction. For example, a compass, a position-tracking device, e.g., a GPS component, and/or an accelerometer can produce data indicating a gaze direction, and that data can be combined with data indicating the location of an object to determine whether the object is a gaze target. Other data, such as speed and direction data, can also be used to identify a gaze direction and/or a gaze target. For instance, if a user 101 is observing a vehicle traveling at a particular velocity and direction, data defining that velocity and direction can be communicated from the vehicle to the user device 102 and combined with other data, using known techniques, to determine whether the vehicle is a gaze target.
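
A minimal sketch of the image-centered approach described above might look like the following; the bounding-box representation and the size of the central region are assumptions made for illustration.

```python
def is_gaze_target(frame_width: int, frame_height: int,
                   box: tuple[int, int, int, int], central_fraction: float = 0.2) -> bool:
    """Return True when a detected object's bounding box (x, y, w, h) overlaps a
    central region of the captured image, treating the image center as the gaze point."""
    cx, cy = frame_width / 2.0, frame_height / 2.0
    half_w, half_h = frame_width * central_fraction / 2.0, frame_height * central_fraction / 2.0
    x, y, w, h = box
    overlap_x = x < cx + half_w and x + w > cx - half_w
    overlap_y = y < cy + half_h and y + h > cy - half_h
    return overlap_x and overlap_y

# Example: a lamp detected near the middle of a 1280x720 frame is treated as the gaze target.
print(is_gaze_target(1280, 720, (600, 320, 120, 140)))
```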

In certain configurations, at least one sensor 120 can be directed toward at least one eye of the user, and data indicating the position and/or movement of the eye can be used to identify a gaze direction and/or a gaze target. Such configurations are useful when the user is viewing a rendering of an object on a surface such as the hardware display surface 118. In one example, an HMD displaying two distinct rendered objects on the hardware display surface 118 can use one or more sensors 120 directed toward at least one eye of the user to generate eye position data indicating whether the user is looking at the first rendered object or the second rendered object. The eye position data can also be combined with other data, such as gaze direction data, to identify a gaze target, which can be a rendered object displayed on the hardware display surface 118 or a real-world object viewed through the hardware display surface 118.

A user device 102, such as an HMD, can enable the selection of an object 104 controlled by a controller device 103 by capturing and analyzing input data 113 defining the performance of one or more gestures or other forms of input. For illustrative purposes, the selection of an object can also refer to the selection of the controller device 103 associated with the object, and the selection of a controller device 103 can refer to the selection of the object it controls. As will be described in more detail below, the selection of an object 104 causes the user device 102 to obtain address information for communicating with one or more controller devices 103.

Once an object 104 is selected, the user device 102 can perform one or more actions. For example, the user device 102 can initiate communication with one or more remote computing devices and/or cause the display of one or more graphical elements containing content, including selectable elements for issuing commands to the controller device 103. The user device 102 can also generate data defining an input command in response to the selection of a graphical element, and the data defining the input command can be communicated to the controller device 103 for execution. Execution of such a command can cause the controller device 103 to control the object 104, obtain a status from the object 104, or perform other interactions with the object 104. Status data associated with the controller device 103 and/or the object 104 can also be communicated to the user device 102 for display to the user 101. FIG. 3A through FIG. 3D illustrate one example of such techniques, in which a user device 102 in the form of an HMD is used to interact with two controller devices 103D and 103E, respectively associated with two lamps.

FIG. 3A shows an example view 300 from the perspective of the user device 102, an HMD whose field of view is directed toward the first lamp 104D and the second lamp 104E, collectively referred to as “objects 104.” In this example, the view 300 includes a real-world view of the objects 104 through the hardware display surface 118 of the user device 102. Although this example view 300 illustrates a configuration involving a real-world view of the objects 104, image data generated by a camera directed toward the field of view can also be used to display renderings of the objects 104 on the hardware display surface 118.

The user 101 can use the user device 102 to select an object 104 by directing the field of view of the user device 102 toward the object 104, e.g., by turning toward an object of interest. In the present example, the user 101 turns his or her field of view toward the first lamp 104D. Using the techniques described herein, this activity can cause the selection of the first lamp 104D and/or the associated controller device, e.g., the fourth controller device 103D. Once an object, such as the first lamp 104D, is selected, the user device 102 can perform one or more actions. For example, the user device 102 can display a graphical element, generate an audio signal, and/or provide another notification of the selection. In another example, in response to the selection, the fourth controller device 103D can send control data to the user device 102, and this data can include one or more commands for controlling the fourth controller device 103D and/or the first lamp 104D. The control data from the fourth controller device 103D can define commands for controlling a component, for example, dimming the lamp or turning the lamp on or off. In yet another example, the user device 102 can receive, in response to the selection, status data defining the status of one or more components, devices, or objects.

FIG. 3B shows an example view that can be displayed to the user 101 when the first lamp 104D is selected. The selection can cause the display of one or more graphical elements along with the real-world view of the first lamp 104D. In this example, a first graphical element 303A is displayed in association with the first lamp 104D. The first graphical element 303A, a combination of rendered lines and circles, indicates that the first lamp 104D is turned on.

Also shown in FIG. 3B, a second graphical element 303B contains status data and selectable control elements for controlling the fourth controller device 103D and the first lamp 104D. In this example, the status data includes an identifier for the first lamp 104D, e.g., “Lamp 1: Family room,” as well as the number of hours the lamp has been in use. The second graphical element 303B can also include a set of selectable control elements 305 for displaying additional commands and/or status data, such as submenus (not shown). Based on the control data, the second graphical element 303B can further include one or more selectable control elements 306 for controlling the first lamp 104D and/or the fourth controller device 103D. Upon selection of one or more of the control elements 306, the user device 102 can process and communicate an associated command. In this example, the selectable control elements 306 can be used to turn the lamp on or off and to change its brightness.
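
A rough sketch of how a device module might turn such control data into selectable control elements attached to a selected object is shown below. The element structure and callback wiring are illustrative assumptions and are not prescribed by the techniques described herein.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ControlElement:
    """A selectable control element rendered next to the real-world view of an object."""
    label: str
    on_select: Callable[[], None]   # invoked when a gesture or voice input selects the element

def build_control_elements(control_data: dict, send) -> list[ControlElement]:
    """Create one selectable element per command advertised in the control data."""
    elements = []
    for command in control_data.get("commands", []):
        name = command["name"]
        elements.append(ControlElement(
            label=command.get("label", name),
            on_select=lambda n=name: send(n, {}),   # default parameters for simplicity
        ))
    return elements

# Example: build elements for the lamp's controller and simulate a gesture selecting the first one.
elements = build_control_elements(
    {"commands": [{"name": "power", "label": "On/Off"}, {"name": "brightness", "label": "Brightness"}]},
    send=lambda name, params: print("sending command:", name, params),
)
elements[0].on_select()   # prints: sending command: power {}
```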

As also shown in FIG. 3B, the second graphical element 303B indicates an association between its contents and the real-world view of the first lamp 104D. In this example, the shape of the second graphical element 303B is used to indicate one or more associations: the second graphical element 303B is positioned and shaped around the first lamp 104D to show those associations. A graphical element can also indicate an association between displayed content and the fourth controller device 103D, which can involve a rendering or a real-world view of the fourth controller device 103D. Although the shape of the graphical element is used here to indicate the association between an object and displayed content, any other suitable indicator, such as a color or an audio signal, can also be used.

As summarized above, the sensor 120 and the input device 119 of the user device 102 can capture gestures or other forms of input to generate input data 113. The input data 113 and the control data 115 can be processed by the user device 102 to generate a command for executing computer-executable instructions at the controller device 103. FIG. 3C illustrates one example in which the user device 102 captures a gesture of the user 101 to generate and communicate a command for controlling the fourth controller device 103D and the first lamp 104D.

FIG. 3C shows the example view 300, which includes the real-world view of the first lamp 104D and a gesture of the user 101 interacting with the second graphical element 303B. In this example, the sensor 120, a camera directed toward the field of view of the user device 102, captures the user's gesture. As shown in FIG. 3C, the user 101 performs a gesture indicating a selection of a control element configured to change the brightness of the first lamp 104D. Once such an input is detected, data defining a corresponding command can be communicated from the user device 102 to the fourth controller device 103D for controlling the first lamp 104D.

In certain configurations, the user device 102 can modify or remove the display of the graphical elements related to the first lamp 104D based on one or more user actions. In one example, the graphical elements are removed when the user 101 turns his or her field of view away from the first lamp 104D. Other forms of user activity, such as a voice input, a gesture, or any other type of input, can also cause the removal of the graphical elements or cause the user device 102 to take other actions, as described in more detail below.

In the present example, the user 101 then interacts with the second lamp 104E by, for instance, looking at the second lamp 104E, i.e., directing the field of view of the user device 102 away from the first lamp 104D and toward the second lamp 104E. Using the techniques described herein, the user device 102 can detect such activity, remove the display of the first and second graphical elements 303A and 303B, select the second lamp 104E, and display additional graphical elements related to the second lamp 104E and the fifth controller device 103E.

FIG. 3D shows an example view that can be displayed to the user 101 when the second lamp 104E is selected. In this example, a third graphical element 303C is displayed in association with the second lamp 104E; in the illustrative example, the third graphical element 303C indicates that the second lamp 104E is turned off. A fourth graphical element 303D is also displayed, containing status data and selectable control elements for controlling the fifth controller device 103E and the second lamp 104E. Like the second graphical element 303B shown in FIG. 3B, the fourth graphical element 303D contains an identifier, status information, and selectable control elements.

The techniques described herein also allow a user to interact with multiple objects and/or devices simultaneously. In the above example involving the two lamps, the user can control both lamps from the view shown in FIG. 3A. When both lamps are within the field of view, the user device 102 can select both lamps, and one or more graphical elements can be displayed for controlling both lamps. Alternatively, or in addition, the user device 102 can generate a notification indicating that both lamps have been selected, such as an audio signal, e.g., a voice output identifying the selected lamps, or another suitable signal. Once the lamps or other objects are selected, the user device 102 can control them in response to gestures or other input commands, which can include a voice command. For example, the user might state, “turn all lights on,” or, “turn all lights off.” This example is provided for illustrative purposes and is not to be construed as limiting; it can be appreciated that the configurations described herein can be used to control any number of objects and remote controller devices 103.

Consider a scenario in which a user is standing outside a house. By looking at the house from a distance, the user can select a number of objects within it. The user can then use gestures or other forms of input to control specific objects or categories of objects. For instance, the user can issue a voice command to “turn all lights on” or “turn all lights off,” or point at a thermostat and state, “turn the heat up to 71 degrees.” One or more suitable technologies can interpret such inputs to select and control individual objects or categories of objects by communicating the appropriate commands to the associated controller devices 103.
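
One hypothetical way a device module might route such a category-level voice command to the relevant controller devices is sketched below. The keyword matching and record fields are assumptions for illustration and are not a speech-recognition design.

```python
def dispatch_voice_command(utterance: str, records: list[dict], send) -> int:
    """Small illustration: map an utterance like 'turn all lights off' onto every controller
    whose object is in the 'light' category, then send each one a power command."""
    text = utterance.lower()
    if "light" in text and ("on" in text or "off" in text):
        state = "on" if "on" in text else "off"
        targets = [r for r in records if r.get("category") == "light"]
        for record in targets:
            send(record["controller_address"], {"command": "power", "parameters": {"state": state}})
        return len(targets)
    return 0

records = [
    {"object_id": "lamp-104D", "category": "light", "controller_address": "10.0.0.7"},
    {"object_id": "lamp-104E", "category": "light", "controller_address": "10.0.0.8"},
    {"object_id": "garage-104A", "category": "door", "controller_address": "10.0.0.9"},
]
count = dispatch_voice_command("turn all lights off",
                               records,
                               send=lambda addr, cmd: print(addr, cmd))
print("commands sent:", count)   # expected: 2
```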

FIG. 4A through FIG. 4C show another example in which a user device 102, in the form of an HMD, is used to interact with the garage door opener 104A. In this illustrative example, the field of view of the user device 102 is directed toward the garage door opener 104A, creating a real-world view of it through the interface 118. FIG. 4A illustrates an example view 400 that can be displayed to the user 101 in this scenario. When such a scenario is detected using one or more of the techniques described herein, the user device 102 can select the garage door opener 104A and/or the associated computing device, e.g., the first controller device 103A. Upon selecting the garage door opener 104A, the user device 102 can perform one or more actions, such as displaying a graphical element containing a selectable element for controlling the garage door opener 104A.

FIG. 4B illustrates one example of such an arrangement, in which the user device 102 causes the display of a graphical element 402 along with the real-world view of the garage door opener 104A. In this example, the graphical element 402 includes selectable control elements that allow the user device 102 to control the garage door opener 104A. One or more of the selectable control elements can be selected to open or close the garage door, and another selectable control element can be selected to initiate a software update for the garage door opener 104A or the first controller device 103A. The graphical element 402 can also contain content such as status data; in the example of FIG. 4B, the status data indicates a maintenance due date as well as data defining recent user activity, which identifies a user of the garage door opener 104A and a timestamp of its most recent use.

As illustrated in FIG. 4B, the graphical element 402 is configured to indicate an association between the graphical element 402 and the garage door opener 104A. In this example, the outline of the graphical element 402 includes lines pointing toward the garage door opener 104A. Although lines are used here to indicate the association between the contents of the graphical element 402 and the garage door opener 104A, it can be appreciated that any shape, color, or other indicator can be used to indicate an association between an object, such as the garage door opener 104A, and displayed content.

FIG. 4C shows the example view 400, which includes the real-world view of the garage door opener 104A and a gesture of the user interacting with the graphical element 402. In this example, the sensor 120 of the user device 102 captures the gesture of the user 101. As shown in FIG. 4C, the user 101 performs a gesture indicating a selection of a control element for opening the garage door. Once such an input is detected, data defining a corresponding command can be communicated from the user device 102 to the first controller device 103A associated with the garage door opener 104A, and the first controller device 103A can then operate the garage door opener 104A.

As noted above, the user device 102 can interact with any computing device that is capable of interacting with an object, and user interface elements can be displayed near any network-connected object. Graphical elements similar in appearance to those described above can be used to interact with and control a wide range of objects, including the appliances illustrated in FIG. 1. For example, graphical elements can be attached to the range 104B or the refrigerator 104C in the manner described herein.

In other instances, the techniques described herein can be used to control and communicate with controller devices embedded in other objects, such as vehicles. One example of such an application is illustrated in FIG. 5A through FIG. 5C.

FIG. 5A illustrates an example view 500 that can be displayed to the user 101 as he or she approaches a vehicle 501. FIG. 5B illustrates the example view 500 with two graphical elements that are displayed to the user 101 after the vehicle 501, or an associated computing device, has been selected using the techniques described herein. FIG. 5C illustrates the example view 500 that can be displayed to the user 101 when the user 101 performs a gesture causing the user device 102 to communicate a command to the computing device associated with the vehicle 501.

As shown in FIG. 5B, a first graphical element 502A is displayed on the interface along with a real-world view of the vehicle 501. The first graphical element 502A can be configured to indicate an association, i.e., an attachment, between displayed status data and a component of the vehicle 501. In this example, the displayed status data describes one of the tires, and the shape and position of the first graphical element 502A indicate the attachment between that tire and the displayed status data.

As also shown in FIG. 5B, a second graphical element 502B contains selectable control elements, including buttons for opening and closing the windows and for locking the doors. The selectable control elements of the second graphical element 502B thus relate to different components of the vehicle 501, e.g., the windows and the doors.

The indication of an association between displayed content and an object, or a component of an object, can be static or dynamic. For example, an indicator such as an arrow can be displayed continuously, or it can be displayed in response to a specific action. FIG. 5B and FIG. 5C illustrate how an association between a component of an object and displayed content can be shown in response to a selection, such as a user's selection of one or more selectable control elements. For instance, the second graphical element 502B can first be displayed as shown in FIG. 5B and then transition to the configuration shown in FIG. 5C when one or more actions, such as a user's selection, are detected.

As shown in FIG. 5C, the second graphical element 502B indicates an association with one of the windows through the use of an arrow. This example is provided for illustrative purposes and is not to be construed as limiting; it can be appreciated that any shape, color, or other indicator can be used to show an attachment between a control element and an object or a component of an object.

In certain configurations, the techniques disclosed herein can also generate a graphical element indicating an association between an object and the controller device 103 associated with that object. In some situations, a controller device 103 may not be visible when an object is inspected visually, e.g., the controller device may be embedded within the object. In such situations, the techniques described herein can generate one or more graphical elements representing the controller device. With reference to FIG. 1, for example, if the fourth controller device 103D is embedded within the first lamp 104D, a graphical representation of the fourth controller device 103D can be rendered near the real-world view of the first lamp 104D. The graphical representation can also include one or more lines or shapes indicating the association between the object and the controller device 103; for instance, a circle or a number of lines can be rendered to indicate the association between the rendering of the controller device 103 and the object.

The graphical renderings described herein also allow a user to control objects that are not directly visible to the user device 102. For example, if the user is looking at a garage from the outside, the techniques disclosed herein can render an image of the garage door opener inside. In another example, a user can look toward multiple light switches in a room through the user device 102; renderings or real-world views of each switch can be displayed, and the user can control each switch through an input. In one illustrative example, the user can provide a universal input command, such as a gesture or a voice command to “turn all lights on” or “turn all lights off,” to control multiple controller devices 103.

In certain configurations, the techniques disclosed herein allow a device to provide multiple interfaces, e.g., graphical elements or holograms, at different locations, each attached to one or more objects. For example, in a room having a three-way light circuit, a device can render one graphical element per switch, allowing the user to control a single object, such as a light, from either of two switches. In another example, when the user looks at a house through the interface of a user device 102, the user device can display a virtual thermostat for each room. Each virtual thermostat can be configured to display status data and/or one or more selectable graphical elements for controlling that thermostat. The user device 102 can also interpret voice input to communicate commands to an individual thermostat, a group of thermostats, or all of the thermostats. In this example, the user can see multiple virtual thermostats attached to different rooms of the house and direct an input command to a particular thermostat; for instance, the user might state, “turn down the temperature in the attic by five degrees.”
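
A sketch of how such a room-scoped command might be routed is shown below; the phrase parsing, room names, and record layout are assumptions used only to illustrate the routing idea, not a prescribed design.

```python
import re

THERMOSTATS = {"attic": "10.0.1.21", "family room": "10.0.1.22", "garage": "10.0.1.23"}

def route_thermostat_command(utterance: str, send) -> bool:
    """Match a phrase like 'turn down the temperature in the attic by five degrees' against
    known room names and forward a relative set-point change to that room's thermostat."""
    words_to_numbers = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
    text = utterance.lower()
    match = re.search(r"by (\w+) degrees", text)
    delta = words_to_numbers.get(match.group(1)) if match else None
    if delta is None and match and match.group(1).isdigit():
        delta = int(match.group(1))
    direction = -1 if "down" in text else 1
    for room, address in THERMOSTATS.items():
        if room in text and delta is not None:
            send(address, {"command": "adjust_setpoint", "parameters": {"delta": direction * delta}})
            return True
    return False

print(route_thermostat_command("turn down the temperature in the attic by five degrees",
                               send=lambda addr, cmd: print(addr, cmd)))
```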

Turning now to FIG. 6, aspects of a routine 600 for providing a mixed environment display of attached control elements are shown and described below. It should be understood that the operations of the methods disclosed herein are not necessarily presented in any particular order and that performance of some or all of the operations in an alternative order is possible and contemplated. The operations have been presented in the demonstrated order for ease of description and illustration; operations may be added, omitted, and/or performed simultaneously without departing from the scope of the appended claims.

It should also be understood that the illustrated methods can end at any time and need not be performed in their entireties. Some or all operations of the methods, and/or substantially equivalent operations, can be performed by execution of computer-readable instructions included on a computer-readable storage medium, as defined below. The term “computer-readable instructions,” and variants thereof, as used in the description and claims, is used expansively herein to include routines, applications, application modules, program modules, programs, components, data structures, algorithms, and the like. Computer-readable instructions can be implemented on various system configurations, including single-processor or multiprocessor systems, minicomputers, mainframe computers, personal computers, hand-held computing devices, microprocessor-based programmable consumer electronics, combinations thereof, and the like.

Thus, it should be appreciated that the logical operations described herein can be implemented (1) as a sequence of computer-implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system. The implementation is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations described herein are referred to variously as states, operations, structural devices, acts, or modules, and these may be implemented in software, firmware, special-purpose digital logic, or any combination thereof.

For example, the operations of the routine 600 can be implemented, at least in part, by an application, component, and/or circuit, such as the device module 111 and/or the server module 107. In some configurations, the device module 111 and the server module 107 can be a dynamically linked library (DLL), a statically linked library, functionality enabled by an application programming interface (API), a compiled program, an interpreted program, a script, or any other executable set of instructions. Data received by the routine 600, such as the input data 113, can be stored in a data structure in one or more memory components, and the data can be retrieved from the data structure by addressing links or references to it.

Although the following illustration refers to the components of FIG. 1 and FIG. 2, it can be appreciated that the operations of the routine 600 may also be implemented in many other ways. For example, the routine 600 may be implemented, at least in part, by a processor of another remote computer or a local circuit. In addition, one or more of the operations of the routine 600 may alternatively or additionally be implemented, at least in part, by a chipset working alone or in conjunction with other software modules. Any service, circuit, or application suitable for providing input data indicating the position or state of any device may be used in the operations described herein.

With reference to FIG. 6, the routine 600 begins at operation 601, where the device module 111 receives identification data. In some configurations, the identification data is received from one or more remote devices and includes data linking aspects of an object to aspects of a controller device 103. For example, the identification data can include an address of the controller device 103, such as an IP address, a MAC address, or any other network-based identifier, and an object, or one or more characteristics of an object, can be linked to that address. For instance, identification data can define characteristics of an object such as the first lamp 104D, e.g., its color, shape, or other identifiable features, and those characteristics can be linked to an address of the associated device, e.g., the fourth controller device 103D.

“In certain configurations, location data can be used together with the identification data to associate an address with a controller device 103. The location data can identify a particular controller device 103 and/or an object controlled by the controller device 103. Using such information, the techniques described herein can obtain the address of a controller device 103 when the user device 102 detects that it is within a predetermined distance of the controller device 103 and/or the object.
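For illustration only, the following Python sketch shows one way the identification data and location data described above might be represented and queried. The record fields, the controller address, and the distance values are hypothetical and are not taken from the patent; a real implementation would depend on the device platform.

```python
import math
from dataclasses import dataclass

@dataclass
class IdentificationRecord:
    """Links observable characteristics of an object to the address of its controller device."""
    object_name: str
    controller_address: str                  # e.g., an IP or MAC address of the controller device
    characteristics: dict                    # e.g., {"color": "white", "shape": "cylinder"}
    location: tuple = (0.0, 0.0, 0.0)        # optional position of the object in the environment

def controllers_in_range(records, device_position, max_distance):
    """Return records whose stored location is within a predetermined distance of the user device."""
    return [record for record in records
            if math.dist(device_position, record.location) <= max_distance]

# Example: a lamp whose controller is reachable at a (made-up) local network address.
records = [IdentificationRecord("lamp", "192.168.1.42",
                                {"color": "white", "shape": "cylinder"},
                                location=(2.0, 0.5, 3.0))]
print(controllers_in_range(records, device_position=(1.5, 0.5, 2.5), max_distance=2.0))
```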

The identification data can be provided remotely or entered manually. In some configurations, the identification data can be generated by a device, such as the user device 102, while it operates in an initialization mode. In the initialization mode, a user can approach or look at an object and then provide input directing the device to store data defining one or more characteristics of the object in a data structure for future reference. FIG. 5A illustrates this scenario: a user 101 views a vehicle 501 through an interface 118 of the user device 102. A voice command, hand gesture, or other form of input can direct the storage of identification data, which can include location data and/or metadata defining one or more characteristics of the object. The user 101 or another device can also provide an address of the computing device associated with the object, e.g., the vehicle 501, to be stored with the identification data.

“While in the initialization mode, in certain configurations, a sensor 120 of the user device 102 can acquire image data of an object, such as the vehicle 501. One or more characteristics of the object can be derived from the image data. The image data, location data, and/or data describing one or more characteristics of the object can be linked to an address of the controller device 103 and stored in a data structure in memory for later access. These examples are provided for illustrative purposes and are not to be construed as limiting. It can be appreciated that the identification data, which can include data indicating an address of an object and/or data identifying the controller device 103, can also be generated by other computing devices.
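Continuing the illustration, the sketch below outlines an initialization-mode flow in which characteristics derived from captured image data are stored together with a controller address and a location. The extract_features helper and all field names are invented for this example; actual characteristic extraction would rely on whatever image-processing pipeline the device provides.

```python
def initialize_object(image_pixels, controller_address, location, extract_features):
    """Initialization-mode sketch: derive characteristics from captured image data and
    store them with the controller address and location for later matching."""
    characteristics = extract_features(image_pixels)
    return {
        "controller_address": controller_address,
        "location": location,
        "characteristics": characteristics,
    }

# Hypothetical feature extractor; a real device would use its own image-processing pipeline.
def toy_extractor(image_pixels):
    return {"label": "vehicle", "dominant_color": "red"}

record = initialize_object(image_pixels=[[255, 0, 0]],
                           controller_address="10.0.0.7",
                           location=(0.0, 0.0, 5.0),
                           extract_features=toy_extractor)
print(record)
```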

“Once the identification data has been received or generated, the user device 102 can enter a regular operation mode, in which the user device 102 is used to select one or more objects and to interact with them via an associated controller device 103. The following process blocks describe how the user device 102 selects an object and initiates the communication of control data, status data, and/or commands to control one or more aspects of the object.

“Returning to FIG. 6, at operation 603, the device module 112 can select an object controlled by a controller device 103. An object can be selected in a number of ways. In one example, an object is selected through an analysis of input data 113 defining any form of user activity, such as voice commands, gestures, a gaze direction, or other types of input. In another example, an object can be selected based on input data 113 received from one or more systems, such as a social network, an email system, a phone system, an instant messaging system, or any other suitable platform.

Input data 113 identifying a gaze target and/or a gaze direction can be analyzed to determine whether an object, or a controller device 103 associated with an object, is to be selected. For example, the input data 113 can indicate that the user 101 is looking at an object. Such activity can be detected by using image data captured by a sensor 120 directed toward the user's field of view. One or more image processing techniques can identify one or more characteristics of the object, which can be stored in a data structure. The data defining those characteristics can then be compared with the identification data described above to identify the object and to obtain the address of an associated controller device 103.

“In one illustrative example, it is assumed that the user device 102 has identification data defining a set of characteristics of the first lamp 104D. The user device 102 can select the first lamp 104D when the set of characteristics defined in the identification data has a threshold level of commonality with the set of characteristics defined in the input data 113. Once the object is selected, the identification data can be used to identify the controller device 103 associated with the object.
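The threshold comparison described above might look roughly like the following sketch, in which observed characteristics are scored against stored identification records and the best match is accepted only if it clears a configurable threshold. The 0.6 threshold and the dictionary-based characteristic format are assumptions made for this example.

```python
def match_score(observed, stored):
    """Fraction of stored characteristics that also appear in the observed characteristics."""
    if not stored:
        return 0.0
    shared = sum(1 for key, value in stored.items() if observed.get(key) == value)
    return shared / len(stored)

def select_object(observed, identification_records, threshold=0.6):
    """Return the controller address of the best-matching record that clears the threshold."""
    best = max(identification_records,
               key=lambda record: match_score(observed, record["characteristics"]),
               default=None)
    if best and match_score(observed, best["characteristics"]) >= threshold:
        return best["controller_address"]
    return None

records = [{"characteristics": {"shape": "lamp", "color": "white"},
            "controller_address": "192.168.1.42"}]
observed = {"shape": "lamp", "color": "white", "height": "0.4m"}
print(select_object(observed, records))   # -> "192.168.1.42"
```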

Location data can also be used to select an object. For instance, the techniques described herein can select an object when the user device 102 is within a predetermined distance of the object. Such techniques can be combined with data defining the gaze direction: an object can be selected when the user device 102 is within a predetermined distance of the object and the field of view of the user device 102 is directed toward the object.
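A rough sketch of the combined proximity-and-gaze test described above is shown below. The distance limit, the angular tolerance, and the vector representation of the gaze direction are illustrative assumptions, not details from the patent.

```python
import math

def object_selected(device_pos, gaze_dir, object_pos, max_distance, max_angle_deg=10.0):
    """True when the object is within a predetermined distance and roughly along the gaze direction."""
    to_object = [o - d for o, d in zip(object_pos, device_pos)]
    distance = math.sqrt(sum(c * c for c in to_object))
    if distance == 0.0 or distance > max_distance:
        return False
    # Angle between the gaze direction and the direction toward the object.
    dot = sum(g * t for g, t in zip(gaze_dir, to_object))
    gaze_len = math.sqrt(sum(c * c for c in gaze_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (gaze_len * distance)))))
    return angle <= max_angle_deg

print(object_selected((0, 0, 0), (0, 0, 1), (0.2, 0.0, 3.0), max_distance=5.0))  # True
```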

The input data 113 can also include contextual data received from remote sources to help select an object. For example, if a user performs a number of searches, posts comments on a social network, or otherwise causes communication regarding an object, such as a vehicle or an appliance, such data can be used to identify and select the object. These examples are provided for illustrative purposes and are not to be construed as limiting. It can be appreciated that the input data 113 can be received from any resource or service, including a personal storage resource, such as one offered by a local or remote storage service. In one example, a user may point at an object or perform one or more gestures indicating an interest in the object, and data describing such activity can be used to identify and select the object. In another example, a user's communication through a device, e.g., a phone call or an instant message, can be used to identify and select a particular object.

“Moreover, a combination of different types of input data 113 can be used to identify and select an object. For instance, communication data and data identifying a gaze target can be used together to identify and select an object. Technologies for analyzing voice, text, image, or other forms of data can be used to identify keywords or other data suitable for identifying an object. These examples are provided for illustrative purposes and are not to be construed as limiting, as any type of input data 113 can be used to identify an object and/or a controller device 103. Once an object is selected, address information for a computing device associated with the object can be obtained from any number of resources, including the identification data.

“Next, at operation 605, the device module 112 can receive control data 115 and status data 114 related to the selected object. The control data 115 can include code, object code, scripts, commands, or any other executable set of instructions that can be executed by the controller device 103 associated with the selected object. The control data 115 can also define one or more communication interfaces, such as an application programming interface (API), of the controller device 103 associated with the selected object.

“The status data 114 can include any information describing the selected object and/or a controller device 103 associated with the object. For instance, if the object is a vehicle, the status data 114 can describe the condition of the engine, the tires, or other components of the vehicle. The control data 115 and/or the status data 114 can be stored in a data structure in memory, or in any other computer-readable storage medium, for later access. The control data 115 can be provided by the controller device 103, and it can be communicated to the device module 112, or another component of the user device 102, as part of an initialization process or in response to a request generated by the user device 102.
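As a purely illustrative example of how the control data 115 and status data 114 might be organized once received, the following sketch parses a hypothetical JSON payload into status fields and a set of named commands. The payload structure, endpoint paths, and field names are invented for this example.

```python
import json

# Hypothetical payload a controller device might return; the field names are illustrative only.
payload = json.loads("""
{
  "controller_address": "192.168.1.42",
  "status": {"power": "on", "brightness": 40},
  "commands": [
    {"name": "turn_off", "endpoint": "/api/power", "body": {"power": "off"}},
    {"name": "dim",      "endpoint": "/api/brightness", "body": {"brightness": 10}}
  ]
}
""")

status_data = payload["status"]                             # e.g., shown next to the object's view
control_data = {c["name"]: c for c in payload["commands"]}  # drives the selectable control elements

print(status_data)
print(list(control_data))   # -> ['turn_off', 'dim']
```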

“Next, at operation 607, the device module 112 can display one or more graphical elements containing content, such as the status data and/or one or more selectable control elements. As shown in the examples described above, a user device 102 can display the status data and one or more selectable control elements on the hardware display surface 118 along with a view of the selected object. The view of the selected object can include a rendering of the object or a real-world view of the object.

“In some configurations, the selectable control elements can be based on the control data 115 received at operation 605. The selectable control elements can be configured to cause the generation of a command, also referred to herein as an “input command.” The command can be communicated to the controller device 103 associated with the object in response to a selection of a selectable control element.
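The selectable control elements described above could be modeled roughly as follows, with each rendered element carrying the command it should generate when selected. The ControlElement type, the screen anchor, and the command format are assumptions for this sketch, not structures defined by the patent.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ControlElement:
    """A rendered, selectable element attached to the view of an object."""
    label: str
    anchor: tuple                  # screen position near the object's view
    on_select: Callable[[], dict]  # returns the command to send when the element is selected

def build_elements(control_data, anchor):
    """Create one selectable element per command defined in the (hypothetical) control data."""
    return [ControlElement(label=name,
                           anchor=anchor,
                           on_select=lambda cmd=cmd: cmd)   # bind each command to its element
            for name, cmd in control_data.items()]

control_data = {"turn_off": {"endpoint": "/api/power", "body": {"power": "off"}}}
elements = build_elements(control_data, anchor=(420, 310))
print(elements[0].label, elements[0].on_select())
```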

As stated above, graphical elements can be used to indicate an association between an object and displayed content. For example, a graphical element can show an association between a view of the object and a display of the status data. In another example, a graphical element can show an association between a view of the object and a display of a selectable control element. For illustrative purposes, displayed content having such an association with the view of an object is referred to herein as being “attached” to the view of the object.”

“As described above, the size, shape, and color of a graphical element can be used to indicate an association between an object and displayed content. FIG. 3B shows an example in which the shape of a graphical element indicates the association between a real-world view of an object and the contents of the graphical element. In FIG. 4B, lines extending from the graphical element illustrate the association between the displayed content and the real-world view of the garage door opener. As illustrated in FIG. 5B, similar graphical elements can be used to attach displayed content to the real-world view of the vehicle 501.

Operation 607 can also include one or more actions performed in response to the selection. For example, communication between two or more devices can be established in response to the selection, or an alert can be generated. The alert can include an audio signal, a graphical display, or another type of electronic notification.

“Next, at operation 609, the device module 112 can receive a selection of a control element. As described above, the user device 102 can receive input from the user 101 indicating a selection of a selectable control element. The sensor 120, which can include a camera, and/or one of the input devices 119, such as a microphone or a keypad, can generate input data 113 defining any form of user activity, such as a gesture, a voice command, or a gaze direction. The user device 102 can interpret the input data 113 to trigger one or more actions, such as the communication of a command associated with the selected control element.
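One simple way to interpret the input data 113 as a selection of a displayed control element is sketched below, matching either a spoken phrase or a gaze/gesture position against the displayed elements. The event format and the selection radius are illustrative assumptions.

```python
def interpret_input(input_event, elements, selection_radius=50):
    """Map a voice command or a gaze/gesture position to one of the displayed control elements."""
    if input_event.get("type") == "voice":
        spoken = input_event["text"].lower()
        for element in elements:
            if element["label"].replace("_", " ") in spoken:
                return element
    elif input_event.get("type") in ("gaze", "gesture"):
        x, y = input_event["position"]
        for element in elements:
            ex, ey = element["anchor"]
            if (x - ex) ** 2 + (y - ey) ** 2 <= selection_radius ** 2:
                return element
    return None

elements = [{"label": "turn_off", "anchor": (420, 310)}]
print(interpret_input({"type": "voice", "text": "please turn off the lamp"}, elements))
```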

“At operation 611, the device module 112 can communicate one or more commands to the controller device 103 associated with the selected object. The command can be communicated in response to the selection of at least one selectable control element. The command can be communicated using any suitable protocol and format. In one example, the command can be communicated to an API of the controller device 103. In another example, the command can be communicated using a Web-based protocol or a message-based protocol.
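Finally, a minimal sketch of communicating a command to a controller device over a Web-based protocol is shown below, assuming the controller exposes a hypothetical HTTP API at the address obtained from the identification data. The endpoint path and payload are invented for this example.

```python
import json
import urllib.request

def send_command(controller_address, command):
    """Send an input command to the controller device's hypothetical HTTP API."""
    url = f"http://{controller_address}{command['endpoint']}"
    body = json.dumps(command["body"]).encode("utf-8")
    request = urllib.request.Request(url, data=body,
                                     headers={"Content-Type": "application/json"},
                                     method="POST")
    with urllib.request.urlopen(request, timeout=5) as response:
        return response.status, response.read().decode("utf-8")

# Example (would only succeed if a device actually listens at this made-up address):
# send_command("192.168.1.42", {"endpoint": "/api/power", "body": {"power": "off"}})
```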

Click here to view the patent on Google Patents.