Invented by Jeffrey M. Faulkner, Israel Pastrana Vicente, Pol Pla I. Conesa, Stephen O. Lemay, Robert T. Tilton, William A. Sorrentino, III, Kristi E. S. Bauerly, Apple Inc.
The Apple Inc. invention works as follows:
While displaying a three-dimensional environment, a computer system detects a user’s hand in a first position that corresponds to a portion of the 3D environment. Upon detecting the hand in the first position, the computer system displays a visual indication that gesture input using hand gestures is available if it determines that the hand is being held in a predefined first configuration. If the hand is not in the predefined first configuration, the computer system forgoes displaying the visual indication.

Background for “Devices and methods for interacting with 3D environments”
In recent years, the development of computer systems for augmented reality has grown significantly. Example augmented reality environments include at least some virtual elements that replace or augment the physical world. Input devices for computer systems, such as cameras, controllers, touch-sensitive surfaces, and touch-screen displays, are used to interact with virtual/augmented reality environments. Virtual elements can include images, videos, text, buttons, and other graphics.
But the methods and interfaces for interacting with environments that include at least some virtual elements (e.g., applications, augmented reality environments, mixed reality environments, and virtual reality environments) are cumbersome and inefficient. Systems that provide insufficient feedback for actions performed on virtual objects, systems that require a series of inputs to achieve a desired outcome in an augmented reality environment, and systems in which manipulation of virtual objects is complex, tedious, and error-prone all create a significant cognitive burden on the user and detract from the experience of the virtual/augmented reality environment. These methods also take more time than necessary, thereby wasting energy. This last consideration is especially important for battery-operated devices.
There is a need for computer systems with improved methods and user interfaces for providing computer-generated experiences to users, making interaction with computer systems easier and more intuitive. Such methods and interfaces can complement or replace conventional methods of providing computer-generated reality experiences to users. They help the user understand the connection between provided inputs and the device’s responses, thus creating a more efficient human-machine interface.
The disclosed systems reduce or eliminate the above deficiencies, as well as other problems associated with user interfaces for computer systems that have display generation components and one or more input devices. In some embodiments, the computer system is a desktop computer with an associated display. In some embodiments, the computer system is a portable device (e.g., a laptop computer, tablet computer, or handheld device). In some embodiments, the computer system is a personal electronic device (e.g., a wearable device such as a watch or a head-mounted device). In some embodiments, the computer system has a touchpad. In some embodiments, the computer system has one or more cameras. In some embodiments, the computer system has a touch-sensitive display (also known as a “touch screen” or “touch-screen display”). In some embodiments, the computer system has one or more eye-tracking components. In some embodiments, the computer system has one or more hand-tracking components. In some embodiments, the computer system has one or more output devices in addition to its display generation component, including one or more tactile output generators and one or more audio output devices. In some embodiments, the computer system has a graphical user interface (GUI), one or more processors, memory, and one or more modules, programs, or sets of instructions stored in the memory for performing multiple functions. In some embodiments, the user interacts with the GUI through stylus and/or finger contacts and gestures, through movements of the user’s hands and eyes relative to the GUI or the user’s body as detected by cameras and other sensors, and through voice inputs captured by one or more audio input devices. In some embodiments, the functions performed through the interactions include, but are not limited to, image editing, drawing, presenting, word processing, spreadsheet creation, game playing, telephoning, video conferencing, e-mailing, instant messaging, and workout support. Executable instructions for performing these functions are, optionally, included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
There is a need for electronic devices with improved methods and interfaces for interacting with a three-dimensional environment. Such methods and interfaces can complement or replace conventional methods of interacting with three-dimensional environments. They reduce the number, extent, and/or nature of the inputs required from a user and produce a more efficient human-machine interface.
According to some embodiments, the method is performed on a computer system that includes a display generation component and one or more cameras. It includes: displaying a view of a three-dimensional environment; while displaying the view, detecting, using the one or more cameras, a movement of a user’s thumb over the index finger of the same hand; and, in response to detecting the movement of the thumb over the index finger, performing a first operation in accordance with a determination that the movement is a first type of gesture.
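This embodiment maps movements of the thumb over the index finger to operations on the 3D environment. The sketch below shows one plausible way to branch on a classified micro-gesture; it is a minimal sketch, and the `ThumbGesture` cases and the operation names are illustrative assumptions, not the patent’s actual gesture taxonomy.

```swift
import Foundation

// Hypothetical classification of the thumb's movement over the index
// finger, as might be produced by a camera-based hand tracker.
enum ThumbGesture {
    case swipeForward    // thumb slides along the index finger, away from the palm
    case swipeBackward   // thumb slides toward the palm
    case tap             // thumb taps down on the index finger
}

func handle(_ gesture: ThumbGesture) {
    switch gesture {
    case .swipeForward:
        performFirstOperation()
    case .swipeBackward:
        performSecondOperation()
    case .tap:
        performThirdOperation()
    }
}

// Placeholder operations; in a real system these would update the
// displayed three-dimensional environment.
func performFirstOperation()  { print("first operation") }
func performSecondOperation() { print("second operation") }
func performThirdOperation()  { print("third operation") }

handle(.swipeForward)   // prints "first operation"
```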
In accordance with some embodiments, the method includes displaying a view of a three-dimensional environment and detecting a user’s hand in a first position that corresponds to a portion of that three-dimensional environment. Upon detecting the user’s hand in the first position: if the hand is determined to be in a predefined first configuration, a visual indication of a gesture input context using hand gestures is displayed in the three-dimensional environment; if the hand is not detected in the predefined first configuration, the visual indication is not displayed.
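The method above is a gating check: the indication appears only when the tracked hand is both at the first position and held in the predefined configuration. Below is a minimal Swift sketch of that decision logic; the `HandConfiguration`, `HandObservation`, and `GestureContextGate` names, and the spherical region test, are assumptions standing in for whatever hand-tracking data the system actually provides.

```swift
import Foundation

// Hypothetical hand-tracking output; a real system would derive this
// from the one or more cameras described above.
enum HandConfiguration {
    case readyState   // e.g. thumb resting against the side of the index finger
    case other
}

struct HandObservation {
    let position: SIMD3<Float>         // hand position in world space
    let configuration: HandConfiguration
}

struct GestureContextGate {
    // Region of the 3D environment corresponding to the "first position".
    let regionCenter: SIMD3<Float>
    let regionRadius: Float

    /// True when the visual indication should be shown: the hand is at
    /// the first position AND held in the predefined first configuration.
    func shouldShowIndication(for hand: HandObservation) -> Bool {
        let offset = hand.position - regionCenter
        let inRegion = (offset * offset).sum() <= regionRadius * regionRadius
        return inRegion && hand.configuration == .readyState
    }
}

// Usage: evaluate the gate each frame and show or hide the indicator.
let gate = GestureContextGate(regionCenter: .zero, regionRadius: 0.3)
let hand = HandObservation(position: SIMD3<Float>(0.1, 0, 0),
                           configuration: .readyState)
print(gate.shouldShowIndication(for: hand))   // true
```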
According to some embodiments, the method is performed on a computer system that includes a display generation component and one or more input devices. It includes: displaying a three-dimensional environment, including a representation of a physical environment; detecting a gesture while the representation is displayed; and, in response to detecting the gesture, in accordance with a determination that the user’s gaze is directed at a predefined location in the environment, displaying a corresponding user interface in the three-dimensional environment.
In accordance with some embodiments, the method is performed on a computer system that includes a display generation component and one or more input devices. It includes: displaying a virtual environment in three dimensions, which includes one or more virtual objects; detecting a gaze directed toward a first virtual object in the three-dimensional environment, wherein the first virtual object is responsive to one or more gesture inputs; and, in response to the gaze meeting predefined criteria while directed toward the first virtual object: in accordance with a determination that the user’s hand is in a predefined ready state for providing gesture inputs, displaying an indication that the first virtual object is responsive to gesture inputs.
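Taken together, the two gaze embodiments above use gaze as a targeting mechanism whose effect depends on a secondary condition (a predefined gaze target, or a hand held in a ready state). The sketch below is one assumed combination of those checks; the types and the dwell-time criterion are illustrative assumptions, not the patent’s stated criteria.

```swift
import Foundation

struct GazeSample {
    let targetID: UUID?          // virtual object currently under the gaze, if any
    let dwellTime: TimeInterval  // how long the gaze has rested on that target
}

struct VirtualObject {
    let id: UUID
    let respondsToGestures: Bool
}

/// Hypothetical criteria: gaze must rest on a gesture-responsive object
/// for a minimum dwell time while the hand is in the ready state.
func shouldIndicateResponsiveness(gaze: GazeSample,
                                  object: VirtualObject,
                                  handInReadyState: Bool,
                                  minimumDwell: TimeInterval = 0.5) -> Bool {
    guard gaze.targetID == object.id, object.respondsToGestures else { return false }
    guard gaze.dwellTime >= minimumDwell else { return false }
    return handInReadyState
}

// Usage: gaze has rested on a responsive object long enough, hand is ready.
let cube = VirtualObject(id: UUID(), respondsToGestures: true)
let gaze = GazeSample(targetID: cube.id, dwellTime: 0.8)
print(shouldIndicateResponsiveness(gaze: gaze, object: cube, handInReadyState: true)) // true
```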
There is a need for electronic devices with improved methods and interfaces for facilitating the user’s interaction with a three-dimensional environment. Such methods and interfaces can complement or replace conventional methods of enabling the user to interact with a three-dimensional environment using electronic devices. They provide a more efficient human-machine interface and give the user greater control of the device, allowing the user to operate it more safely, with reduced cognitive burden, and with an improved user experience.
According to some embodiments, the method includes: displaying a first three-dimensional view that includes a pass-through portion; while displaying the first three-dimensional view including the pass-through portion, detecting a movement of a hand on a housing that is physically coupled to the display generation component; and, if the movement is determined to meet first criteria, replacing the first three-dimensional view with a virtual version of it.
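Here the trigger is physical: a hand movement on the device’s housing decides whether the pass-through view is swapped for virtual content. The sketch below models that as a simple criteria check on a hypothetical `HousingTouchEvent`; the two-hands, stable-grip criterion is an assumption used only to make the example concrete, not the patent’s actual first criteria.

```swift
import Foundation

enum DisplayedView {
    case passThrough   // live view of the physical environment
    case virtual       // immersive virtual environment
}

struct HousingTouchEvent {
    let handsOnHousing: Int   // how many hands grip the housing
    let isStableGrip: Bool    // grip held without sliding
}

/// Assumed first criteria: both hands gripping the housing in a stable
/// hold triggers the transition from pass-through to the virtual view.
func nextView(current: DisplayedView, event: HousingTouchEvent) -> DisplayedView {
    if current == .passThrough, event.handsOnHousing >= 2, event.isStableGrip {
        return .virtual
    }
    return current
}

// Usage: a stable two-handed grip replaces the pass-through view.
let event = HousingTouchEvent(handsOnHousing: 2, isStableGrip: true)
print(nextView(current: .passThrough, event: event))   // virtual
```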
In some embodiments, a computer system with a display generation component and one or more input devices displays a view of a virtual environment without including a visual representation of a first portion of a physical object present in the real-world environment where the user is located; the system detects a movement of the user in that environment; and, in response to that movement, in accordance with a determination that the user is within a threshold distance of the first portion of the physical object, and that the portion would be visible to the user based on the user’s field of view, the system changes the displayed view to reveal a visual representation of that portion of the physical object.
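This safety-oriented embodiment reveals part of a nearby physical object through the virtual environment once the user comes within a threshold distance of it and it falls within their field of view. Below is a compact sketch of the two-part test; the distance threshold and the dot-product field-of-view proxy are simplified assumptions, not the patent’s specific criteria.

```swift
import Foundation

struct PhysicalObjectPortion {
    let position: SIMD3<Float>
}

struct UserState {
    let position: SIMD3<Float>
    let forward: SIMD3<Float>   // normalized view direction
}

/// Reveal the portion when the user is within `threshold` meters of it
/// AND it lies roughly in front of the user (a crude field-of-view proxy
/// using the dot product of the view direction and the offset direction).
func shouldReveal(portion: PhysicalObjectPortion,
                  user: UserState,
                  threshold: Float = 1.0) -> Bool {
    let offset = portion.position - user.position
    let distanceSquared = (offset * offset).sum()
    guard distanceSquared <= threshold * threshold else { return false }

    // In view if the angle to the object is under ~60 degrees (cos > 0.5).
    let distance = distanceSquared.squareRoot()
    guard distance > 0 else { return true }
    let cosAngle = (offset * user.forward).sum() / distance
    return cosAngle > 0.5
}

// Usage: an object half a meter straight ahead of the user is revealed.
let user = UserState(position: .zero, forward: SIMD3<Float>(0, 0, -1))
let portion = PhysicalObjectPortion(position: SIMD3<Float>(0, 0, -0.5))
print(shouldReveal(portion: portion, user: user))   // true
```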
According to some embodiments, a computer system includes a display generation component (e.g., a display or a projector), one or more input devices (e.g., one or more cameras, a touch-sensitive surface, and optionally one or more sensors for detecting intensities of contact with the touch-sensitive surface), optionally one or more tactile output generators, one or more processors, and memory storing one or more programs for execution by the one or more processors; the one or more programs include instructions for performing the operations of any of the methods described herein. According to some embodiments, a non-transitory computer-readable storage medium stores instructions that, when executed by a computer system with a display generation component, one or more input devices (e.g., one or more cameras, a touch-sensitive surface, and optionally one or more sensors for detecting intensities of contact with the touch-sensitive surface), and optionally one or more tactile output generators, cause the system to perform the operations of any of the methods described herein. According to some embodiments, a graphical user interface on such a computer system includes one or more of the elements displayed in any of the methods described herein, and those elements are updated in response to inputs, as described in any of those methods. According to some embodiments, a computer system with a display generation component, one or more input devices, optionally one or more sensors for detecting intensities of contact, and optionally one or more tactile output generators includes means for performing or causing performance of the operations of any of the methods described herein. According to some embodiments, an information processing apparatus, for use in such a computer system, includes means for performing or causing performance of the operations of any of the methods described herein.
Thus, computer systems with display generation components are provided with improved methods and interfaces for interacting with a three-dimensional environment and for increasing the user’s sense of safety and satisfaction when doing so. Such methods and interfaces can complement or replace conventional methods of interacting with a three-dimensional environment and of improving the user’s experience with computer systems.
Note that any of the embodiments described in this document can be combined with those described above. The features and advantages described in the specification are not all-inclusive; a person of ordinary skill will see many additional features and advantages in the drawings, specification, and claims. It should also be noted that the language in the specification was chosen primarily for readability and instructional purposes, and not necessarily to delineate or limit the inventive subject matter.