Invented by James M. Powderly, Savannah Niles, Nicole Elizabeth Samec, Ali Amirhooshmand, Nastasja U. Robaina, Christopher M. Harrises, Mark Baerenrodt, Carlos A. Rivera Cintron, Brian Keith Smith, Magic Leap Inc

The market for automatic control of wearable display devices based on external conditions is growing rapidly. Wearable display devices are increasingly popular with consumers because of their convenience and versatility. These devices carry sensors that detect external conditions such as temperature, humidity, and ambient light and adjust the display accordingly. This capability, known as automatic control, is changing the way we interact with wearable display devices.

One of the main advantages of automatic control is a better user experience. If a user wearing a smartwatch steps outside into bright sunlight, for example, the display automatically adjusts to remain readable, sparing the user a manual adjustment that can be inconvenient and time-consuming. Automatic control also conserves battery life by reducing display brightness in low-light conditions.

Another advantage is more accurate data. A fitness tracker equipped with sensors that detect external conditions can provide more accurate data about the user's activity level, helping users better understand their fitness goals and make more informed decisions about their health.

The market for automatic control based on external conditions of wearable display devices is expected to grow significantly in the coming years. According to a report by MarketsandMarkets, the global market for wearable technology is expected to reach $54 billion by 2023, a compound annual growth rate of 15.5%, driven by increasing demand for wearable devices that provide real-time data and improve the user experience. As this technology continues to evolve, we can expect even more innovative applications in the future.

The Magic Leap, Inc. invention works as follows:

Embodiments of a wearable device may include a head-mounted display (HMD) that can display virtual content. The wearable device may detect a trigger event while the user interacts with the virtual content. This could be an emergency or unsafe situation, detection of one or more trigger objects in the environment, or a determination of the characteristics of the user's environment (e.g., home or office). The wearable device can detect the event automatically and automatically control the HMD to de-emphasize, block, or stop the display of the virtual content. The HMD can include a button that the user can actuate to manually de-emphasize or block the virtual content.

Background for Automatic control based on external conditions of wearable display devices

Modern computing and display technology have made it possible to create systems for so-called "virtual reality," "augmented reality," or "mixed reality" experiences, in which digitally reproduced images, or portions of them, are presented to users in a way that makes them appear real or be perceivable as real. A virtual reality (VR) scenario typically presents digital or virtual image information without transparency to other real-world visual input. An augmented reality (AR) scenario typically presents digital or virtual information as an enhancement to the visualization of the real world around the user. A mixed reality (MR) scenario merges real and digital worlds to create new environments in which physical and virtual objects coexist and interact in real time. The human visual system turns out to be complex, so it is difficult to create VR, AR, or MR technology that allows a natural-feeling, rich presentation of virtual image elements amongst real-world or other virtual imagery elements. The systems and methods described herein address various issues related to VR, AR, and MR technology.

Embodiments of a wearable device may include an HMD that can display virtual content. The wearable device can detect a trigger event while the user interacts with the virtual content, such as an emergency or unsafe situation, a trigger object in the environment, or the user entering a certain environment (e.g., home or office). The wearable device can detect the event automatically and automatically control the HMD to block or stop the display of the virtual content. The HMD can include a button that the user can press to manually de-emphasize or block the virtual content. In some implementations, the wearable device can restore or resume the virtual content when a termination condition is detected.
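To make this trigger-and-restore flow concrete, here is a minimal Python sketch of the control logic described above. The names (`TriggerEvent`, `HMDController`) and the specific event types are illustrative assumptions, not identifiers from the patent.

```python
from enum import Enum, auto

class TriggerEvent(Enum):
    EMERGENCY = auto()       # e.g., an unsafe situation detected by outward-facing sensors
    TRIGGER_OBJECT = auto()  # e.g., a trigger object recognized in the environment
    SCENE_CHANGE = auto()    # e.g., the user enters a certain environment (home, office)
    MANUAL_BUTTON = auto()   # the user presses the button on the HMD

class HMDController:
    """Tracks whether virtual content is currently muted (de-emphasized or blocked)."""

    def __init__(self) -> None:
        self.content_muted = False

    def on_trigger(self, event: TriggerEvent) -> None:
        # Any trigger event mutes the virtual content so the user's
        # attention can shift to physical reality.
        self.content_muted = True
        print(f"Muting virtual content due to {event.name}")

    def on_termination_condition(self) -> None:
        # When a termination condition is detected, restore the content.
        if self.content_muted:
            self.content_muted = False
            print("Restoring virtual content")

controller = HMDController()
controller.on_trigger(TriggerEvent.EMERGENCY)  # content muted
controller.on_termination_condition()          # content restored
```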

Details of one or more implementations are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims. Neither this summary nor the detailed description below purports to define or limit the scope of the inventive subject matter.

Overview

The display system of a wearable device may be configured to present virtual content in an AR/VR/MR context. The virtual content can be visual or audible. The user of an HMD may encounter situations where it is desirable to de-emphasize or remove some or all of the virtual content. In an emergency or unsafe situation, for example, it may be desirable to focus the user's attention on the physical reality rather than the virtual content. In these conditions, presenting virtual content can confuse the user's perception of both the physical content of the real world and the virtual content provided by the HMD. As described below, embodiments of the HMD may allow manual or automatic control in situations where it is desirable to de-emphasize or stop displaying virtual content.

Furthermore, although the wearable device may present a wealth of information to the user, it can be difficult for the user to sort through the virtual content and identify the content they are interested in. In some embodiments, the wearable device can automatically detect the user's location and selectively block or allow virtual content depending on that location. This allows the wearable device to present virtual content that is more relevant to the user and more appropriate to their environment, such as when the user is at home or at work. At home, for example, the wearable device can present virtual content related to video games while blocking work-related content such as emails or scheduled conference calls. In an office setting, the wearable device can present work-related virtual content such as emails and conference calls while blocking content related to games.

In certain implementations, the wearable device can automatically detect a change in the user's location based on images acquired by an outward-facing imaging system (alone or combined with a location sensor). When the wearable device detects that the user has moved from one environment to another, it can apply settings based on the user's new location. In some implementations, the wearable device can mute virtual content depending on the user's environment (also referred to as a scene). For example, a living room at home and a shopping mall may both be considered entertainment scenes, so similar virtual content can be blocked (or permitted) in both environments. Virtual content can also be blocked or allowed based on content with similar characteristics. A user can, for example, choose to block social networking applications in an office setting (or to allow only work-related content). Because video games and social networking applications share recreational characteristics, the wearable system will then automatically block video games in the office as well, based on the configuration the user has provided.
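As a rough illustration of scene-based filtering, the sketch below maps applications to content categories and applies per-scene allow/block rules, so a rule covering one recreational application (social networking) also covers another with similar characteristics (video games). The scene names, categories, and applications are hypothetical.

```python
# Hypothetical per-scene policies: content categories to allow or block.
SCENE_POLICIES = {
    "office": {"allow": {"email", "conference_call"}, "block": {"recreational"}},
    "home":   {"allow": {"recreational"}, "block": {"email", "conference_call"}},
}

# Content with similar characteristics shares a category, so blocking
# "recreational" content covers both social networking and video games.
CATEGORY_OF = {
    "work_email":  "email",
    "team_call":   "conference_call",
    "video_game":  "recreational",
    "social_feed": "recreational",
}

def is_allowed(app: str, scene: str) -> bool:
    policy = SCENE_POLICIES.get(scene, {"allow": set(), "block": set()})
    category = CATEGORY_OF.get(app)
    if category in policy["block"]:
        return False
    return category in policy["allow"]

print(is_allowed("video_game", "office"))  # False: recreational content blocked at work
print(is_allowed("work_email", "office"))  # True
```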

Although these examples refer to muting virtual content, the same techniques can be used to mute one or more components of the wearable system. The wearable system can, for example, mute the inward-facing imaging system to conserve hardware resources in an emergency (e.g., a fire). The mixed reality device can also selectively allow other virtual content, achieving a result similar to blocking.
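Here is a minimal sketch of such component-level muting, assuming a hypothetical `WearableDevice` structure with inward- and outward-facing imaging systems (the patent does not specify this structure):

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    enabled: bool = True

@dataclass
class WearableDevice:
    inward_camera: Sensor   # eye-tracking (inward-facing) imaging system
    outward_camera: Sensor  # world-facing imaging system

def apply_emergency_policy(device: WearableDevice, emergency: bool) -> None:
    # In an emergency (e.g., a fire), mute the inward-facing imaging system
    # to conserve hardware resources, while keeping world-facing sensing on.
    device.inward_camera.enabled = not emergency
    device.outward_camera.enabled = True

device = WearableDevice(Sensor("eye_tracker"), Sensor("world_camera"))
apply_emergency_policy(device, emergency=True)
print(device.inward_camera.enabled)  # False: muted to conserve resources
```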

Examples of a 3D Display

A wearable system, also referred to herein as an AR system, can be configured to present 2D or 3D images to the user. The images may be still images, frames of a video, a video, or combinations of these. At least part of the wearable system can be implemented on a wearable device that can present a VR, AR, or MR environment for user interaction. The wearable device can be referred to interchangeably as an AR device (ARD). For the purposes of this disclosure, the term "AR" is used interchangeably with the term "MR."

FIG. 1A is an illustration of a mixed reality scenario with certain virtual objects and certain physical objects viewed by a person. In FIG. 1A, the user of MR technology perceives that he "sees" items in addition to the real-world platform 120: a robot statue 130 standing on the real-world platform, and a flying avatar-like cartoon character 140 that appears to personify a bumblebee, even though these elements do not exist in the real world.

It may be beneficial for 3D displays to produce an accommodative response that corresponds to the virtual depth of each displayed point. If the accommodative response to a display point does not correspond to its virtual depth as determined by stereopsis and other binocular depth cues, the human eye can experience an accommodation conflict. This can lead to unstable imaging, headaches, and, in the absence of accommodation information, an almost complete lack of perceived surface depth.

FIG. 1B shows a person's field of view (FOV) as well as their field of regard (FOR). The FOV is the portion of the environment that the user perceives at a given moment. This field of view can change as the person moves, turns their head, or redirects their gaze or eyes.

The FOR is the portion of the surrounding environment that can be perceived by the user through the wearable system. For a user wearing a head-mounted device, the FOR may include substantially all of the 4π steradian solid angle surrounding the wearer, because the wearer can move their body, head, or eyes to perceive virtually any direction in space. In some contexts, the user's movement may be more restricted, and the FOR may therefore subtend a smaller solid angle. FIG. 1B also illustrates a central field of view and a peripheral field of view. The central field of view gives a person a view of objects in the central region of the environmental view, while the peripheral field of view gives a corresponding view of objects in the peripheral region. What counts as central and what counts as peripheral depends on the direction of the person's gaze and their field of view. The field of view 155 can include objects 121 and 122; in this example, the central field of view 145 includes object 121, while object 122 lies in the peripheral field of view.

The field of view (FOV) 155 can contain multiple objects (e.g., objects 121 and 122). The size of the field of view can depend on the size or optical properties of the AR system, for example, the clear aperture size of the transparent window or lens on the head-mounted display through which light is transmitted from the real world to the user's eyes. In some embodiments, as the pose of the user 210 changes (e.g., head pose, eye pose, or body pose), the field of view 155 and the objects within it can change correspondingly. The wearable system described herein may include sensors, such as cameras, that monitor or image objects in the field of regard 165 as well as objects in the field of view 155. In certain embodiments, the system can alert the user to unnoticed events or objects occurring within their field of view, or occurring outside the field of view but still within the field of regard. The wearable system may also be able to distinguish what the user 210 is or is not paying attention to.
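One plausible way to implement such alerts is to classify each tracked object by the angle between its direction and the user's gaze direction, as in the sketch below. The angular thresholds and function names are assumptions for illustration, not values from the patent.

```python
import math

def angle_between(v1, v2) -> float:
    """Return the angle in degrees between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def classify(object_dir, gaze_dir, central_deg=30.0, fov_deg=100.0) -> str:
    """Place an object in the central FOV, the peripheral FOV, or the FOR only."""
    angle = angle_between(object_dir, gaze_dir)
    if angle <= central_deg / 2:
        return "central field of view"
    if angle <= fov_deg / 2:
        return "peripheral field of view"
    return "outside FOV, within FOR"  # a candidate for an alert

print(classify((0.1, 0.0, -1.0), (0.0, 0.0, -1.0)))  # central field of view
print(classify((1.0, 0.0, 0.0), (0.0, 0.0, -1.0)))   # outside FOV, within FOR
```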

The objects within the FOV and FOR can be either virtual or physical. Virtual objects can include operating system objects, such as an application that streams audio or video, an icon, a menu, or a file manager. Virtual objects can also be objects within an application, such as avatars, virtual items in games, graphics, or images. Some virtual objects are both operating system objects and application objects. The wearable system can add virtual elements to existing physical objects viewed through the transparent optics of the head-mounted display, allowing the user to interact with the physical objects. For example, the wearable system could add a virtual screen to a medical monitor in a room, which would allow the user to turn on or adjust the medical imaging equipment. The head-mounted display can thus present virtual images to the wearer in addition to the objects in the user's environment.

As shown in FIG. 1B, the field of regard (FOR) is the portion of the environment that can be perceived by a person 210 simply by turning their head or redirecting their gaze. The central portion of the field of view 155 of the person 210 may be referred to as the central field of view 145. The peripheral field of view is the region within the field of view 155 but outside the central field of view 145.

In some embodiments, an object 129 can be located outside the user's field of view but still be detectable by a sensor (e.g., a camera) on the wearable device, and information related to the object 129 can be displayed or otherwise used by the device. For example, the object 129 may be behind a wall or another obstruction in the user's environment, preventing the user from seeing it visually. The wearable device can include sensors that communicate with the object 129.

Examples of a Display System

Display systems that provide images corresponding to a plurality of depth planes can offer viewers VR, AR, and MR experiences. The images may be different for each depth plane, allowing the viewer to obtain depth cues from differences in image features between planes and from the different accommodation required to bring features on each plane into focus. As discussed elsewhere herein, such depth cues provide credible perceptions of depth.
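As an illustration of how such a display might match the accommodation cue to a virtual object's depth, the sketch below selects the depth plane (in diopters, i.e., 1/distance in meters) closest to the object's virtual distance. The specific plane values are assumed for illustration only.

```python
# Hypothetical depth planes in diopters (1 / distance in meters).
DEPTH_PLANES_DIOPTERS = [0.33, 1.0, 3.0]  # roughly 3 m, 1 m, and 0.33 m

def nearest_depth_plane(virtual_distance_m: float) -> float:
    """Pick the depth plane whose accommodation cue best matches the
    virtual object's depth, reducing accommodation conflicts."""
    target = 1.0 / virtual_distance_m
    return min(DEPTH_PLANES_DIOPTERS, key=lambda d: abs(d - target))

print(nearest_depth_plane(2.0))  # 0.33 (the ~3 m plane is closest to 0.5 D)
print(nearest_depth_plane(0.4))  # 3.0 (the ~0.33 m plane is closest to 2.5 D)
```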

FIG. 2 shows an example wearable system that can be configured to provide an AR/VR/MR environment. The wearable system 200 is also referred to as the AR system 200. The wearable system includes a display 220, as well as various electronic and mechanical modules to support the display 220. The display 220 may be coupled to a frame 230 that can be worn by the user, wearer, or viewer 210. The display 220 may be placed in front of the eyes of the user 210 and may present AR/VR/MR content to the user. The display 220 may be a head-mounted display (HMD) that is worn by the user. In some embodiments, a speaker is mounted to the frame 230 and placed adjacent to the user's ear canal (in other embodiments, another speaker is located adjacent to the opposite ear canal to allow for stereo/shapeable control of sound). The wearable system can include an audio sensor 232 (e.g., a microphone) to detect an audio stream and capture ambient sounds. In certain embodiments, other audio sensors (not shown) are positioned to receive stereo sound. Stereo sound reception is useful for determining the location of a sound source. The wearable system can perform voice or speech recognition on the audio stream.
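Stereo sound reception supports localization because a sound arrives at the two microphones at slightly different times. Below is a minimal sketch of the conventional time-difference-of-arrival approach using cross-correlation; the microphone spacing, sample data, and function names are assumptions, not details from the patent.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # meters per second in air
MIC_SPACING = 0.15      # assumed distance in meters between the two microphones

def direction_of_arrival(left: np.ndarray, right: np.ndarray, rate: int) -> float:
    """Estimate the azimuth of a sound source in degrees (0 = straight ahead,
    positive toward the right microphone under this sign convention)."""
    # Cross-correlate the channels to find the lag (in samples) at which
    # they align best; that lag is the inter-microphone time difference.
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    delay = lag / rate  # seconds
    # Far-field approximation: sin(theta) = delay * c / spacing.
    s = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# Example: a pulse reaching the right microphone 10 samples before the left,
# i.e., a source off to the right.
rate = 48_000
pulse = np.zeros(1024)
pulse[100] = 1.0
print(direction_of_arrival(np.roll(pulse, 10), pulse, rate))  # ~28.5 degrees
```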
