Metaverse – Christian Egeler, Xavier Hansen, Mohammad Raheel Khalid, Verizon Patent and Licensing Inc

Abstract for “Methods and systems for customizing scenes for presentation to users”

“An exemplary scene customization system detects a selection, by a user of an artificial reality device, of a customization scheme to be applied to an original immersive scene. Based on the application of the selected customization scheme to the original immersive scene, the system dynamically generates a custom immersive scene. The custom immersive scene is generated by identifying, within a dynamic volumetric model of the original immersive scene, a particular object of a particular object type that is included in the original immersive scene. The custom immersive scene is further generated by replacing, within the dynamic volumetric model and in accordance with the applied customization scheme, the particular object with a custom object that corresponds to the particular object type. The system also presents the custom immersive scene to the user by way of the artificial reality device.”

Background for “Methods and systems for customizing scenes for presentation to users”

“Advancements in computing and networking technology have made new forms of artificial reality (e.g., virtual reality, augmented reality, diminished reality, mixed reality, etc.) possible. For example, virtual reality technology may allow a user to experience an immersive, interactive virtual reality world that is entirely distinct from the environment that actually surrounds the user. The user may look around the immersive virtual reality world in any direction (e.g., forward, backward, left, right, up, and down), giving the user a sense that he or she is actually present in and experiencing that world. As another example, augmented reality technology may allow a user to experience an immersive, interactive augmented reality world that is based on or closely related to the user's actual surroundings. In an augmented reality experience, the user may perceive various aspects of the real environment around him or her, while virtual objects and/or other elements may be added to enhance that environment.

“Despite these advances in artificial reality technology, many users continue to segregate artificial reality experiences from real-life experiences that have not been artificially generated or augmented. For example, users may go to work each day and carry out their normal business without employing artificial reality technology, and may engage in artificial reality sessions (e.g., viewing a virtual reality media program, playing an augmented reality video game, etc.) only after work is done and/or other obligations have been fulfilled. As computing, networking, and other technologies continue to improve, personalized and enhanced artificial reality technologies may become increasingly integrated into many people's daily lives. Accordingly, there remain many opportunities to make artificial reality technology more accessible and useful to more people, to help users be more productive, to give users more control over their environments, and to allow users to create and enjoy environments that are more appealing, diverse, practical, and/or entertaining.

Herein are described methods and systems for dynamically customizing a scene for presentation to a user. As used herein, a "scene" or an "immersive scene" may refer to any real-world or artificially-generated (i.e., fully or partially virtual) environment or world in which a user may be located and/or which the user may experience. More specifically, a place or environment within the real world (i.e., as opposed to a virtual or artificially-generated world) in which the user may be located or may desire to be located may be referred to herein as a "real-world scene." A real-world scene may refer to any real-world scenery, real-world location, real-world event (e.g., a live event), or other place or environment that exists in reality rather than only virtually. For example, a real-world scene may include any indoor or outdoor location such as a street, a museum, a scenic landscape, outer space, the surface of another planet, or any other suitable real-world location.

In some examples, a real-world scene may refer to the scene that actually surrounds the user, while in other examples a real-world scene may refer to a scene that is remote from (i.e., does not surround) the user. A real-world scene may be associated with a real-world event such as a sporting event (e.g., a basketball game, an Olympic event, etc.), a concert (e.g., a concert in a large venue, a classical chamber concert in an intimate setting, etc.), a theatrical presentation (e.g., a Broadway musical, an outdoor pageant, etc.), a large-scale celebration (e.g., New Year's Eve in Times Square, Mardi Gras, etc.), a race (e.g., a stock-car race, a horse race, etc.), a political event (e.g., a presidential debate, a political convention, etc.), or any other real-world event that may interest potential users. The real-world scene may also be associated with a setting for a fictionalized scene (e.g., a set of a virtual reality television show or movie) or with any other scene at any indoor or outdoor location that may serve a particular application.

“While some immersive scenes may thus be real-world scenes, other immersive scenes may be at least partially artificially generated. For example, some immersive scenes may be entirely virtual (e.g., computer-generated virtual environments such as those used in video games), while other immersive scenes may be based on real-world scenes but may include one or more virtual elements or objects that have been added to, or that replace real objects within, the real-world scene. As one example described herein, custom immersive scenes may be dynamically generated from original immersive scenes (e.g., real-world scenes, artificially-generated scenes, etc.) by replacing original objects included in the original immersive scenes with custom objects in accordance with the methods and systems described herein.

“As used herein, an "object" (e.g., an original object, a custom object, etc.) may include any animate or inanimate object included in a scene. For example, in a scene that includes a conference room in an office setting, objects may include the conference table, the chairs around the conference table, the people in the room and their effects (e.g., clothing, papers, briefcases, coffee cups, etc.), as well as the floor, ceiling, walls, doors, and windows of the room, and so forth.

“A scene customization system may dynamically customize a scene (e.g., customizing an original immersive scene to dynamically generate a custom immersive scene) for presentation to a user in any of the ways described herein or as may serve a particular implementation. For example, the scene customization system may detect a selection, by the user, of a customization scheme to be applied to an original immersive scene during a scene customization session that the user experiences by way of an artificial reality device. The artificial reality device may be implemented by an immersive headset device or by any other device capable of providing an artificial reality experience (e.g., virtual reality, augmented reality, etc.), as described below.

“As used herein, a "customization scheme" may refer to any theme, concept, motif, filter, or the like according to which an immersive scene may be customized. For example, customization schemes may be based on themes related to historical time periods (e.g., a medieval Europe scheme, a 19th-century New York scheme, etc.); time periods of popular, nostalgic, or other significance (e.g., a 1950s diner scheme, a 1920s club scheme, etc.); popular fiction (e.g., a Star Trek USS Enterprise scheme, a Harry Potter Hogwarts scheme, etc.); particular aesthetics (e.g., a futuristic scheme, a fantasy castle scheme, a beach scheme, etc.); real-world locations (e.g., a Venice, Italy scheme, a scheme based on another planet, etc.); and/or any other motif, theme, or concept that may serve a particular implementation. A customization scheme may also be implemented in other ways. For instance, a customization scheme may be customized to a specific location, such as a user's yard, home, or office, without being based on an overarching theme like those mentioned above. Such a customization scheme may include art (e.g., favorite paintings or sculptures, etc.), design elements (e.g., old English brick, brushed aluminum paneling, etc.), and/or any other objects or elements that may not actually exist at the specific location but that the user wishes to perceive at the location during scene customization sessions.
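To make the notion of a customization scheme concrete, the following sketch (illustrative only; the class, field, and asset names are assumptions rather than anything defined by the disclosure) represents a scheme in Python as a named theme plus mappings from specifically known objects and from recognized object types to the custom assets that replace them.

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class CustomizationScheme:
        """A theme plus rules for which custom assets replace which originals."""
        name: str
        # Replacements keyed by a specifically known object (e.g., a brand/model).
        known_object_replacements: Dict[str, str] = field(default_factory=dict)
        # Replacements keyed by a recognized object type (e.g., "flooring", "door").
        type_replacements: Dict[str, str] = field(default_factory=dict)

    # A hypothetical medieval scheme like the one discussed above.
    medieval = CustomizationScheme(
        name="medieval",
        known_object_replacements={"brandx_conference_table_2015": "banquet_table"},
        type_replacements={"flooring": "stone_floor", "door": "castle_door"},
    )
    print(medieval.type_replacements["flooring"])  # -> stone_floor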

“As used herein, a "scene customization session" may refer to a period of time during which a particular customization scheme is applied to an immersive scene. A scene customization session may last for any length of time as may serve a particular implementation. For example, a user experiencing the real world by way of an immersive headset device may wear the device throughout the day so that the user's entire home, commute, office, and so forth are affected by the customization scheme, giving the user the sensation of living and working in Hogwarts or in medieval Europe, for instance. As another example, a scene customization session may last an hour or less to provide a temporary diversion analogous to traditional media content such as movies, television, and/or video games. In some examples, multiple scene customization sessions may follow one another in sequence.

“Once the user has selected a customization scheme, the scene customization system may dynamically generate a custom immersive scene based on the application of the selected customization scheme to the original immersive scene. To this end, the scene customization system may receive data representative of the original immersive scene. The original immersive scene may include the real-world scene surrounding the user in the case of an augmented reality experience, a distinct real-world scene or another scene (e.g., an artificially-generated scene) in the case of a virtual reality experience, or any other original immersive scene as may serve a particular implementation. Based on the received data, the scene customization system may develop (e.g., create, update, continuously update, etc.) a dynamic volumetric model of the original immersive scene. Within the dynamic volumetric model, the scene customization system may identify objects included in the original immersive scene and may replace them with custom objects in accordance with the applied customization scheme. The scene customization system may then present the dynamically generated custom immersive scene to the user by way of the artificial reality device. Examples of dynamically generating and presenting a custom immersive scene during a scene customization session are described below.
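The flow just described (detect the selected scheme, develop a model from received scene data, replace identified objects, present the result) can be pictured with the following self-contained sketch. It uses plain dictionaries in place of real capture hardware, volumetric modeling, and rendering; every function and field name is a placeholder assumption, not part of the disclosed system.

    from typing import Dict, List

    def develop_model(captured_objects: List[dict]) -> List[dict]:
        """Stand-in for building/updating a dynamic volumetric model from scene data."""
        return [dict(obj) for obj in captured_objects]

    def generate_custom_scene(model: List[dict], scheme: Dict[str, str]) -> List[dict]:
        """Replace each identified object whose type appears in the scheme."""
        custom = []
        for obj in model:
            replacement = scheme.get(obj["type"])
            custom.append({**obj, "asset": replacement} if replacement else dict(obj))
        return custom

    # Hypothetical data captured from the original immersive scene.
    captured: List[dict] = [
        {"type": "conference_table", "asset": "table_v2", "position": (0.0, 0.0, 0.0)},
        {"type": "flooring", "asset": "gray_carpet", "position": (0.0, -1.0, 0.0)},
        {"type": "plant", "asset": "ficus", "position": (2.0, 0.0, 1.0)},  # no rule: kept as-is
    ]
    medieval_scheme = {"conference_table": "banquet_table", "flooring": "stone_floor"}

    custom_scene = generate_custom_scene(develop_model(captured), medieval_scheme)
    print(custom_scene)  # the presentation step would render this via the artificial reality device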

By dynamically customizing scenes for presentation to users according to the methods and systems described herein, a scene customization system may provide various benefits. For example, users may feel less inclined than before to segregate artificial reality experiences from real-life experiences, and may instead integrate artificial reality into their lives to enhance everyday experiences. As one example, users may personalize certain aspects of their lives to make them more appealing and/or more diverse. For instance, rather than commuting on the same train or working in the same cubicle every day, users may customize their commutes and office settings to create the illusion of being somewhere different each day, giving them the sense of traveling through, or working in, environments more beautiful and interesting than those they actually occupy. Even though virtual reality experiences may not integrate with a user's daily life to the same degree as augmented reality experiences, similar benefits may still apply. For instance, rather than experiencing a virtual reality broadcast of a football game from the perspective of the actual scene (e.g., the field and stadium) in which the game takes place, a user may experience the virtual reality football game as though the stadium were perched atop a mountain surrounded by harrowing cliffs, or with any other customization that may serve a particular implementation.

The systems and methods described herein may provide more than aesthetic, entertainment, or experience-diversification benefits; they may also provide social and/or functional benefits. For example, some customization schemes may include informational elements (e.g., content from a newspaper, a website, etc.) that replace or augment elements of the original immersive scene in order to provide additional information to the user. For instance, the surface of a table at which the user eats breakfast may be replaced with top news stories that the user can peruse during the meal. As another example, dynamic scene customization may facilitate and/or enhance social interactions as users share customization schemes and/or scene customization sessions, co-inhabit custom immersive scenes based on shared customization schemes, and so forth, as described below.

“Various embodiments will now be described in more detail with reference to the figures. The disclosed methods and systems may provide one or more of the benefits mentioned above, as well as various additional and/or alternative benefits that will be made apparent herein.

“FIG. 1 illustrates an exemplary scene customization system 100 ("system 100") that dynamically customizes scenes for presentation to users. System 100 may include, without limitation, a management facility 102, a custom immersive scene generation facility 104, and a storage facility 106 selectively and communicatively coupled to one another. Although facilities 102 through 106 are shown as separate facilities in FIG. 1, facilities 102 through 106 may be combined into fewer facilities (e.g., combined into a single facility) or divided into more facilities as may serve a particular implementation.

Management facility 102 may receive, generate, analyze, and present data (e.g., data representative of user input, data stored in storage facility 106, data generated by management facility 102, etc.) to facilitate the dynamic customization of a scene for presentation to a user in any way that may serve a particular implementation. For example, management facility 102 may detect a selection, by a user, of a customization scheme to be applied to an original immersive scene during a scene customization session by receiving and analyzing input from the user. Management facility 102 may also generate or receive data representative of the original immersive scene to which the customization scheme is to be applied, as may serve a particular implementation. Examples of data representative of an original immersive scene that management facility 102 may receive and/or generate are described below. Management facility 102 may further present to the user (e.g., by way of an artificial reality device such as an immersive headset device or a mobile computing device) a custom immersive scene dynamically generated by custom immersive scene generation facility 104. Management facility 102 may also perform any other operations that may serve to implement the methods and systems described herein.

“Custom immersive scene generation facility 104 may dynamically generate a custom immersive scene based on an original immersive scene in any of the ways described herein or as may serve a particular implementation. For example, custom immersive scene generation facility 104 may apply a customization scheme (e.g., as detected by management facility 102) to an original immersive scene (e.g., as generated or received by management facility 102) to dynamically generate the custom immersive scene, which may then be provided back to management facility 102 (e.g., by way of storage facility 106) for presentation to the user. More particularly, custom immersive scene generation facility 104 may receive data representative of the original immersive scene, develop a dynamic volumetric model of the original immersive scene, identify one or more objects within the dynamic volumetric model, replace those objects with one or more custom objects, and/or perform any other operations as may serve to generate the custom immersive scene.
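One possible way to picture the division of labor among facilities 102, 104, and 106 is sketched below. The classes and data shapes are assumptions made purely for illustration; an actual implementation would involve real capture, modeling, and rendering components.

    class StorageFacility:                      # corresponds to facility 106
        def __init__(self):
            self.library_data = {}              # e.g., customization schemes (data 108)
            self.original_scene_data = None     # data 110
            self.custom_scene_data = None       # data 112

    class CustomSceneGenerationFacility:        # corresponds to facility 104
        def generate(self, original_model, scheme):
            # Replace any object whose type has a custom counterpart in the scheme.
            return [{**obj, "asset": scheme.get(obj["type"], obj.get("asset"))}
                    for obj in original_model]

    class ManagementFacility:                   # corresponds to facility 102
        def __init__(self, storage, generator):
            self.storage, self.generator = storage, generator
        def run_session(self, original_model, scheme_name):
            scheme = self.storage.library_data[scheme_name]
            self.storage.original_scene_data = original_model
            self.storage.custom_scene_data = self.generator.generate(original_model, scheme)
            return self.storage.custom_scene_data   # would be presented via the artificial reality device

    storage = StorageFacility()
    storage.library_data["medieval"] = {"chair": "wooden_throne", "table": "banquet_table"}
    management = ManagementFacility(storage, CustomSceneGenerationFacility())
    print(management.run_session([{"type": "chair", "asset": "office_chair"},
                                  {"type": "table", "asset": "conference_table"}], "medieval"))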

“In some examples, custom immersive scene generation facility 104 may perform these and/or other operations in real time so that the user may be presented with an up-to-date, timely experience (e.g., one that correlates with events occurring in the scene around the user). Although data processing and data distribution may take a finite amount of time, such that it may be impossible for a user to experience a custom immersive scene that is perfectly synchronized with the original immersive scene upon which it is based, operations are considered herein to be performed "in real time" when they are performed immediately and without undue delay. Accordingly, a user may experience a custom immersive scene in real time even if the custom immersive scene lags the actual occurrences by a few minutes.

“Custom immersive scene generation facility 104 may be configured to support real-time dynamic volumetric modeling and real-time experiencing of custom immersive scenes based on real-world immersive scenes. As such, custom immersive scene generation facility 104 may include any configuration of hardware resources that may provide the processing needed for real-time generation of complex custom immersive scenes based on data representative of dynamic volumetric models of original immersive scenes. In certain implementations, such hardware resources may include multiple servers with multiple processing units, while in other implementations the hardware resources used to dynamically generate the custom immersive scene may be integrated into an artificial reality device such as an immersive headset device, a mobile computing device, or the like. Additional details about custom immersive scene generation facility 104, including examples of the operations mentioned above, are described below.

“Storage facility 106 may maintain any data received, generated, managed, maintained, presented, used, and/or transmitted by facilities 102 or 104 in a particular implementation. For example, storage facility 106 may include library data 108, which may include data associated with predefined (e.g., provider-defined) or custom (e.g., user-defined) customization schemes and/or preconfigured customizations that may be applied to original immersive scenes. For instance, an artificial reality provider (e.g., a virtual reality content provider, an augmented reality content provider, etc.) may provide a library of customization schemes such as those described herein, and certain customization schemes may be modified or defined by the user. As one example, the library may include a basic or generic medieval customization scheme provided by the provider, as well as a customized medieval customization scheme that the user has created or modified with or without assistance from the provider.

“Library data 108 may include data associated with each customization scheme in the library. For example, each customization scheme may be associated with data representative of models of known objects and/or known object types, which may be used to identify objects within original immersive scenes and to determine whether and how those objects should be replaced with custom objects. Each customization scheme may also be associated with data representative of the custom objects that are to replace original objects within custom immersive scenes. For instance, library data 108 may include a model of a known object (e.g., a vehicle of a particular make and model year) along with a model of a corresponding custom object associated with the customization scheme, which may be used to replace the vehicle within the custom immersive scene when the vehicle is identified by custom immersive scene generation facility 104. Library data 108 may further include any other data that may serve a particular implementation, such as data associated with preconfigured customizations that may be applied to real-world scenes.
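The following is a hypothetical sketch of what one entry of library data 108 might look like: recognition data for a known object alongside the custom objects that replace it under various schemes. The identifiers and asset names are invented for illustration.

    # Illustrative (hypothetical) shape for one library entry.
    library_data_108 = {
        "vehicle:brandx_sedan_2016": {
            # Data used to identify the original object in the captured scene.
            "recognition": {"approx_dimensions_m": (4.6, 1.8, 1.4),
                            "salient_features": ["grille", "badge"]},
            # Custom objects that replace it, keyed by customization scheme.
            "replacements": {"medieval": "horse_cart_asset", "futuristic": "hover_car_asset"},
        },
    }

    def replacement_for(object_id: str, scheme: str):
        entry = library_data_108.get(object_id)
        return entry["replacements"].get(scheme) if entry else None

    print(replacement_for("vehicle:brandx_sedan_2016", "medieval"))  # -> horse_cart_asset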

“As shown in FIG. 1, storage facility 106 may also include original immersive scene data 110 and custom immersive scene data 112. Original immersive scene data 110 may include data representative of an original immersive scene, which may be received from scene capture devices (e.g., cameras, three-dimensional ("3D") depth modeling devices, etc.) that may, for instance, be integrated into management facility 102. For example, scene capture devices may capture data representative of the locations and appearances of objects included in an original immersive scene surrounding the user, and this data may then be used (e.g., by custom immersive scene generation facility 104) to develop a dynamic volumetric model that includes the objects. Original immersive scene data 110 may include raw received data representative of the original immersive scene, as well as data representative of the dynamic volumetric model of the original immersive scene developed (e.g., created, maintained, updated, etc.) by custom immersive scene generation facility 104. Original immersive scene data 110 may also include locational coordinates and/or any other data related to real-world scenes as may serve a particular implementation.

“Similarly, custom immersive scene data 112 may include data representative of the custom immersive scene generated by applying a customization scheme and/or one or more preconfigured customizations to the original immersive scene. For example, custom immersive scene data 112 may include data representative of custom objects used to replace original objects within the dynamic volumetric model.

“FIG. 2 illustrates an exemplary configuration 200 in which system 100 dynamically customizes a scene for presentation to an exemplary user. Configuration 200 includes an artificial reality device 202 located within a real-world scene 204. Artificial reality device 202 is associated with (e.g., being used by) a user 206 who is also located within real-world scene 204. Artificial reality device 202 is further communicatively coupled, by way of a network 208, with an artificial reality provider server 210 ("provider server 210"), which may be remote from artificial reality device 202 and user 206 (e.g., in a different location than real-world scene 204).

“Configuration 200 may include an implementation of system 100 that dynamically customizes an immersive scene for presentation to user 206. For example, system 100 may be implemented by hardware and/or software within artificial reality device 202, by hardware and/or software within provider server 210, and/or may be distributed across both provider server 210 and artificial reality device 202 and communicate by way of network 208.

“The implementation of system 100 in configuration 200 may dynamically customize an immersive scene for presentation to user 206 as an augmented reality experience. As shown, the original immersive scene may include real-world scene 204 surrounding the user during the scene customization session. Accordingly, the implementation of system 100 in configuration 200 may receive data representative of the original immersive scene by, for instance, directly capturing data representative of real-world scene 204 during the scene customization session. For example, artificial reality device 202 may include hardware components such as one or more cameras, 3D depth modeling equipment, and the like, and may use these components to capture data representative of real-world scene 204 (e.g., by detecting light reflecting from objects included in real-world scene 204). As will be described below, in some examples the data representative of real-world scene 204 surrounding the user (i.e., the data upon which the creating and continuous updating of the dynamic volumetric model is based) may be captured solely by the one or more artificial reality devices 202 being used by user 206 to experience the scene customization session.

“Based on the directly captured data (e.g., video captured by artificial reality device 202, etc.), the implementation of system 100 in configuration 200 may develop the dynamic volumetric model of the original immersive scene surrounding user 206 (i.e., real-world scene 204). For example, system 100 may create, and continuously update throughout the scene customization session, a dynamic volumetric model of real-world scene 204 based on the data representative of real-world scene 204 captured by artificial reality device 202.
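The "create and continuously update" behavior can be pictured as a loop that folds each newly captured frame of object observations into a persistent model keyed by object identifier, as in the simplified sketch below. The frame format and update rule are assumptions made for illustration only.

    from typing import Dict, List

    def update_volumetric_model(model: Dict[str, dict], frame: List[dict]) -> Dict[str, dict]:
        """Fold one frame of captured observations into the persistent model."""
        for observation in frame:
            obj_id = observation["id"]
            # New objects are added; previously seen objects get refreshed positions/textures.
            model[obj_id] = {**model.get(obj_id, {}), **observation}
        return model

    model: Dict[str, dict] = {}
    frames = [
        [{"id": "table_1", "type": "conference_table", "position": (0.0, 0.0, 0.0)}],
        [{"id": "table_1", "position": (0.0, 0.0, 0.1)},          # the user's viewpoint shifted
         {"id": "chair_3", "type": "office_chair", "position": (1.2, 0.0, 0.4)}],
    ]
    for frame in frames:                         # stands in for the ongoing capture loop
        update_volumetric_model(model, frame)
    print(sorted(model))                         # -> ['chair_3', 'table_1']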

“Once system 100 develops or begins developing (i.e., creating, updating, maintaining, etc.) the dynamic volumetric model of real-world scene 204, system 100 may identify and replace objects within the dynamic volumetric model of real-world scene 204 to generate a custom immersive scene, and may present the custom immersive scene to user 206, as will be further described and illustrated below. Each of the elements shown in FIG. 2 will now be described.

“Artificial reality device 202 may include any of various devices used by user 206 to access and experience a custom immersive scene generated and presented by system 100. For example, artificial reality device 202 may include an immersive headset device (e.g., a head-mounted virtual reality device, a head-mounted augmented reality device, a virtual reality gaming device, etc.), a personal computer device (e.g., a desktop computer, a laptop computer, etc.), a mobile or wireless device (e.g., a smartphone, a tablet device, a mobile reader, etc.), or any other device or combination of devices that may serve to present a custom immersive scene. Examples of different types of artificial reality devices that may implement artificial reality device 202 are described below. As will be described, different types of artificial reality devices may provide different experiences and/or levels of immersion for user 206.

“In configuration 200, real-world scene 204 may represent any of the real-world scenes described herein or any other suitable real-world scene. More particularly, real-world scene 204 may be the scene or environment in which user 206 and artificial reality device 202 are located, and, as such, real-world scene 204 may change as artificial reality device 202 and user 206 move through the world. For example, real-world scene 204 may be the office of user 206 while user 206 is at work, may be a train station and/or the real-world environment surrounding a train as user 206 commutes home from work, may be the home of user 206 when user 206 arrives home, and so forth. As such, real-world scene 204 may include or constitute the original immersive scene upon which system 100 bases a dynamically generated custom immersive scene. In some examples, real-world scene 204 may include only a single room in which user 206 is located, while in other examples real-world scene 204 may include a larger area (e.g., an entire office building in which user 206 works, a large outdoor area in which user 206 is located, etc.).

“Network 208 may include any provider-specific network (e.g., a cable or satellite carrier network, a mobile telephone network, etc.), the Internet, a wide area network, or any other suitable network, and data may flow between provider server 210 and artificial reality device 202 using any communication technologies, devices, media, and protocols as may serve a particular implementation. For example, provider server 210 and artificial reality device 202 may communicate using any suitable communication technologies, devices, media, and/or protocols supportive of data communications, including socket connections, Ethernet, data bus technologies, communication devices, Transmission Control Protocol ("TCP"), Internet Protocol ("IP"), Telnet, Hypertext Transfer Protocol ("HTTP"), HTTPS, Session Initiation Protocol ("SIP"), Simple Object Access Protocol ("SOAP"), Extensible Markup Language ("XML") and variations thereof, Real-Time Transport Protocol ("RTP"), User Datagram Protocol ("UDP"), Global System for Mobile Communications ("GSM") technologies, Code Division Multiple Access ("CDMA") technologies, 4G Long Term Evolution ("LTE"), Voice over IP ("VoIP"), Voice over LTE ("VoLTE"), WiMax, Time Division Multiple Access ("TDMA") technologies, Short Message Service ("SMS"), Multimedia Message Service ("MMS"), radio frequency ("RF") signaling technologies, wireless communication technologies (e.g., Bluetooth, Wi-Fi, etc.), and/or other suitable communications technologies. Although only one network 208 is shown interconnecting artificial reality device 202 and provider server 210 in configuration 200, it will be recognized that artificial reality device 202 and provider server 210 may intercommunicate by way of multiple interconnected wired and/or wireless networks as may serve a particular implementation.

“Provider server 210 may include one or more servers or other computing devices associated with (e.g., provided and/or managed by) an artificial reality content provider (e.g., a network provider, a cable provider, a satellite provider, an Internet service provider, a provider of artificial reality mobile applications, etc.). In some examples, provider server 210 may be included within system 100 and may perform, or may facilitate artificial reality device 202 in performing, any of the operations described herein. For example, provider server 210 may communicate with artificial reality device 202 to receive the data representative of real-world scene 204 and may include processing resources used to develop, or to facilitate the development of, the dynamic volumetric model based on real-world scene 204. Provider server 210 may also include processing resources used to identify and replace objects within the dynamic volumetric model and/or to perform other operations to dynamically generate the custom immersive scene presented to user 206 by way of artificial reality device 202. In other examples, provider server 210 may be separate from (e.g., and communicatively coupled to) system 100 and may facilitate system 100 in dynamically customizing a scene for presentation to user 206 in any manner that may serve a particular implementation. In yet other examples, provider server 210 may fully implement system 100 and may present the custom immersive scene to user 206 by providing data representative of the custom immersive scene to artificial reality device 202 for presentation to the user.

Provider server 210 may also store, provide, and/or facilitate the creation or modification of customization schemes that may be applied to real-world scene 204. For example, provider server 210 may store predefined (e.g., provider-defined) and/or user-defined customization schemes along with the data associated with customization schemes described herein (e.g., models of known objects, custom objects, etc.). Provider server 210 may further facilitate the selection, access, modification, creation, and/or other interaction with customization schemes (e.g., by way of an interface presented on artificial reality device 202).

“FIG. 3 illustrates another exemplary configuration 300 in which system 100 dynamically customizes a scene for presentation to an exemplary user. Configuration 300 shares certain similarities with configuration 200, including the inclusion of certain of the same elements. For example, like configuration 200, configuration 300 includes artificial reality device 202 located within real-world scene 204. As in configuration 200, artificial reality device 202 is associated with user 206, who is also located within real-world scene 204, and artificial reality device 202 is communicatively coupled, by way of network 208, with provider server 210, which may be remote from artificial reality device 202 and user 206 (e.g., in a different location than real-world scene 204).

“Like configuration 200, configuration 300 may include an implementation of system 100 that dynamically customizes an immersive scene for presentation to user 206. For example, system 100 may be implemented by hardware and/or software within artificial reality device 202, by hardware and/or software within provider server 210, and/or may be distributed across both provider server 210 and artificial reality device 202 and communicate by way of network 208. In some embodiments, the implementation of system 100 may also include, or be communicatively coupled with (e.g., by way of network 208), a camera 302.

“In contrast to configuration 200, in which the implementation of system 100 may be configured to present an augmented reality experience to user 206, the implementation of system 100 shown in configuration 300 may be configured to dynamically customize an immersive scene for presentation to user 206 as a virtual reality experience. As shown, the original immersive scene may include, or be associated with, a real-world scene 304 that is distinct from real-world scene 204 surrounding the user during the scene customization session. In other words, user 206 may experience an immersive scene associated with a real-world scene other than the one in which user 206 is actually located. For example, while real-world scene 204 may be associated with the home or workplace of user 206, real-world scene 304 may be associated with a real-world event (e.g., a concert, a sporting event, etc.), a setting for a fictionalized program (e.g., a virtual reality television show, etc.), an exotic place that may be difficult or expensive for user 206 to visit (e.g., another country, an underwater expedition, etc.), or any other real-world scene that may be of interest to user 206. In some examples, the virtual reality experience provided by configuration 300 may include an artificially-generated scene in place of real-world scene 304, such that camera 302 may not be employed.

In the virtual reality example of configuration 300, the implementation of system 100 may receive data representative of the original immersive scene (i.e., real-world scene 304 rather than real-world scene 204) and may develop the dynamic volumetric model differently than described above for the augmented reality example of configuration 200. Specifically, rather than receiving the data representative of the original immersive scene by directly capturing real-world scene 304 (which may be impractical because artificial reality device 202 is not located within real-world scene 304), system 100 may receive the data from a scene modeling system that has elements located in the vicinity of real-world scene 304 and that captures data representative of real-world scene 304.

“For example, the scene modeling system may include camera 302 and, possibly, one or more additional cameras and/or other equipment (e.g., hardware, software, etc.) for capturing data representative of real-world scene 304 and/or for modeling real-world scene 304. The scene modeling system, including camera 302, may be located in the vicinity of (e.g., within) real-world scene 304. Components of the scene modeling system such as camera 302 may capture data representative of real-world scene 304 and transmit the data by way of network 208 to be received by the implementation of system 100 (e.g., by provider server 210 and/or artificial reality device 202). The implementation of system 100 in configuration 300 may also develop the dynamic volumetric model of the original immersive scene (i.e., real-world scene 304) differently than the implementation of system 100 in configuration 200. Specifically, rather than itself creating and continuously updating the dynamic volumetric model, the implementation of system 100 in configuration 300 may receive data representative of the dynamic volumetric model of real-world scene 304 from the scene modeling system. For example, the scene modeling system may generate the data representative of the dynamic volumetric model of real-world scene 304 (e.g., based on the data representative of real-world scene 304 directly captured by camera 302), and system 100 may continuously update the dynamic volumetric model of real-world scene 304 based on additional data received from the scene modeling system (e.g., as camera 302 continues to capture data representative of real-world scene 304).
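In this virtual reality case, a similar update loop may be driven by messages received from the remote scene modeling system rather than by locally captured frames. The message format below is purely an assumption for illustration and is not drawn from the disclosure.

    from typing import Dict

    def apply_remote_update(model: Dict[str, dict], message: dict) -> None:
        """Apply one update message received (e.g., over network 208) to the local model copy."""
        if message["op"] == "upsert":
            model.setdefault(message["id"], {}).update(message["data"])
        elif message["op"] == "remove":
            model.pop(message["id"], None)

    local_model: Dict[str, dict] = {}
    incoming = [  # hypothetical stream from the scene modeling system that includes camera 302
        {"op": "upsert", "id": "player_7", "data": {"type": "person", "position": (10.0, 0.0, 3.0)}},
        {"op": "upsert", "id": "player_7", "data": {"position": (10.5, 0.0, 3.1)}},
        {"op": "remove", "id": "player_7"},
    ]
    for msg in incoming:
        apply_remote_update(local_model, msg)
    print(local_model)  # -> {}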

“As with configuration 200, once the implementation of system 100 in configuration 300 develops or begins developing (i.e., receiving, updating, maintaining, etc.) the dynamic volumetric model of the real-world scene, system 100 may identify and replace objects within the dynamic volumetric model to generate a custom immersive scene, and may present the custom immersive scene to user 206 by way of artificial reality device 202. In configuration 300, artificial reality device 202, real-world scene 204, network 208, and provider server 210 may perform functions similar to those described above in relation to configuration 200, adapted to implement the virtual reality functionality of configuration 300. For example, artificial reality device 202 may be a different type of artificial reality device (e.g., one better suited to providing virtual reality experiences than augmented reality experiences), and provider server 210 may perform additional functionality to implement the virtual reality experience provided in configuration 300 that may not be performed for the augmented reality experience provided in configuration 200. For instance, provider server 210 may operate in conjunction with the scene modeling system described above (i.e., the system that includes camera 302 and captures data representative of real-world scene 304) to capture, create, and/or distribute the dynamic volumetric model of real-world scene 304.

The scene modeling system that includes camera 302 may capture data representative of real-world scene 304 and generate the dynamic volumetric model of real-world scene 304 in any suitable way. For example, the scene modeling system may include one or more cameras (i.e., camera 302) and/or other equipment located within or around real-world scene 304 and configured to capture data representative of objects included within real-world scene 304 (e.g., depth data, texture data, and/or any other data used for 3D modeling). In certain examples, the scene modeling system may capture the data and/or generate the dynamic volumetric model of real-world scene 304 in accordance with techniques described in co-pending U.S. patent application Ser. No. 15/141,707, filed Apr. 28, 2016, and entitled METHODS AND SYSTEMS FOR CREATING AND PROVIDING VOLUMETRIC REPRESENTATIONS OF REAL-WORLD EVENTS, and/or co-pending U.S. patent application Ser. No. 15/141,717, filed Apr. 28, 2016, and entitled METHODS AND SYSTEMS FOR CREATING AND MANIPULATING AN INDIVIDUALLY-MANIPULABLE VOLUMETRIC MODEL OF AN OBJECT, both of which are hereby incorporated by reference in their entirety.

“FIG. 4 illustrates an exemplary artificial reality experience 400 in which user 206 is presented with an exemplary field of view 402 of an immersive scene 404. Experience 400 may be, for example, an augmented reality experience (e.g., provided by an implementation of system 100 such as that shown in configuration 200 of FIG. 2), a virtual reality experience (e.g., provided by an implementation of system 100 such as that shown in configuration 300 of FIG. 3), or any other suitable type of experience whereby an immersive scene (e.g., an original or a custom immersive scene based on a real-world or artificially-generated scene) is presented to user 206 by way of an artificial reality device.

“User 206 may experience immersive scene 404 by providing user input to dynamically change field of view 402 to display whatever content within immersive scene 404 user 206 wishes to view. For example, user 206 may provide user input indicating a desire to view a portion of immersive scene 404 that is not currently presented within field of view 402. For an artificial reality device such as a personal computer device or a mobile computing device, this user input may include a mouse movement, navigation-key input from a keyboard, a swipe gesture, or the like. For an artificial reality device incorporating particular sensors (e.g., motion sensors, directional sensors, orientation sensors, etc.), such as an immersive headset device, the user input may include a change in the orientation of the display screen of the artificial reality device with respect to at least one of two orthogonal axes. For example, the artificial reality device may be configured to detect changes in the orientation of the display screen with respect to an x-axis, a y-axis, and/or a z-axis. Based on the detected change in orientation, the artificial reality device by which user 206 is experiencing artificial reality experience 400 may dynamically change the content presented within field of view 402, gradually replacing one portion of immersive scene 404 with another portion of immersive scene 404 as user 206 looks around.

“To illustrate, FIG. 4 shows that immersive scene 404 may be a real-world scene that includes a conference room in an office setting. User 206 may provide user input to the artificial reality device by which user 206 is experiencing immersive scene 404 (e.g., artificial reality device 202) to indicate that user 206 wishes to look to the left of what is currently presented within field of view 402. For example, user 206 may press a left navigation key on a keyboard, perform a rightward swipe gesture, or, while wearing an immersive headset device, turn his or her head to the left to change the orientation of the display screen with respect to a y-axis. In response, the objects depicted within immersive scene 404 (e.g., the conference table, chairs, windows, etc.) may scroll to the right across field of view 402 to give user 206 the sensation of looking to the left within immersive scene 404. As those objects scroll to the right side of field of view 402, new objects (e.g., objects not shown in FIG. 4) may scroll into view from the left side of field of view 402. In this way, user 206 may provide user input to cause field of view 402 to present any portion of immersive scene 404 that user 206 desires to view.
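The head-turn behavior just illustrated can be sketched as a check of whether each object's bearing falls within a yaw-centered horizontal field of view, as below. The angles, object layout, and 90-degree field of view are invented purely for illustration.

    import math

    def visible_objects(objects, yaw_deg: float, fov_deg: float = 90.0):
        """Return the objects whose bearing lies within the current horizontal field of view."""
        half = fov_deg / 2.0
        visible = []
        for name, (x, z) in objects.items():
            bearing = math.degrees(math.atan2(x, z))          # 0 degrees = straight ahead (+z)
            delta = (bearing - yaw_deg + 180.0) % 360.0 - 180.0
            if abs(delta) <= half:
                visible.append(name)
        return visible

    scene = {"conference_table": (0.0, 2.0), "window": (-3.0, 1.0), "door": (3.0, 0.5)}
    print(visible_objects(scene, yaw_deg=0.0))    # looking straight ahead -> ['conference_table']
    print(visible_objects(scene, yaw_deg=-60.0))  # after turning the head to the left -> ['window']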

In certain examples, field of view 402 may be presented entirely on an opaque display screen of an artificial reality device (e.g., a typical display screen of a tablet device, a mobile computing device, a laptop computer, or the like). For instance, if system 100 is presenting a virtual reality experience to user 206, a field of view of an immersive scene associated with a real-world scene distinct from the real-world scene surrounding user 206, or associated with an artificially-generated scene, may fill the entirety of the opaque display screen of the artificial reality device. An augmented reality experience may similarly be presented to user 206 on an opaque display screen by using a pass-through camera (e.g., a camera positioned behind the opaque display screen of the artificial reality device).

“In other examples, field of view 402 may be presented on a display screen that is (or may be made) partially transparent, such as a display screen of certain artificial reality devices specifically configured for augmented reality-type applications. For instance, if system 100 is presenting an augmented reality experience to user 206, certain objects of immersive scene 404 (e.g., original objects of the original immersive scene, custom objects of the custom immersive scene that do not replace original objects, etc.) may be visible to user 206 by way of direct light passing through the partially transparent display screen, while other objects (e.g., virtual objects, custom objects that have been added to or that replace objects within the immersive scene, etc.) may be presented at appropriate locations within immersive scene 404 by being projected onto the partially transparent display screen, by being projected onto the retina of user 206 (e.g., using a laser waveguide), and/or in any other way that may serve a particular implementation.

“Different types of artificial reality devices may provide different experiences for user 206 by presenting field of view 402 of immersive scene 404 in different ways and/or by receiving user input from user 206 in different ways. To illustrate, FIG. 5 shows exemplary artificial reality devices 500 configured to facilitate experiencing of immersive scene 404 by user 206. Artificial reality device 202 may be implemented by any of artificial reality devices 500 or by any other suitable artificial reality device operating according to similar principles.

“As one example of an artificial reality device, an immersive headset device 502 (e.g., a head-mounted virtual reality or augmented reality device) may include a separate display screen 504 for each eye of user 206. In some examples, a single display screen 504 may be shared by both eyes (e.g., a single, partially transparent display screen such as an augmented reality visor). In other examples, as shown, distinct display screens 504 (e.g., opaque or partially transparent display screens) within immersive headset device 502 may be configured to display slightly different versions of field of view 402 (e.g., stereoscopic versions that may have been captured by one or more stereoscopic cameras) to give user 206 the sense that immersive scene 404 is three-dimensional. Display screens 504, whether partially transparent or opaque, may also be configured to fill the peripheral vision of user 206, providing a sense of realism to user 206.

Immersive headset device 502 may also include motion sensors (e.g., accelerometers), directional sensors (e.g., magnetometers), orientation sensors (e.g., gyroscopes), and/or any other suitable sensors to detect natural movements of user 206 while user 206 experiences immersive scene 404. For example, user 206 may provide input indicative of a desire to move field of view 402 in a certain direction and by a certain amount within immersive scene 404 simply by turning his or her head in that direction and by that amount. As such, immersive headset device 502 may provide user 206 with a natural, hands-free experience that does not require any physical console control and that may provide the most immersive artificial reality experience of artificial reality devices 500.

“As another example of an artificial reality device, a personal computer device 506 (e.g., a desktop computer, a laptop computer, etc.) having a display screen 508 (e.g., a monitor) may be used by user 206 to experience immersive scene 404. Because display screen 508 may not provide distinct stereoscopic views for each eye of user 206 and may not fill the peripheral vision of user 206, personal computer device 506 may not provide the same degree of immersiveness as immersive headset device 502. However, personal computer device 506 may be associated with other advantages, such as its ubiquity among casual users of artificial reality who may not be inclined or able to purchase an immersive headset device. For example, personal computer device 506 may allow a user to experience immersive scene 404 (e.g., virtual reality content) by way of a standard web browser. User 206 may provide user input to personal computer device 506 by way of a keyboard 510 (e.g., using navigation keys on keyboard 510 to move field of view 402) and/or by way of a mouse 512 (e.g., moving mouse 512 to move field of view 402). In certain examples, a combination of keyboard 510 and mouse 512 may be used to provide user input, such as moving field of view 402 by way of the navigation keys on keyboard 510 and clicking on or otherwise interacting with objects within immersive scene 404 by way of mouse 512.

“As yet another example of an artificial reality device, a mobile computing device 514 (e.g., a smartphone, a tablet device, a mobile reader, etc.) having a display screen 516 may be used by user 206 to experience immersive scene 404. Mobile computing device 514 may combine advantages of immersive headset devices and personal computer devices to provide a highly versatile artificial reality device for experiencing immersive scene 404. Specifically, like personal computer devices, mobile devices are extremely ubiquitous, potentially providing access to many more people than immersive headset devices. However, because many mobile devices are equipped with motion sensors, directional sensors, orientation sensors, and the like, mobile devices may also be configured to provide user 206 with an immersive experience comparable to that provided by immersive headset devices. For example, mobile computing device 514 may be configured to divide display screen 516 into two versions (e.g., stereoscopic versions) of field of view 402 and to fill the peripheral vision of user 206 when mobile computing device 514 is mounted to the head of user 206 using a relatively inexpensive and commercially-available mounting apparatus (e.g., a cardboard apparatus). In other embodiments, mobile computing device 514 may facilitate the experiencing of immersive scene 404 by receiving user input at arm's length (i.e., not mounted to the head of user 206 but acting as a dynamic window through which user 206 may view immersive scene 404 from a handheld display), or by other techniques that may serve a particular embodiment.
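A rough sketch of the display-splitting idea follows: compute two side-by-side viewports (one per eye) from the screen dimensions, and apply a small horizontal camera offset so that each eye receives a slightly different version of field of view 402. The dimensions and interpupillary distance used here are illustrative assumptions, not values from the disclosure.

    from typing import NamedTuple

    class Viewport(NamedTuple):
        x: int
        y: int
        width: int
        height: int

    def stereoscopic_viewports(screen_w: int, screen_h: int) -> tuple:
        """Split a landscape screen into left-eye and right-eye viewports."""
        half = screen_w // 2
        return Viewport(0, 0, half, screen_h), Viewport(half, 0, half, screen_h)

    def eye_camera_offsets(ipd_m: float = 0.063) -> tuple:
        """Horizontal camera offsets (meters) producing the slightly different per-eye views."""
        return -ipd_m / 2.0, ipd_m / 2.0

    left_vp, right_vp = stereoscopic_viewports(2400, 1080)
    print(left_vp, right_vp)          # two side-by-side halves of display screen 516
    print(eye_camera_offsets())       # -> (-0.0315, 0.0315)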

“While examples of artificial reality devices have been described, these examples are illustrative and not limiting. Any suitable device or combination of devices configured to present a custom immersive scene according to the principles described herein may serve as an artificial reality device. For example, an artificial reality device may include a tethered device configuration (e.g., a tethered headset device) or an untethered device configuration (e.g., a display screen that is untethered from a processing device). As another example, an immersive headset device or other artificial reality device may be used in conjunction with a controller device such as a wearable controller (e.g., a ring controller) and/or a handheld controller.

“To illustrate how system 100 may dynamically customize a scene for presentation to a user, including by receiving data representative of an original immersive scene, developing a dynamic volumetric model of the original immersive scene based on the data, and identifying and replacing objects within the dynamic volumetric model, an extended example will now be provided. This example will be described with respect to FIGS. 6-9B.

“FIG. 6 illustrates a plurality of objects included within an exemplary original immersive scene 600 that may be dynamically customized by system 100. Certain objects labeled and described in relation to FIG. 6 are also included in FIGS. 7A-9B, but may not be explicitly labeled in FIGS. 7A-9B in order to reduce clutter and to clarify the descriptions of those figures. Accordingly, all objects of original immersive scene 600 referred to in the description of any of FIGS. 6-9B are illustrated and labeled in relation to FIG. 6.

“Specifically, FIG. 6 shows a cutaway top view of original immersive scene 600, which may be a real-world scene associated with an office setting and which includes a conference room 602 and a hallway 604. For simplicity (e.g., due to doors that may be closed, etc.), only objects located within conference room 602 and hallway 604 are included (e.g., described, represented within dynamic volumetric models, etc.) in the examples of FIGS. 6-9B. However, it will be understood that similar principles may be used to extend the dynamic volumetric model and to dynamically customize scenes for presentation to a user as the user moves from conference room 602 and hallway 604 into new immersive scenes (e.g., into other rooms of the office setting, into the outside world, etc.).

“The objects of original immersive scene 600 may fall into one of three categories depending on whether and to what extent the objects are "known" to system 100 (e.g., identifiable by system 100 using object recognition techniques or the like).

As a first category of objects, original immersive scene 600 may include known objects 606 for which system 100 has access to predefined data. For example, system 100 may have access (e.g., within storage facility 106) to data such as 3D models, images, and/or physical description data that allows system 100 to identify known objects 606 with particularity, as will be described below. As shown, known objects 606 may include a particular conference table 606-1 (e.g., a conference table of a particular brand and model for which system 100 has access to data), particular office chairs 606-2 (i.e., office chairs 606-2-1 through 606-2-8, which may all be of the same brand and model for which system 100 has access to data), beverages 606-3 (e.g., one or more cans of a particular soda for which system 100 has access to data), and a water fountain 606-4 (e.g., a water fountain of a particular brand and model for which system 100 has access to data).

“As a second category of objects, original immersive scene 600 may include unknown objects 608 of unknown types. Specifically, system 100 may not have access to predefined data associated with unknown objects 608 and may likewise be unable to identify unknown objects 608 based on known object types (in contrast with the third category of objects described below). Because system 100 may be unable to identify unknown objects 608, system 100 may not replace unknown objects 608 with custom objects within the dynamic volumetric model. In other examples, system 100 may have access to data associated with unknown objects 608 or may be able to identify unknown objects 608 based on known object types, but may nevertheless not replace unknown objects 608 with custom objects, either because unknown objects 608 are included in a predefined list of objects that are not to be augmented or because the particular customization scheme does not include suitable custom objects with which to replace unknown objects 608. As shown in FIG. 6, unknown objects 608 within original immersive scene 600 may include a supply counter 608-1 (e.g., a counter on which beverages 606-3 are made available and which may also hold office supplies for meeting attendees in conference room 602), a mail cart 608-2 (e.g., used by aides throughout the office to distribute items), and a plant 608-3.

As a third category of objects, original immersive scene 600 may include unknown objects of known types, referred to herein as recognized objects 610. Specifically, system 100 may not have access to predefined data that allows system 100 to identify recognized objects 610 with particularity (e.g., based on a brand or model of the object), in contrast with known objects 606. However, system 100 may be able to identify recognized objects 610 as being of known object types that have characteristics for which system 100 has access to data. For example, system 100 may identify recognized objects 610 based on basic characteristics such as location, orientation, size, color, texture, and/or any other suitable characteristics. Because system 100 can identify recognized objects 610 in this way, system 100 may replace recognized objects 610 within the dynamic volumetric model even though recognized objects 610 may not be known or identified with particularity. As shown in FIG. 6, recognized objects 610 within original immersive scene 600 may include doors 610-1 (i.e., door 610-1-1 between conference room 602 and hallway 604, as well as other doors 610-1 along hallway 604), flooring 610-2 (i.e., flooring 610-2-1 of conference room 602 and flooring 610-2-2 of hallway 604), various walls 610-3 of conference room 602 and hallway 604, and windows 610-4 of conference room 602 and hallway 604.
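The three categories can be expressed as a small classification routine, sketched below: an object identified with particularity against a known-object library is "known," an object whose characteristics match a known type is "recognized," and everything else is "unknown." The matching criteria and identifiers are invented placeholders, not data from the disclosure.

    KNOWN_OBJECTS = {"brandx_conference_table_2015", "brandy_office_chair_m2"}   # identifiable with particularity
    KNOWN_TYPES = {"flooring", "door", "wall", "window"}                         # identifiable only by type

    def categorize(obj: dict) -> str:
        """Classify a detected object as 'known', 'recognized', or 'unknown'."""
        if obj.get("model_id") in KNOWN_OBJECTS:
            return "known"            # analogous to known objects 606
        if obj.get("type") in KNOWN_TYPES:
            return "recognized"       # analogous to recognized objects 610
        return "unknown"              # analogous to unknown objects 608

    detections = [
        {"model_id": "brandx_conference_table_2015"},
        {"type": "flooring"},
        {"type": "mail_cart"},
    ]
    print([categorize(d) for d in detections])   # -> ['known', 'recognized', 'unknown']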

FIGS. 7A and 7B illustrate how system 100 may dynamically customize original immersive scene 600 for presentation to a user. Specifically, FIG. 7A shows exemplary elements of a dynamic volumetric model 700 of original immersive scene 600, while FIG. 7B shows exemplary elements of a custom immersive scene 702 dynamically generated based on dynamic volumetric model 700.

“A user 704 may be located within conference room 602 while the door to the conference room (i.e., door 610-1-1) is closed. User 704 may wish to customize the scene using an artificial reality device (not explicitly shown). For example, user 704 may wish to customize the scene surrounding user 704 as an augmented reality experience based on a particular customization scheme such as a medieval customization scheme. Accordingly, user 704 may select the medieval customization scheme (e.g., by way of an interface provided by the artificial reality device) to initiate a scene customization session during which the medieval customization scheme is to be applied to original immersive scene 600.

“System 100 (e.g., implemented by the artificial reality device, by another device such as a provider server as described above in relation to FIG. 2, or distributed across both) may detect the selection by user 704 of the medieval customization scheme. In response, system 100 may initiate a scene customization session during which the medieval customization scheme is applied to original immersive scene 600 so as to dynamically generate custom immersive scene 702 and present it to user 704 by way of the artificial reality device. Specifically, during the scene customization session, system 100 may receive data representative of original immersive scene 600, may develop dynamic volumetric model 700 of original immersive scene 600 based on the data, and may identify within dynamic volumetric model 700 one or more of the objects included in original immersive scene 600 (e.g., known objects 606, recognized objects 610, etc.).

“System 100 may receive the data representative of original immersive scene 600 and may develop dynamic volumetric model 700 based on that data in any way that may serve a particular implementation. For example, as will be described in more detail below, system 100 may use sensors (e.g., depth sensors, cameras, etc.) of the artificial reality device used by user 704 to scan or otherwise capture data about the objects included in original immersive scene 600. For instance, system 100 may receive data such as three-dimensional positional data and texture data for the various objects. Based on the received data, system 100 may generate a 3D depth map (e.g., similar to a wireframe model) of the objects and may overlay the texture data onto the 3D depth map to develop the dynamic volumetric model. As shown in FIG. 7A, system 100 may thereby develop dynamic volumetric model 700 to include the objects of original immersive scene 600 such as conference table 606-1, office chairs 606-2, supply counter 608-1, flooring 610-2-1, and other objects within conference room 602.
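The "depth map plus texture overlay" step can be pictured as pairing each captured 3D point with a texture sample to yield textured points of the volumetric model, as in the toy sketch below. A real system would work with dense depth images and meshes; the data shapes here are assumptions chosen only for illustration.

    from typing import List, Tuple

    Point = Tuple[float, float, float]
    Color = Tuple[int, int, int]

    def build_textured_points(depth_points: List[Point], texture_samples: List[Color]):
        """Combine positional (depth) data with texture data into model points."""
        assert len(depth_points) == len(texture_samples), "one texture sample per captured point"
        return [{"position": p, "color": c} for p, c in zip(depth_points, texture_samples)]

    # Toy captured data for a corner of conference table 606-1.
    depth = [(0.0, 0.75, 1.0), (0.1, 0.75, 1.0), (0.0, 0.75, 1.1)]
    texture = [(120, 85, 60)] * 3                      # wood-like color samples
    model_fragment = build_textured_points(depth, texture)
    print(model_fragment[0])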

Because door 610-1-1 is closed, system 100 may not yet be able to detect data for hallway 604 so as to begin integrating objects of hallway 604 into dynamic volumetric model 700. System 100 may begin to model hallway 604 once door 610-1-1 is opened (e.g., so that user 704 can see out into hallway 604), or system 100 may have already modeled hallway 604, or part of it, based on previous modeling of hallway 604. Additional detail about how dynamic volumetric model 700 may be developed is described below in relation to FIGS. 8A-9B.

System 100 may identify one or more of the objects included in original immersive scene 600 (e.g., one or more of known objects 606, recognized objects 610, etc.) in any way that may serve a particular implementation. For example, based on data received from the artificial reality device associated with user 704 and/or incorporated into dynamic volumetric model 700, system 100 may identify one or more features of a particular object included in original immersive scene 600. For instance, system 100 may identify one or more predefined markers or unique features, such as the style, corners, size, or proportions, of conference table 606-1. System 100 may match the one or more features of the particular object (e.g., conference table 606-1) with one or more corresponding features of a known object (e.g., the brand and model of conference table 606-1) documented in an object library associated with the selected customization scheme. Based on the matching of the one or more features, system 100 may determine that the particular object is an instance of the known object documented in the object library. For example, system 100 may determine that conference table 606-1 is of a particular brand and model known to system 100.
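
As a rough illustration of the feature-matching step described above, the sketch below compares an observed feature vector against entries of a hypothetical object library and returns the best match within a threshold. The library entries, feature encoding, and threshold are assumptions for illustration only.

```python
import numpy as np

OBJECT_LIBRARY = {
    "acme_conference_table_x100": np.array([1.8, 0.9, 0.75, 4.0]),  # length, width, height, legs
    "acme_office_chair_s2":       np.array([0.6, 0.6, 1.1, 5.0]),
}

def identify_known_object(observed_features, library=OBJECT_LIBRARY, max_distance=0.2):
    """Return the library key whose features best match the observation, or None."""
    best_key, best_dist = None, float("inf")
    for key, ref in library.items():
        dist = np.linalg.norm(observed_features - ref) / np.linalg.norm(ref)
        if dist < best_dist:
            best_key, best_dist = key, dist
    return best_key if best_dist <= max_distance else None

print(identify_known_object(np.array([1.79, 0.92, 0.74, 4.0])))  # -> acme_conference_table_x100
```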

System 100 may also identify, based on the data representative of original immersive scene 600, one or more features of a particular object such as flooring 610-2-1. For example, system 100 may identify the carpeted texture of flooring 610-2-1, its large floor area (i.e., covering the entire room), its position relative to other objects (i.e., running horizontally beneath the other objects), and/or other features. System 100 may determine that these features of flooring 610-2-1 are characteristic of a particular object type included in the object library associated with the selected customization scheme, such as a flooring object type. Based on the determination that flooring 610-2-1 has one or more characteristics typical of the flooring object type, system 100 may determine that flooring 610-2-1 is of the flooring type and may thereby recognize flooring 610-2-1 as the floor of conference room 602.
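
The type-based recognition described in this paragraph could be sketched as a simple rule-based classifier over coarse characteristics such as footprint area, height above the floor, orientation, and texture. The specific rules and thresholds below are illustrative assumptions, not rules defined by the patent.

```python
def classify_object_type(footprint_area_m2, height_above_floor_m, is_horizontal, texture):
    """Map coarse characteristics of an object to a generic object type."""
    floor_like_texture = texture in {"carpet", "tile", "wood", "stone"}
    if is_horizontal and height_above_floor_m < 0.05 and footprint_area_m2 > 5.0 and floor_like_texture:
        return "flooring"          # large horizontal surface at floor level
    if not is_horizontal and footprint_area_m2 > 5.0:
        return "wall"              # large vertical surface
    return "unknown"

print(classify_object_type(24.0, 0.0, True, "carpet"))   # -> flooring
```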

System 100 may, in some examples, perform object recognition operations to automatically identify known objects (e.g., conference table 606-1) and/or recognized objects (e.g., flooring 610-2-1). In other examples, however, human assistance (e.g., from user 704, from a technician responsible for modeling the office space, etc.) may be used to aid object identification within a particular dynamic volumetric model such as dynamic volumetric model 700.

As shown in FIG. 7B, custom immersive scene 702 may be dynamically generated based on dynamic volumetric model 700 in any way that may serve a particular implementation. For example, system 100 may replace one or more of the objects included in dynamic volumetric model 700 with one or more custom objects in accordance with the selected customization scheme (e.g., the medieval customization scheme). System 100 may replace original objects with custom objects in any suitable manner. For instance, the replacement of original objects with custom objects may be performed in any of the ways described in co-pending U.S. patent application Ser. No. 15/141,717, the contents of which are incorporated herein by reference, and/or in any other suitable way. For example, known objects and recognized objects may be replaced with custom objects as follows.

Replacing a known object may, for example, involve replacing an instance of the known object with a custom object that is predetermined, within the selected customization scheme, to correspond to the known object being replaced. Take conference table 606-1 as one example. As described above, conference table 606-1 may be a known object that system 100 has identified as an instance of a particular brand and model of conference table for which a custom object has been designated under the medieval customization scheme. For instance, within the customization scheme, this brand and model of conference table may be matched with a rugged, long table that appears to be made of rough wooden slats and rusted iron cross strapping. Accordingly, as part of the dynamic generation of custom immersive scene 702, system 100 may replace conference table 606-1 with the rugged custom table designated in the medieval customization scheme. In custom immersive scene 702, cross-hatched shading illustrates various objects, such as conference table 606-1, office chairs 606-2, beverages 606-3, and others, for which specific custom objects are substituted in accordance with the medieval customization scheme. For example, in addition to the replacement of conference table 606-1 with the rugged table, office chairs 606-2 may be replaced with sturdy chairs of iron and wood, and beverages 606-3 may be replaced with rustic goblets containing liquids of different colors.

As another example, replacing a recognized object (e.g., one of recognized objects 610) may involve replacing the object with a custom object that corresponds to the object's known type. Take flooring 610-2-1 as one example. As described above, system 100 may have identified flooring 610-2-1 as being of the flooring object type, for which a custom object has been designated within the medieval customization scheme. For instance, within the customization scheme, flooring objects may be matched with large, stone-like tiles. Accordingly, as part of the dynamic generation of custom immersive scene 702, system 100 may replace flooring 610-2-1 with the stone flooring. As shown in custom immersive scene 702, vertically-lined shading highlights various objects, such as flooring 610-2-1, door 610-1-1, windows 610-4-1 and 610-4-2, walls 610-3, and others, for which custom objects are substituted in accordance with the medieval customization scheme. For example, in addition to the replacement of flooring 610-2-1 with stone flooring, walls 610-3 may be replaced with stone walls, door 610-1-1 may be replaced with a rugged door made of wood slats and iron cross strapping, and windows 610-4-1 and 610-4-2 may likewise be replaced.
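
The replacement logic of the two preceding paragraphs can be summarized as a two-level lookup: known objects map directly to predetermined custom objects, while recognized objects fall back to a mapping keyed by object type. The sketch below uses illustrative stand-ins for the medieval customization scheme; none of the entries are taken from the patent.

```python
MEDIEVAL_SCHEME = {
    "known_objects": {
        "acme_conference_table_x100": "rugged_iron_and_wood_table",
        "acme_office_chair_s2": "iron_and_wood_chair",
    },
    "object_types": {
        "flooring": "large_stone_tiles",
        "wall": "stone_wall",
        "door": "wood_slat_door_with_iron_strapping",
    },
}

def custom_object_for(scheme, known_object_id=None, object_type=None):
    """Return the custom object that should replace the original, or None if no match."""
    if known_object_id and known_object_id in scheme["known_objects"]:
        return scheme["known_objects"][known_object_id]
    if object_type and object_type in scheme["object_types"]:
        return scheme["object_types"][object_type]
    return None   # treated as an unknown object and left unreplaced

print(custom_object_for(MEDIEVAL_SCHEME, known_object_id="acme_conference_table_x100"))
print(custom_object_for(MEDIEVAL_SCHEME, object_type="flooring"))
```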

As noted above, certain unidentified objects (e.g., objects 608) included in original immersive scene 600 may not be replaced during the dynamic generation of custom immersive scene 702. For example, some objects, such as supply counter 608-1 of original immersive scene 600, may not be recognized or identified by system 100.

In some examples, system 100 may identify (e.g., within dynamic volumetric model 700 of original immersive scene 600) one or more "non-augmentable" objects included in original immersive scene 600. The one or more non-augmentable objects may correspond to predefined entries in a list of non-augmentable objects associated with the selected customization scheme. For example, based on preferences of user 704, safety regulations, privacy concerns, or other considerations, it may be desirable for certain objects not to be customized, replaced, augmented, or otherwise modified. For instance, traffic lights and/or other such indicators may not fit a medieval customization scheme, but leaving them unmodified may be important for safety. Accordingly, system 100 may abstain from replacing the non-augmentable objects with custom objects.

System 100 may also identify one or more known or recognized objects within original immersive scene 600 that do not correspond to any custom objects in the selected customization scheme. Such known or recognized objects may be treated as unknown objects and may not be replaced within custom immersive scene 702.
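
Putting the preceding paragraphs together, the decision of whether to replace a modeled object might look like the following sketch: non-augmentable objects and objects with no matching custom object are passed through unchanged. The non-augmentable list and scheme entries are assumed examples, not lists defined by the patent.

```python
NON_AUGMENTABLE = {"traffic_light", "fire_alarm", "exit_sign"}

def resolve_object(original_id, object_type, scheme, non_augmentable=NON_AUGMENTABLE):
    """Decide what the custom immersive scene should show for one modeled object."""
    if original_id in non_augmentable or object_type in non_augmentable:
        return original_id                                   # never replaced (safety/privacy)
    replacement = (scheme["known_objects"].get(original_id)
                   or scheme["object_types"].get(object_type))
    return replacement if replacement is not None else original_id  # no match: left unchanged

scheme = {"known_objects": {"water_fountain_606_4": "stone_well"},
          "object_types": {"flooring": "large_stone_tiles"}}
print(resolve_object("exit_sign", None, scheme))             # -> exit_sign (unchanged)
print(resolve_object("supply_counter_608_1", None, scheme))  # -> unchanged
print(resolve_object("water_fountain_606_4", None, scheme))  # -> stone_well
```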

As shown in FIG. 7B, unknown objects and other objects that are not replaced within custom immersive scene 702 are illustrated without shading to indicate that they remain unchanged (e.g., supply counter 608-1). In some examples, system 100 may perform a full scene replacement in which every object, or at least every known or recognized object, is replaced within custom immersive scene 702 so as to provide a fully immersive experience. In other examples, as shown, system 100 may perform only a partial replacement of the immersive scene, which may allow for a more natural or authentic artificial reality experience.

In certain examples, an original immersive scene may be modeled by a system other than system 100. In the case of a virtual reality experience, for instance, the original immersive scene may be modeled by a scene modeling system and transmitted to system 100, or the original immersive scene may be artificially generated rather than being a real-world scene. In augmented reality examples, the original immersive scene may be pre-scanned such that a volumetric model is generated before a scene customization session begins.

Dynamic volumetric model 700 may be developed incrementally and in real time as sensors (e.g., sensors of the artificial reality device of user 704) capture new regions of original immersive scene 600 that have not previously been captured and/or integrated into dynamic volumetric model 700. In certain examples, the data representing the real-world scene surrounding the user (e.g., original immersive scene 600 surrounding user 704) may be captured in real time exclusively by one or more sensors of the artificial reality device being used by the user to experience the scene customization session.

System 100 may incrementally develop dynamic volumetric model 700 in any suitable way. For example, system 100 may use a simultaneous localization and mapping algorithm to block out portions of original immersive scene 600 as user 704 comes across them. The volumetric data (e.g., 3D depth data, etc.) may be captured exclusively by sensors of the artificial reality device, even though the artificial reality device may not be capable of capturing all aspects of particular objects simultaneously. For instance, the artificial reality device may include stereoscopic cameras that facilitate depth capture of the objects being modeled. Alternatively, the artificial reality device may be configured to use a wobble of a single camera, or the fact that one or more cameras or other capture devices are repositioned at different locations and angles relative to the objects being modeled (e.g., as user 704 moves within original immersive scene 600), to facilitate depth capture and thereby develop the dynamic volumetric model.
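
One way to picture the incremental development described above is a sparse voxel map that grows as new world-space points are captured, so previously modeled regions persist while newly seen regions (e.g., hallway 604 once door 610-1-1 opens) are added. This sketch omits the pose estimation a real simultaneous localization and mapping pipeline would perform and uses random points as stand-ins for captured data.

```python
import numpy as np

class IncrementalVolumetricModel:
    def __init__(self, voxel_size=0.05):
        self.voxel_size = voxel_size
        self.voxels = {}                          # (i, j, k) -> representative color

    def integrate(self, points_world, colors):
        """Merge one frame of world-space points (N,3) and colors (N,3) into the model."""
        keys = np.floor(points_world / self.voxel_size).astype(int)
        for key, color in zip(map(tuple, keys), colors):
            self.voxels.setdefault(key, color)    # keep first observation per voxel

model = IncrementalVolumetricModel()
frame1 = np.random.rand(1000, 3) * 3.0               # stand-in for a conference room capture
frame2 = np.random.rand(1000, 3) * 3.0 + [5, 0, 0]   # stand-in for a hallway, seen later
model.integrate(frame1, np.zeros((1000, 3)))
model.integrate(frame2, np.zeros((1000, 3)))
print(len(model.voxels), "voxels modeled so far")
```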

To illustrate, FIG. 8A shows additional elements of dynamic volumetric model 700 that may be added incrementally as user 704 encounters (e.g., perceives, detects, etc.) new regions of original immersive scene 600. As shown in FIG. 8A, when door 610-1-1 is opened to reveal hallway 604, a viewable region 802 becomes visible. Viewable region 802 represents what user 704 and/or his or her artificial reality device may detect or perceive through the open doorway into hallway 604. As shown, dynamic volumetric model 700 may now include a portion of flooring 610-2 (i.e., the flooring of hallway 604) and water fountain 606-4.

Custom immersive scene 702 may also be dynamically generated and updated in accordance with the incremental development of dynamic volumetric model 700. To illustrate, FIG. 8B shows additional elements of custom immersive scene 702 that are dynamically generated based on the elements of dynamic volumetric model 700 shown in FIG. 8A. For example, door 610-1-2 may be replaced with a door similar to the one replacing door 610-1-1, or with a different type of door. For instance, door 610-1-2 may be a door to a men's restroom, which system 100 may determine based on a symbol posted on the door. Door 610-1-2 may therefore be replaced with a special door corresponding to men's restrooms in the medieval customization scheme, which may include a themed symbol indicating the purpose of the room. Similarly, water fountain 606-4 may be a known object that system 100 replaces, within custom immersive scene 702, with a custom object more in keeping with the medieval customization scheme.

FIG. 9A illustrates further incremental development of dynamic volumetric model 700. Specifically, FIG. 9A shows additional elements that may be added to dynamic volumetric model 700 as user 704 leaves conference room 602 and enters hallway 604. As shown in FIG. 9A, once user 704 has seen all of conference room 602 and hallway 604, all of the objects of original immersive scene 600 illustrated in FIG. 6 may be included in dynamic volumetric model 700. For example, as shown, dynamic volumetric model 700 may now include all of flooring 610-2, mail cart 608-2, plant 608-3, window 610-4-3, doors 610-1, and the other walls 610-3.

As with FIG. 8B, custom immersive scene 702 may continue to be dynamically generated and updated in accordance with the incremental development of dynamic volumetric model 700. To illustrate, FIG. 9B shows additional elements of custom immersive scene 702 that are dynamically generated based on the elements of dynamic volumetric model 700 shown in FIG. 9A. Mail cart 608-2 and plant 608-3 are unknown objects, so system 100 may not replace them (i.e., as indicated by the absence of shading in FIG. 9B).

In some examples, system 100 may determine that an original immersive scene is a real-world scene included in a scene library associated with the selected customization scheme. System 100 may then dynamically generate the custom immersive scene, in accordance with the selected customization scheme, by customizing the original immersive scene according to at least one preconfigured customization included in the scene library.

For example, user 704 may go every day to an office that includes original immersive scene 600. System 100 may determine (e.g., when user 704 causes his or her artificial reality device or another device to start a new scene customization session) that original immersive scene 600 is recorded in a scene library corresponding to the customization scheme selected by user 704. Various preconfigured customizations may be described, detailed, or otherwise documented within the scene library to allow original immersive scene 600 to be customized in a specific way. For example, user 704 may have created preconfigured customizations corresponding to specific elements of original immersive scene 600 by documenting them in the scene library. Although most of the floors and walls of the office space of original immersive scene 600 may be made to appear as stone, user 704 may create a preconfigured customization for the walls and floors of his or her own office so that they appear to be gold-plated. As another example, although most office chairs may be replaced with sturdy chairs made of iron and wood, user 704 may create a preconfigured customization that makes a specific chair, such as the chair in his or her office, appear to be a jewel-encrusted throne.

Preconfigured customizations may also be used to implement various other examples. For instance, a user might use a preconfigured customization to place a piece of art in a specific location in his or her home, to modify the design of a building, or to modify any other object that the user encounters in daily life.

System 100 may identify an object for which a preconfigured customization has been designated in any suitable way. For example, system 100 may use location sensors (e.g., GPS sensors, pedometers, accelerometers, etc.), compasses, and/or other suitable sensors to identify the real-world scene, or part thereof, that the user is viewing, and thereby determine whether preconfigured customizations are to be applied to objects within that real-world scene.
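
A location-based lookup of this kind might be sketched as follows: the device's reported coordinates are compared against the locations of scenes recorded in a scene library, and any preconfigured customizations for a matching scene are returned. The library entries, coordinates, and matching radius are illustrative assumptions.

```python
import math

SCENE_LIBRARY = {
    "users_office": {
        "location": (40.7357, -74.1724),          # lat, lon of the known scene (hypothetical)
        "preconfigured": {"office_wall": "gold_plated_wall",
                          "office_chair": "jewel_encrusted_throne"},
    },
}

def preconfigured_customizations(lat, lon, library=SCENE_LIBRARY, radius_m=100.0):
    """Return preconfigured customizations if the user is within a known scene, else {}."""
    for scene in library.values():
        slat, slon = scene["location"]
        # crude equirectangular distance; adequate for a sketch at building scale
        dx = (lon - slon) * 111_320 * math.cos(math.radians(slat))
        dy = (lat - slat) * 111_320
        if math.hypot(dx, dy) <= radius_m:
            return scene["preconfigured"]
    return {}

print(preconfigured_customizations(40.7358, -74.1723))
```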

In some examples, a user may share a customization scheme with another user so that both users may experience the same custom immersive scene together. To illustrate, FIG. 10 shows an exemplary scenario 1000 in which a plurality of users experience original immersive scene 600 together. As shown in FIG. 10, user 704 may be located in conference room 602 and may be engaged in a scene customization session in which the medieval customization scheme is applied to original immersive scene 600. While the scene customization session is ongoing, an additional user 1002 (e.g., another person using an artificial reality device) may enter original immersive scene 600 by entering conference room 602, as shown. System 100 (e.g., which may include or be implemented by the artificial reality device being used by user 704) may determine that user 1002 has entered original immersive scene 600. In response to that determination, system 100 may provide, to the artificial reality device of user 1002, data representative of the custom immersive scene being presented to user 704 (e.g., custom immersive scene 702).

The data representative of the custom immersive scene may be configured to enable another dynamic generation and presentation of the custom immersive scene to user 1002 by way of the artificial reality device of user 1002. For example, if user 704 has selected the medieval customization scheme for an augmented reality experience, the artificial reality device of user 704 may detect the presence of user 1002 and transmit data representative of the medieval customization scheme to the device of user 1002 so that user 1002 may experience custom immersive scene 702 in the same way as user 704. User 1002 may, for instance, accept an invitation to enter custom immersive scene 702, allowing both users to experience custom immersive scene 702 and to appear to one another to be part of the same medieval reality.

Alternatively, user 1002 may decline to participate in custom immersive scene 702 with user 704 and may instead initiate a scene customization session in which a different customization scheme (e.g., a futuristic customization scheme) is applied to original immersive scene 600. In this case, user 1002 may appear to user 704 to be in a medieval world, while user 704 may simultaneously appear to user 1002 to be in a futuristic world, even though users 704 and 1002 are both actually located in conference room 602 of original immersive scene 600.

System 100 may provide the data representative of the custom immersive scene being presented to user 704 in any way that may serve a particular implementation. For example, system 100 may send user 1002 an invitation to use the same customization scheme as user 704. System 100 may automatically populate a "friend list" of users located in the vicinity of user 704 so that user 704 can select which users should receive such invitations. As another example, multiple users may "enter" a custom immersive scene associated with a particular customization scheme by signing in to the scene customization session, in a manner analogous to conference call participants calling in to a virtual conference room. As yet another example, data associated with a first custom immersive scene experienced by user 704 (e.g., data representative of a dynamic volumetric model and/or of associated original or custom objects that may be replaced within the dynamic volumetric model during a scene customization session) may be shared with a user 1004, allowing a second custom immersive scene to be created based on the first so that the second user may experience it.
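
The sharing flow described above could be sketched as a small invitation message carrying the customization scheme and related data that a second artificial reality device would need to render the same custom immersive scene. The message shape, field names, and transport below are assumptions; a real system would route such data through the provider server and network.

```python
import json

def build_scene_invitation(host_user, scheme_id, volumetric_model_ref, replacements):
    """Package the data another artificial reality device needs to join the session."""
    return json.dumps({
        "type": "scene_customization_invite",
        "host": host_user,
        "customization_scheme": scheme_id,
        "volumetric_model": volumetric_model_ref,     # e.g., an identifier for a shared model
        "object_replacements": replacements,          # original object -> custom object
    })

def handle_invitation(message, accept):
    invite = json.loads(message)
    if accept:
        return f"joining '{invite['customization_scheme']}' session hosted by {invite['host']}"
    return "declined; starting an independent customization session instead"

msg = build_scene_invitation("user_704", "medieval", "model_700",
                             {"door_610_1_1": "wood_slat_door"})
print(handle_invitation(msg, accept=True))
```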

FIG. 11 illustrates an exemplary method 1100 for dynamically customizing a scene for presentation to a user. While FIG. 11 illustrates exemplary operations according to one embodiment, other embodiments may omit, add to, reorder, and/or modify any of the operations shown in FIG. 11. One or more of the operations shown in FIG. 11 may be performed by system 100 and/or any implementation thereof.

In operation 1102, a scene customization system may detect a selection by a user of a customization scheme to be applied to an original immersive scene during a scene customization session experienced by the user by way of an artificial reality device. Operation 1102 may be performed in any of the ways described herein.

In operation 1104, the scene customization system may dynamically generate a custom immersive scene based on the application, to the original immersive scene, of the selected customization scheme (i.e., the customization scheme whose selection was detected in operation 1102). Operation 1104 may be performed in any of the ways described herein. For example, as shown in FIG. 11, operation 1104 may be performed by the scene customization system performing operations 1106 through 1112, described below. In some examples, operation 1104 may be performed continuously by the scene customization system continuously performing operations 1106 through 1112.

In operation 1106, the scene customization system may receive data representative of the original immersive scene. Operation 1106 may be performed in any of the ways described herein.

In operation 1108, the scene customization system may develop, based on the received data, a dynamic volumetric model of the original immersive scene. Operation 1108 may be performed in any of the ways described herein.

In operation 1110, the scene customization system may identify one or more objects included in the original immersive scene. For example, the scene customization system may identify the one or more objects within the dynamic volumetric model of the original immersive scene. Operation 1110 may be performed in any of the ways described herein.

In operation 1112, the scene customization system may replace the one or more objects identified in operation 1110 with one or more custom objects. For example, the scene customization system may replace the one or more objects within the dynamic volumetric model in accordance with the applied customization scheme. Operation 1112 may be performed in any of the ways described herein.

In operation 1114, the scene customization system may present the custom immersive scene to the user by way of the artificial reality device. For example, the scene customization system may present the custom immersive scene to the user during the scene customization session. Operation 1114 may be performed in any of the ways described herein.
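
For orientation, the operations of method 1100 can be pictured as a single processing loop, as in the sketch below. The device object and the helper functions are placeholder stubs standing in for operations 1106 through 1114; a real implementation would connect them to the sensors, object libraries, and rendering described earlier.

```python
class FakeDevice:
    def __init__(self, frames): self.frames = list(frames)
    def session_active(self): return bool(self.frames)
    def capture_scene_data(self): return self.frames.pop(0)       # stands in for operation 1106
    def present(self, scene): print("presenting:", scene)          # stands in for operation 1114

def develop_volumetric_model(model, data):                         # stands in for operation 1108
    model.setdefault("objects", []).extend(data); return model

def identify_objects(model, scheme):                               # stands in for operation 1110
    return [o for o in model["objects"] if o in scheme]

def replace_objects(identified, scheme):                           # stands in for operation 1112
    return {o: scheme[o] for o in identified}

def run_scene_customization_session(device, scheme):               # loop corresponding to 1104
    model = {}
    while device.session_active():
        data = device.capture_scene_data()
        model = develop_volumetric_model(model, data)
        identified = identify_objects(model, scheme)
        device.present(replace_objects(identified, scheme))

run_scene_customization_session(FakeDevice([["table"], ["chair"]]),
                                {"table": "rugged_table", "chair": "iron_chair"})
```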

FIG. 12 illustrates an exemplary method 1200 for dynamically customizing a scene for presentation to a user. While FIG. 12 illustrates exemplary operations according to one embodiment, other embodiments may omit, add to, reorder, and/or modify any of the operations shown in FIG. 12. One or more of the operations shown in FIG. 12 may be performed by system 100 and/or any implementation thereof.

In operation 1202, a scene customization system may detect a selection by a user of a customization scheme from a library of customization schemes provided by an augmented reality provider, the customization scheme to be applied to a real-world scene surrounding the user during a scene customization session experienced by the user by way of an immersive headset device. Operation 1202 may be performed in any of the ways described herein.

Summary for “Methods, systems and methods for customizing scenes for presentation to users”

Herein are described methods and systems that dynamically customize a scene to present to a user. A?scene? is defined herein. A scene or an immersive scene is a scene. may refer to any real-world or artificially-generated (i.e., fully or partially virtual) environment or world in which a user may be located and/or which the user may experience. More specifically, a place or environment within the real world (i.e., as opposed to a virtual or artificially-generated world) in which the user may be located or desire to be located may be referred to herein as a ?real-world scene.? Real-world scenes can refer to any real world scenery, real locations, or real-world events (e.g. live events). Other places or environments that exist in reality, rather than only virtual. A real-world scene could include an indoor or outdoor location, such as a street, museum, scenic landscape, outer space, the surface of another planet or other real-world locations.

In some examples, a real-world scene may refer to the scene that surrounds the user, while in other examples a real-world scene may be a scene that is distinct from (e.g., remote from) the user's actual surroundings. For instance, a real-world scene may be associated with a real-world event such as a sporting event (e.g., a basketball game, an Olympic event, etc.), a concert (e.g., a concert in a large venue, a classical chamber concert in an intimate setting, etc.), a theatrical performance (e.g., a Broadway musical, an outdoor pageant, etc.), a large-scale celebration (e.g., New Year's Eve in Times Square, Mardi Gras, etc.), a race (e.g., a stock-car race, a horse race, etc.), a political event (e.g., a presidential debate, a political convention, etc.), or any other real-world event that may be of interest to potential users. A real-world scene may also serve as the setting for a fictionalized scene (e.g., a virtual reality television show or movie) or as any other scene at any indoor or outdoor location that may serve a particular implementation.

While some immersive scenes may be based on real-world scenes, other immersive scenes may be at least partially artificially generated. For example, some immersive scenes may be entirely virtual (e.g., computer-generated virtual environments used in video games, etc.), while other immersive scenes may be based on real-world scenes but may include one or more virtual elements or objects that have been added to, or have replaced, real objects in the real-world scene. For instance, custom immersive scenes may be dynamically generated based on original immersive scenes (e.g., real-world scenes, artificially generated scenes, etc.) by replacing original objects included in the original immersive scenes with custom objects using the methods and systems described herein.

As used herein, an "object" (e.g., an original object, a custom object, etc.) may include any living or inanimate object included in a scene. For example, in a scene that includes a conference room in an office setting, objects may include chairs around the conference table, people in the room and their personal effects (e.g., clothing, papers, briefcases, coffee cups, etc.), and the floor, ceiling, walls, doors, and windows of the room.

A scene customization system may dynamically customize a scene (e.g., customize an original immersive scene to dynamically generate a custom immersive scene) for presentation to a user in any manner described herein or as may serve a particular implementation. For example, the scene customization system may detect a selection by the user of a customization scheme to be applied to an original immersive scene during a scene customization session experienced by the user by way of an artificial reality device. The artificial reality device may be implemented by an immersive headset device or by any other device capable of providing an artificial reality experience (e.g., a virtual reality experience, an augmented reality experience, etc.), as described below.

As used herein, a "customization scheme" may refer to any theme, concept, motif, or filter according to which an immersive scene may be customized. Customization schemes may be based on, for example, themes related to historical periods (e.g., a medieval Europe scheme, a 19th-century New York scheme, etc.); time periods that are popular, nostalgic, or otherwise significant (e.g., a '50s diner scheme, a '20s club scheme, etc.); popular fiction (e.g., a Star Trek USS Enterprise scheme, a Harry Potter Hogwarts scheme, etc.); particular aesthetics (e.g., a futuristic scheme, a fantasy castle scheme, a beach scheme, etc.); real-world locations (e.g., a Venice, Italy scheme, a scheme based on another planet, etc.); and/or any other motif, theme, or concept that may serve a particular implementation. A customization scheme may also be implemented in other ways. For example, a customization scheme may be specific to a particular location, such as a user's yard, home, or office, rather than being based on a theme such as those listed above. Such a customization scheme may include art (e.g., favorite paintings or sculptures, etc.), design elements (e.g., old English brick, brushed aluminum paneling, etc.), and/or any other objects or elements that may not actually exist at the particular location but that the user wishes to perceive at the location during scene customization sessions.

As used herein, a "scene customization session" may refer to a period of time during which a particular customization scheme is applied to an immersive scene. A scene customization session may last any length of time that may serve a particular implementation. For example, a user may experience the real world through an immersive headset device worn throughout the day, such that the user's entire home, commute, office, and so forth are affected by the customization scheme, giving the user the sensation of living and working in Hogwarts or in medieval Europe. As another example, a scene customization session may last an hour or less so as to provide a temporary diversion similar to traditional media content such as movies, television, and/or video games. In some examples, multiple scene customization sessions may be linked or may follow one another in a sequence.

Once the user has selected a customization scheme, the scene customization system may dynamically generate a custom immersive scene based on the application of the selected customization scheme to the original immersive scene. For example, the scene customization system may dynamically generate the custom immersive scene by receiving data representative of the original immersive scene. The original immersive scene may include the real-world scene surrounding the user in the case of an augmented reality experience, a distinct real-world scene or another scene (e.g., an artificially-generated scene) in the case of a virtual reality experience, or any other original immersive scene as may serve a particular implementation. Based on the received data, the scene customization system may develop (e.g., create, update, and/or continuously maintain) a dynamic volumetric model of the original immersive scene. The scene customization system may identify one or more objects included in the original immersive scene and may replace the identified objects with custom objects within the dynamic volumetric model in accordance with the applied customization scheme. The scene customization system may then present the dynamically generated custom immersive scene to the user by way of the artificial reality device during the scene customization session. Examples of dynamically generating and presenting a custom immersive scene during a scene customization session are described below.

By dynamically customizing scenes for presentation to users according to the methods and systems described herein, a scene customization system may provide various benefits. For example, users may feel less inclined than before to segregate artificial reality experiences from real-life experiences and may instead be more open to integrating artificial reality into their lives to enhance everyday experiences. For instance, users may personalize certain aspects of their lives to make them more attractive and/or more diverse. As one example, rather than commuting on the same train or working in the same cubicle every day, users may customize their commutes and office settings to create the illusion of being somewhere different each day, giving them the sense of traveling through, and working in, environments more beautiful and interesting than those of their actual routines. Even though virtual reality experiences may not integrate with a user's daily life to the same extent as augmented reality experiences, similar benefits may still apply. For example, rather than experiencing a football game broadcast in virtual reality from the perspective of the actual scene in which the game takes place (e.g., the field and the stadium), a user may experience the virtual reality football game as though the stadium were perched atop a mountain surrounded by harrowing cliffs, or with any other customization that may serve a particular implementation.

The systems and methods described herein may provide more than aesthetic, entertainment, or experience-diversification benefits; they may also provide social and/or functional benefits. For example, some customization schemes may include informational elements, such as content from a newspaper or website, that replace or augment elements of the original immersive scene in order to provide more information to the user. For instance, the surface of the table at which the user eats breakfast may be replaced with top news stories that the user can peruse during the meal. As described below, dynamic scene customization may also facilitate and/or enhance social interactions when users share customization schemes and/or scene customization sessions, co-inhabit custom immersive scenes based on shared customization schemes, and so forth.

Various embodiments will now be described in more detail with reference to the figures. The disclosed methods and systems may provide some or all of the benefits mentioned above, as well as additional benefits.

FIG. 1 illustrates an exemplary scene customization system 100 ("system 100") that dynamically customizes scenes for presentation to users. As shown, system 100 may include, without limitation, a management facility 102, a custom immersive scene generation facility 104, and a storage facility 106 selectively and communicatively coupled to one another. While facilities 102 through 106 are shown as separate facilities in FIG. 1, facilities 102 through 106 may be combined (e.g., combined into a single facility) or divided into more facilities as may serve a particular implementation.

Management facility 102 may receive, generate, analyze, present, and/or otherwise process data (e.g., data representative of user input, data stored in storage facility 106, data generated by management facility 102, etc.) to facilitate the dynamic customization of a scene for presentation to a user in any way that may serve a particular implementation. For example, management facility 102 may detect a selection by a user of a customization scheme to be applied to an original immersive scene during a scene customization session by receiving and analyzing user input. Management facility 102 may also generate and/or receive data representative of the original immersive scene to which the customization scheme is to be applied, as may serve a particular implementation. Examples of data representative of an original immersive scene that management facility 102 may receive or generate are described below. Management facility 102 may further present to the user (e.g., by way of an artificial reality device such as an immersive headset device or a mobile computing device) a custom immersive scene dynamically generated by custom immersive scene generation facility 104. Management facility 102 may also perform any other operations that may be needed to implement the methods and systems described herein.

Custom immersive scene generation facility 104 may generate a custom immersive scene based on an original immersive scene in any manner described herein or as may serve a particular implementation. For example, custom immersive scene generation facility 104 may apply a customization scheme (e.g., a customization scheme whose selection is detected by management facility 102) to an original immersive scene (e.g., an original immersive scene generated or received by management facility 102) to generate the custom immersive scene, which may then be transmitted back to management facility 102 (e.g., by way of storage facility 106) for presentation to the user. Custom immersive scene generation facility 104 may receive data representative of the original immersive scene, develop a dynamic volumetric model of the original immersive scene, identify one or more objects within the model, replace the one or more objects with one or more custom objects, and/or perform any other operations as may be needed to generate the custom immersive scene.

In some examples, custom immersive scene generation facility 104 may perform these and/or other operations in real time so that the user can be presented with an up-to-date and timely experience (e.g., one that correlates with the events occurring in the scene surrounding the user). Although data processing and distribution may take a finite amount of time, such that a custom immersive scene may not be perfectly synchronized with the original immersive scene upon which it is based, operations described herein are considered to be performed "in real time" when they are performed immediately and without undue delay. Accordingly, a user may be said to experience a custom immersive scene in real time even when particular occurrences within the custom immersive scene take place only a short time after the corresponding real-world occurrences.

To support real-time dynamic volumetric modeling and the experiencing of custom immersive scenes based on real-world immersive scenes, custom immersive scene generation facility 104 may include any configuration of hardware resources needed to provide the processing required for real-time creation of complex custom immersive scenes based on data representative of dynamic volumetric models of original immersive scenes. In some examples, such hardware resources may include multiple servers with multiple processing units. In other implementations, the hardware resources for dynamically generating the custom immersive scene may be integrated into an artificial reality device such as an immersive headset device, a mobile computing device, or the like. Additional details about custom immersive scene generation facility 104, including examples of the operations mentioned above, are described below.

Storage facility 106 may contain and maintain any data received, generated, managed, maintained, presented, or otherwise used by facilities 102 and 104 in a particular implementation. For example, storage facility 106 may include library data 108, which may include data related to predefined or custom (e.g., user-defined) customization schemes, and/or to preconfigured customizations, that may be applied to original immersive scenes. For instance, an artificial reality provider (e.g., a virtual reality content provider, an augmented reality content provider, etc.) may provide a library of customization schemes such as those described herein. Some customization schemes may be modified or defined by the user. For example, the library may include a basic or generic medieval customization scheme as well as a customized medieval customization scheme that the user has created or adapted from the scheme provided by the provider.

Library data 108 may also include data associated with each customization scheme in the library. For example, each customization scheme may be associated with data representative of models of known objects and/or known object types, which may be used to identify objects included in original immersive scenes and to determine whether and how custom objects should replace them. Each customization scheme may also be associated with data representative of the custom objects that may replace the original objects within custom immersive scenes. For instance, library data 108 may contain a model of a known object (e.g., a car of a specific make, model, and year) along with a corresponding custom object (e.g., a vehicle that matches the customization scheme) that is used to replace the known object within the custom immersive scene when the known object is identified by custom immersive scene generation facility 104. Library data 108 may also contain any other data that may serve a particular implementation, such as data related to preconfigured customizations that may be applied to particular real-world scenes.
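
One way library data 108 might be organized, purely as an illustrative sketch, is as per-scheme records holding the models used to identify known objects, the custom objects that replace them, and any type-level or preconfigured customizations. The field names and example entries below are assumptions, not the patent's data format.

```python
from dataclasses import dataclass, field

@dataclass
class KnownObjectModel:
    object_id: str            # e.g., a specific make/model/year of a vehicle
    feature_template: list    # data used to recognize the object within a scene
    custom_object_asset: str  # asset that replaces it under this scheme

@dataclass
class CustomizationSchemeEntry:
    scheme_id: str
    known_objects: list = field(default_factory=list)
    type_replacements: dict = field(default_factory=dict)
    preconfigured: dict = field(default_factory=dict)     # scene-specific customizations

library_data_108 = [
    CustomizationSchemeEntry(
        scheme_id="medieval",
        known_objects=[KnownObjectModel("sedan_2015_model_x", [4.6, 1.8, 1.4],
                                        "scheme_matched_vehicle")],
        type_replacements={"flooring": "large_stone_tiles"},
        preconfigured={"users_office_chair": "jewel_encrusted_throne"},
    )
]
print(library_data_108[0].scheme_id)
```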

As shown in FIG. 1, storage facility 106 may also include original immersive scene data 110 and custom immersive scene data 112. Original immersive scene data 110 may include data representative of an original immersive scene. This data may be received from scene capture devices (e.g., cameras, three-dimensional ("3D") depth modeling devices, etc.), which may, for instance, be integrated into management facility 102. The scene capture devices may, for example, capture data about the location and appearance of objects included in an original immersive scene surrounding the user, and this data may then be used (e.g., by custom immersive scene generation facility 104) to develop a dynamic volumetric model that includes the objects. Original immersive scene data 110 may include raw data received from the original immersive scene as well as data representative of the dynamic volumetric model of the original immersive scene developed (e.g., created, maintained, and updated) by custom immersive scene generation facility 104. Original immersive scene data 110 may also include locational coordinates and/or any other data related to real-world scenes.

Similarly, custom immersive scene data 112 may include data representative of the custom immersive scene generated by applying a customization scheme and/or one or more preconfigured customizations to the original immersive scene. For example, custom immersive scene data 112 may include data representative of the custom objects used to replace the original objects within the dynamic volumetric model.

FIG. 2 illustrates an exemplary configuration 200 in which system 100 dynamically customizes a scene for presentation to an exemplary user. Configuration 200 includes an artificial reality device 202 located within a real-world scene 204. Artificial reality device 202 is associated with (e.g., being used by) a user 206 also located within real-world scene 204. Artificial reality device 202 is further communicatively coupled, by way of a network 208, with an artificial reality provider server 210 ("provider server 210"), which may be remote from artificial reality device 202 and user 206 (e.g., at a different location from real-world scene 204).

Configuration 200 may include an exemplary implementation of system 100, which may dynamically customize a scene for presentation to user 206. For example, system 100 may be implemented by hardware and/or software of artificial reality device 202, by hardware and/or software of provider server 210, or may be distributed across both provider server 210 and artificial reality device 202, communicating by way of network 208.

The implementation of system 100 in configuration 200 may dynamically customize a scene for presentation to user 206 as part of an augmented reality experience. As shown, the original immersive scene may include real-world scene 204 surrounding user 206 during the scene customization session. Accordingly, system 100 in configuration 200 may receive the data representative of the original immersive scene by, for instance, directly capturing data representative of real-world scene 204 during the scene customization session. For example, artificial reality device 202 may include hardware components, such as one or more cameras, 3D depth modeling equipment, and the like, configured to capture data representative of real-world scene 204 (e.g., by detecting light reflecting from objects within real-world scene 204). As will be described below, in some examples the data representative of the real-world scene surrounding the user (i.e., the data upon which the creation and continuous updating of the dynamic volumetric model may be based) may be captured exclusively by one or more sensors of artificial reality device 202 being used by user 206 to experience the scene customization session.

Based on the directly captured data (e.g., video captured by artificial reality device 202, etc.), system 100 may develop a dynamic volumetric model of the original immersive scene surrounding user 206 (i.e., real-world scene 204). For example, system 100 may create, and continuously update, a dynamic volumetric model of real-world scene 204 surrounding user 206 based on the data representative of real-world scene 204 captured by artificial reality device 202 throughout the scene customization session.

Once system 100 develops, or begins developing (i.e., creating, updating, maintaining, etc.), the dynamic volumetric model of real-world scene 204, system 100 may identify and replace objects within the dynamic volumetric model to generate a custom immersive scene, and may present the custom immersive scene to user 206, as will be described and illustrated in more detail below. Each of the elements shown in FIG. 2 will now be described.

Artificial reality device 202 may include any device or combination of devices used by user 206 to access and experience a custom immersive scene generated by system 100 (i.e., to have the custom immersive scene presented to him or her). For example, artificial reality device 202 may include an immersive headset device (e.g., a head-mounted virtual reality device, a head-mounted augmented reality device, a virtual reality gaming device, etc.), a personal computer device (e.g., a desktop computer, a laptop computer, etc.), a mobile or wireless device (e.g., a smartphone, a tablet device, a mobile reader, etc.), or any other device or combination of devices that may serve a particular implementation to present the custom immersive scene. Examples of different types of artificial reality devices that may implement artificial reality device 202 are described below. As will be described, different types of artificial reality devices may provide different types of experiences and/or levels of immersion for user 206.

In configuration 200, real-world scene 204 may represent any of the real-world scenes described herein or any other suitable real-world scene. In particular, real-world scene 204 may be the scene or environment in which user 206 and artificial reality device 202 are currently located and, as such, may change as artificial reality device 202 and user 206 move through the world. For example, real-world scene 204 may be the office of user 206 while user 206 is at work, the train and/or the real-world environment surrounding the train as user 206 commutes home from work, the home of user 206 when user 206 arrives home, and so forth. Real-world scene 204 may include or constitute the original immersive scene upon which system 100 bases a dynamically generated custom immersive scene. In some examples, real-world scene 204 may include only a single room in which user 206 is located, while in other examples real-world scene 204 may include a larger area (e.g., an entire office building in which user 206 works, a large outdoor area in which user 206 is located, etc.).

Network 208 may include any provider-specific network (e.g., a cable or satellite carrier network, a mobile telephone network, etc.), the Internet, a wide area network, or any other suitable network, and data may flow between provider server 210 and artificial reality device 202 using any communication technologies, devices, media, and protocols as may serve a particular implementation. For example, artificial reality device 202 may communicate with provider server 210 using any suitable communication technologies, devices, media, and/or protocols supportive of data communications, including, but not limited to, socket connections, Ethernet, data bus technologies, communication devices, Transmission Control Protocol ("TCP"), Internet Protocol ("IP"), Telnet, Hypertext Transfer Protocol ("HTTP"), HTTPS, Session Initiation Protocol ("SIP"), Simple Object Access Protocol ("SOAP"), Extensible Markup Language ("XML") and variations thereof, Real-Time Transport Protocol ("RTP"), User Datagram Protocol ("UDP"), Global System for Mobile Communications ("GSM") technologies, Code Division Multiple Access ("CDMA") technologies, 4G Long Term Evolution ("LTE"), Voice over IP ("VoIP"), Voice over LTE ("VoLTE"), WiMax, Time Division Multiple Access ("TDMA") technologies, Short Message Service ("SMS"), Multimedia Message Service ("MMS"), radio frequency ("RF") signaling technologies, wireless communication technologies (e.g., Bluetooth, Wi-Fi, etc.), and/or other suitable communications technologies. While only one network 208 is shown to interconnect artificial reality device 202 and provider server 210 in configuration 200, it will be recognized that artificial reality device 202 and provider server 210 may intercommunicate by way of multiple interconnected wired and/or wireless networks as may serve a particular implementation.

Provider server 210 may include one or more servers or other computing devices associated with (e.g., provided and/or managed by) an artificial reality content provider (e.g., a network provider, cable provider, satellite provider, Internet service provider, provider of artificial reality mobile applications, etc.). In some examples, provider server 210 may be included within system 100 and may perform, or facilitate artificial reality device 202 in performing, any of the operations described herein. For example, provider server 210 may communicate with artificial reality device 202 to obtain data representative of real-world scene 204 and may include processing resources configured to develop, or to facilitate the development of, the dynamic volumetric model of real-world scene 204. Provider server 210 may also include processing resources configured to identify and replace objects within the dynamic volumetric model and/or to perform other operations associated with dynamically generating the custom immersive scene for presentation to user 206 by way of artificial reality device 202. In other examples, provider server 210 may be separate from (e.g., and communicatively coupled to) system 100 and may facilitate system 100 in dynamically customizing a scene for presentation to user 206 in any way that may serve a particular implementation. In yet other examples, provider server 210 may implement system 100 entirely and may present the custom immersive scene to user 206 by providing data representative of the custom immersive scene to artificial reality device 202 for presentation to the user.

Provider server 210 may also store, provide, and/or facilitate the creation or modification of customization schemes that may be applied to real-world scene 204. For example, provider server 210 may store predefined (e.g., provider-defined) and/or user-defined customization schemes along with data associated with the customization schemes as described herein (e.g., models of known objects, custom objects, etc.). Provider server 210 may further facilitate the selection of, access to, modification of, creation of, and/or other interactions with the customization schemes (e.g., by way of an interface presented on artificial reality device 202).

FIG. 3 illustrates an exemplary configuration 300 in which system 100 dynamically customizes a scene for presentation to user 206. Configuration 300 may have certain similarities with configuration 200, including the inclusion of certain elements of configuration 200. For example, configuration 300 includes artificial reality device 202, which is located within real-world scene 204. As with configuration 200, artificial reality device 202 is associated with user 206, who is also located within real-world scene 204. Artificial reality device 202 is likewise communicatively coupled, by way of network 208, with provider server 210, which may be remote from artificial reality device 202 and user 206 (e.g., at a different location from real-world scene 204).

Like configuration 200, configuration 300 may include an exemplary implementation of system 100 that may dynamically customize a scene for presentation to user 206. For example, system 100 may be implemented by hardware and/or software of artificial reality device 202, by hardware and/or software of provider server 210, or may be distributed across both provider server 210 and artificial reality device 202, communicating by way of network 208. In some embodiments, the implementation of system 100 may further include, or be communicatively coupled with (e.g., by way of network 208), a camera 302.

In contrast to configuration 200, in which the implementation of system 100 may be configured to present an augmented reality experience to user 206, the implementation of system 100 shown in configuration 300 may be configured to dynamically customize a scene for presentation to user 206 as part of a virtual reality experience. As shown, the original immersive scene may include, or may be based on, a real-world scene 304, which may be distinct from real-world scene 204 surrounding the user during the scene customization session. In other words, user 206 may experience an immersive scene based on a real-world scene other than the one in which user 206 is actually located. For example, while real-world scene 204 may be associated with the home or workplace of user 206, real-world scene 304 may be associated with a real-world event (e.g., a concert, a sporting event, etc.), a fictionalized program (e.g., a virtual reality television show, etc.), an exotic place that is difficult or costly for user 206 to reach (e.g., another country, an underwater expedition, etc.), or another real-world scene that may be of interest to user 206. In some examples, the virtual reality experience provided by configuration 300 may include an artificially-generated scene in place of real-world scene 304, such that camera 302 may not be employed.

In the virtual reality example of configuration 300, system 100 may receive the data representative of the original immersive scene (i.e., real-world scene 304 rather than real-world scene 204) and may develop the dynamic volumetric model differently than described above for the augmented reality example of configuration 200. Rather than receiving the data representative of the original immersive scene by directly capturing real-world scene 304, which may be impractical or impossible, system 100 may receive the data from a scene modeling system that has elements located in the vicinity of real-world scene 304 and that captures data representative of real-world scene 304.

For example, the scene modeling system may include camera 302 and, possibly, one or more additional cameras or other equipment (e.g., hardware, software, etc.) for capturing data representative of real-world scene 304 and/or for modeling real-world scene 304. Components of the scene modeling system, including camera 302, may be located in the vicinity of (e.g., within) real-world scene 304 and may capture and transmit data representative of real-world scene 304 over network 208 to be received by system 100 (e.g., by provider server 210 and/or artificial reality device 202). The implementation of system 100 in configuration 300 may also develop the dynamic volumetric model of the original immersive scene (i.e., real-world scene 304) differently than the implementation of system 100 in configuration 200. Specifically, rather than creating and continuously updating the dynamic volumetric model itself, system 100 in configuration 300 may receive data representative of the dynamic volumetric model of real-world scene 304 from the scene modeling system, which may generate that data (e.g., based on the data directly captured by camera 302 that is representative of real-world scene 304). System 100 may also continuously update the dynamic volumetric model of real-world scene 304 based on additional data received from the scene modeling system (e.g., as camera 302 continues to capture data representative of real-world scene 304).

As with configuration 200, once the implementation of system 100 in configuration 300 is developing (i.e., receiving, updating, maintaining, etc.) the dynamic volumetric model of the original immersive scene (i.e., real-world scene 304), system 100 may identify and replace objects within the dynamic volumetric model to dynamically generate a custom immersive scene, and may present the custom immersive scene to user 206 by way of artificial reality device 202. In certain examples, artificial reality device 202, real-world scene 204, network 208, and provider server 210 may perform functions in configuration 300 similar to those described above for configuration 200, and/or may perform additional or alternative functions to implement the virtual reality functionality of configuration 300. For example, artificial reality device 202 may be a different type of artificial reality device (e.g., one better suited to providing virtual reality experiences than augmented reality experiences). Likewise, provider server 210 may perform additional functionality to implement the virtual reality experience provided in configuration 300 that may not be called for in the augmented reality experience provided in configuration 200. For instance, provider server 210 may operate in conjunction with the scene modeling system described above (i.e., including camera 302 capturing data representative of real-world scene 304) to generate and distribute the dynamic volumetric model of real-world scene 304.

The scene modeling system including camera 302 may capture data representative of real-world scene 304 and generate the dynamic volumetric model of real-world scene 304 in any suitable way. For example, the scene modeling system may include one or more cameras (i.e., including camera 302) and/or other equipment disposed within or around real-world scene 304 and configured to capture data representative of objects within real-world scene 304 (e.g., depth data, texture data, and/or any other data used for 3D modeling). In certain examples, the scene modeling system may capture the data and/or generate the dynamic volumetric model of real-world scene 304 in any of the ways described in co-pending U.S. patent application Ser. No. 15/141,707, filed Apr. 28, 2016, and entitled METHODS AND SYSTEMS FOR CREATING AND PROVIDING VOLUMETRIC REPRESENTATIONS OF REAL-WORLD EVENTS, and/or co-pending U.S. patent application Ser. No. 15/141,717, filed Apr. 28, 2016, and entitled METHODS AND SYSTEMS FOR CREATING AND MANIPULATING AN INDIVIDUALLY-MANIPULABLE VOLUMETRIC MODEL OF AN OBJECT, both of which are hereby incorporated by reference in their entirety.

FIG. 4 illustrates an exemplary artificial reality experience 400 in which user 206 is presented with an exemplary field of view 402 of an exemplary immersive scene 404. Experience 400 may be, for example, an augmented reality experience (e.g., provided by an implementation of system 100 such as that shown in configuration 200 of FIG. 2), a virtual reality experience (e.g., provided by an implementation of system 100 such as that shown in configuration 300 of FIG. 3), or any other suitable type of experience in which an immersive scene (e.g., an original or a custom immersive scene based on a real-world or artificially-generated scene) is presented to user 206 by way of an artificial reality device.

User 206 may experience immersive scene 404 by providing user input to dynamically change field of view 402 to display any content within immersive scene 404 that user 206 wishes to view. For example, user 206 may indicate a desire to look at a portion of immersive scene 404 not currently within field of view 402. For an artificial reality device such as a personal computer or a mobile computing device, this user input may include mouse movements, input from a keyboard navigation key, a swipe gesture, or the like. For an artificial reality device incorporating particular sensors (e.g., motion sensors, directional sensors, orientation sensors, etc.), such as an immersive headset device, the user input may include a change in the orientation of a display screen of the artificial reality device with respect to at least one of two orthogonal axes. For example, the artificial reality device may be configured to detect a change in orientation of the display screen with respect to an x-axis, a y-axis, and/or a z-axis. As such, the artificial reality device by which user 206 experiences artificial reality experience 400 may detect the change in orientation of the display screen as user 206 experiences immersive scene 404, and the content displayed within field of view 402 may then change dynamically. Dynamically changing the content may include gradually replacing one portion of immersive scene 404 within field of view 402 with another portion of immersive scene 404 as user 206 looks around.
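To make this input handling concrete, the following minimal sketch shows how a navigation-key press and a headset orientation change might both be reduced to the same field-of-view update. The class, the degrees-per-keypress constant, and the clamping choices are assumptions made for illustration, not details taken from the patent.

```python
from dataclasses import dataclass

KEY_STEP_DEG = 5.0   # assumed amount the view pans per navigation-key press

@dataclass
class FieldOfView:
    yaw_deg: float = 0.0     # rotation about the vertical (y) axis
    pitch_deg: float = 0.0   # rotation about the horizontal (x) axis

    def pan(self, d_yaw: float, d_pitch: float) -> None:
        self.yaw_deg = (self.yaw_deg + d_yaw) % 360.0
        self.pitch_deg = max(-90.0, min(90.0, self.pitch_deg + d_pitch))

def on_navigation_key(fov: FieldOfView, key: str) -> None:
    # Keyboard or swipe input on a personal computer or mobile device.
    deltas = {"left": (-KEY_STEP_DEG, 0.0), "right": (KEY_STEP_DEG, 0.0),
              "up": (0.0, KEY_STEP_DEG), "down": (0.0, -KEY_STEP_DEG)}
    fov.pan(*deltas[key])

def on_orientation_change(fov: FieldOfView, d_yaw: float, d_pitch: float) -> None:
    # Headset motion/orientation sensors report how the display screen moved;
    # the view pans by the same amount so the scene appears to hold still.
    fov.pan(d_yaw, d_pitch)

fov = FieldOfView()
on_navigation_key(fov, "left")          # user presses the left arrow key
on_orientation_change(fov, -30.0, 0.0)  # user turns his or her head left
print(fov)                              # FieldOfView(yaw_deg=325.0, pitch_deg=0.0)
```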

To illustrate, FIG. 4 shows that immersive scene 404 may be a real-world scene including a conference room in an office setting. User 206 may provide user input to the artificial reality device by which user 206 is experiencing immersive scene 404 (e.g., artificial reality device 202) to indicate that user 206 wishes to look to the left of what is currently displayed within field of view 402. For example, user 206 may press a left navigation key on a keyboard, perform a swipe gesture to the right, or, while wearing an immersive headset device, turn his or her head to the left to change the orientation of the display screen with respect to a y-axis. In response, the objects within immersive scene 404 (e.g., the conference table, the chairs, the windows, etc.) may scroll to the right across field of view 402 to give user 206 the sensation that he or she is looking to the left within immersive scene 404. As the objects scroll off the right side of field of view 402, new objects (e.g., objects not explicitly shown in FIG. 4) may scroll onto the left side of field of view 402. In this way, user 206 may provide input to cause field of view 402 to display any portion of immersive scene 404 that user 206 desires.

In certain examples, field of view 402 may be displayed entirely on an opaque display screen of an artificial reality device, such as a typical display screen of a tablet, a mobile computing device, or a laptop. For instance, if system 100 is presenting a virtual reality experience to user 206, a field of view of an immersive scene associated with a real-world scene distinct from the real-world scene surrounding user 206, or with an artificially-generated scene, may fill the entirety of the opaque display screen of the artificial reality device. An augmented reality experience may likewise be presented to user 206 using an opaque display screen, for example by way of a pass-through camera positioned behind the opaque screen of the artificial reality device.

In other examples, field of view 402 may be displayed on a display screen that is (or may be made) partially transparent, such as a screen of certain artificial reality devices specifically designed for augmented-reality-type applications. For example, if system 100 is presenting an augmented reality experience to user 206, some of the objects within the immersive scene (e.g., objects of the original immersive scene that are not replaced in the custom immersive scene) may be visible to user 206 directly, by way of light passing through the partially transparent screen, while other objects (e.g., virtual objects, custom objects added to the custom immersive scene, etc.) may be made visible at the same time by being projected onto the partially transparent screen, projected onto the retina of user 206 (e.g., using a laser waveguide), and/or by any other method that may serve a particular implementation.

Different types of artificial reality devices may provide different experiences for user 206 by presenting field of view 402 of immersive scene 404 in different ways and/or by receiving user input from user 206 in different ways. To illustrate, FIG. 5 shows various exemplary artificial reality devices 500 that may be used to allow user 206 to experience immersive scene 404. Artificial reality device 202 may be implemented by any of artificial reality devices 500, or by any other suitable artificial reality device operating on similar principles.

One example of an artificial reality device is an immersive headset device 502, a head-mounted virtual and/or augmented reality device. Immersive headset device 502 may include a separate display screen 504 for each eye of user 206. In some examples, a single display screen 504 may be shared by both eyes; for instance, a single, partially transparent display screen (e.g., an augmented reality visor) may be used. In other examples, as shown, separate display screens 504 (whether opaque or partially transparent) within immersive headset device 502 may display slightly different versions of field of view 402 (e.g., stereoscopic versions that may have been captured by one or more stereoscopic cameras) to give user 206 the illusion that immersive scene 404 is three-dimensional. Whether partially transparent or opaque, display screens 504 may also be configured to fill the peripheral vision of user 206, providing user 206 with a heightened sense of realism.

Immersive headset device 502 may also include motion sensors (e.g., accelerometers), directional sensors (e.g., magnetometers), orientation sensors (e.g., gyroscopes), and/or any other suitable sensors to detect natural movements of user 206 while experiencing immersive scene 404. For example, user 206 may provide input indicative of a desire to move field of view 402 in a particular direction and by a particular amount within immersive scene 404 simply by turning his or her head in that direction and by that amount. As such, immersive headset device 502 may provide user 206 with a natural, hands-free experience that does not require any console control and that may provide the most immersive artificial reality experience.

Another example of an artificial reality device is a personal computer device 506 (e.g., a desktop computer, a laptop computer, etc.) having a display screen 508 (e.g., a monitor) on which user 206 may view immersive scene 404. Although display screen 508 may not provide the same degree of immersiveness as immersive headset device 502 (e.g., it may not present distinct stereoscopic views to each eye or fill the user's peripheral vision), personal computer device 506 may have other advantages, such as its ubiquity among casual users of artificial reality who may not be inclined or able to purchase an immersive headset device. For example, personal computer device 506 may allow a user to experience immersive scene 404 (e.g., virtual reality content) within a standard web browser. User 206 may provide input to personal computer device 506 by way of a keyboard 510 (e.g., using navigation keys on keyboard 510 to move field of view 402) and/or by way of a mouse 512 (e.g., by moving mouse 512 to move field of view 402). In certain examples, a combination of keyboard 510 and mouse 512 may be used, for example by moving field of view 402 with the navigation keys of keyboard 510 while clicking on or otherwise interacting with objects within immersive scene 404 using mouse 512.

Another example of an artificial reality device is a mobile computing device 514 (e.g., a smartphone, a tablet computer, a mobile reading device, etc.) having a display screen 516 on which user 206 may view immersive scene 404. Mobile computing device 514 may combine advantages of both immersive headset devices and personal computer devices to provide a highly versatile device for experiencing immersive scene 404. Like personal computer devices, mobile devices are extremely ubiquitous and may reach many more people than immersive headset devices. However, because many mobile devices are equipped with motion sensors, directional sensors, and the like, they may also be configured to provide user 206 with an immersive experience comparable to that of an immersive headset device. For example, mobile computing device 514 may be configured to divide display screen 516 into two versions (e.g., stereoscopic versions) of field of view 402 and to fill the peripheral vision of user 206 when mobile computing device 514 is mounted to the head of user 206 using a relatively inexpensive and commercially-available mounting apparatus (e.g., a cardboard apparatus). In other embodiments, mobile computing device 514 may facilitate experiencing immersive scene 404 by receiving user input at arm's length (i.e., not mounted to the head of user 206 but acting as a dynamic window for looking around immersive scene 404 from a handheld display), or by other techniques that may serve a particular embodiment.
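As a small illustration of the stereoscopic split described above, the sketch below divides a display into side-by-side per-eye viewports and offsets each eye's viewpoint by an assumed interpupillary distance. The constant and the function names are hypothetical, not taken from the patent.

```python
# Hypothetical split of a mobile display into left/right stereoscopic views.
IPD_M = 0.063   # assumed interpupillary distance in meters

def stereo_viewports(screen_w: int, screen_h: int) -> dict:
    """Divide the display screen into side-by-side viewports, one per eye."""
    half = screen_w // 2
    return {"left": (0, 0, half, screen_h), "right": (half, 0, half, screen_h)}

def eye_positions(head_pos: tuple) -> dict:
    """Offset the head position by half the interpupillary distance per eye,
    so each viewport renders a slightly different version of field of view 402."""
    x, y, z = head_pos
    return {"left": (x - IPD_M / 2, y, z), "right": (x + IPD_M / 2, y, z)}

print(stereo_viewports(1920, 1080))   # {'left': (0, 0, 960, 1080), 'right': (960, 0, 960, 1080)}
print(eye_positions((0.0, 1.6, 0.0))) # {'left': (-0.0315, 1.6, 0.0), 'right': (0.0315, 1.6, 0.0)}
```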

While examples of artificial reality devices have been described, these examples are illustrative only and are not limiting. Any suitable device or combination of devices configured to present a custom immersive scene in accordance with the principles described herein may serve as an artificial reality device. For example, an artificial reality device may include a tethered configuration (e.g., a tethered headset) or an untethered configuration (e.g., a display screen untethered from a processing device). As another example, an immersive headset device or other artificial reality device may be used in conjunction with one or more controllers, such as a ring controller and/or a handheld controller.

An extended example will now be provided to illustrate how system 100 may dynamically customize a scene for presentation to a user, including by receiving data representative of a particular original immersive scene, developing a dynamic volumetric model of the original immersive scene based on the data, and identifying and replacing objects within the dynamic volumetric model. This example will be described with reference to FIGS. 6-9B.

FIG. 6 illustrates various objects included within an exemplary original immersive scene 600 that may be dynamically customized by system 100. Certain objects labeled and described in relation to FIG. 6 will also be referred to in the descriptions of FIGS. 7A-9B. It is noted, however, that objects labeled in FIG. 6 may not be explicitly labeled in FIGS. 7A-9B in order to reduce clutter and clarify the descriptions of those figures. Accordingly, all objects of original immersive scene 600 referred to in the descriptions of FIGS. 6-9B are illustrated and labeled in FIG. 6.

Specifically, FIG. 6 shows that original immersive scene 600 may be a real-world scene associated with an office setting and, as illustrated in the cutaway top view of FIG. 6, may include a conference room 602 and a hallway 604. Because of doors that may be closed and the like, only objects included within conference room 602 and hallway 604 are described and represented (e.g., within the dynamic volumetric model, etc.) in FIGS. 6-9B. However, it will be understood that similar principles may be used to extend the dynamic volumetric model, and to continue dynamically customizing the scene presented to the user, as the user moves from conference room 602 and hallway 604 into new areas of the immersive scene (e.g., into other rooms of the office setting, out into the world beyond the office, etc.).

The objects of original immersive scene 600 may each fall into one of three categories, depending on whether and to what extent they are "known" to system 100 (e.g., identifiable with particularity by system 100 using object recognition techniques or the like).

In a first category, original immersive scene 600 may include known objects 606 for which system 100 has access to predefined data. For example, system 100 may have access (e.g., within storage facility 106) to data such as 3D models, pictures, physical description data, and the like, allowing system 100 to identify known objects 606 with particularity, as will be described below. As shown, known objects 606 may include a particular conference table 606-1 (e.g., a conference table of a particular brand and model for which system 100 has access to data), particular office chairs 606-2 (i.e., office chairs 606-2-1 through 606-2-8, which may all be of the same brand and model for which system 100 has access to data), beverages 606-3 (e.g., one or more cans of a soda for which system 100 has access to data), and a water fountain 606-4 (e.g., a water fountain of a particular brand and model for which system 100 has access to data).

In a second category, original immersive scene 600 may include unknown objects 608 of unknown types. System 100 may not have access to predefined data about unknown objects 608 and may not be able to identify unknown objects 608 even based on a known object type (in contrast to the third category described below). Because system 100 may not be able to identify unknown objects 608, system 100 may not replace unknown objects 608 with custom objects within the dynamic volumetric model. In other examples, system 100 may have access to data about unknown objects 608, or may be able to identify them based on a known object type, but may nonetheless decline to replace unknown objects 608 with custom objects, either because unknown objects 608 are included in a predefined list of non-augmentable objects or because the particular customization scheme being applied does not include a custom object suitable for replacing them. As shown in FIG. 6, unknown objects 608 within original immersive scene 600 may include a supply counter 608-1 (e.g., a counter on which beverages 606-3 are made available and which may hold office supplies for meeting attendees in conference room 602), a mail cart 608-2 (e.g., used by aides throughout the office to transport items), and a plant 608-3.

In a third category, original immersive scene 600 may include unknown objects of known types, referred to herein as recognized objects 610. In contrast to known objects 606, system 100 may not have access to predefined data allowing system 100 to identify recognized objects 610 with particularity (e.g., to determine the brand and model of the object). However, system 100 may be able to identify recognized objects 610 as belonging to known object types having characteristics for which system 100 has access to data. For example, system 100 may identify recognized objects 610 based on basic characteristics such as location, orientation, size, color, texture, and/or other suitable characteristics. Because system 100 can identify recognized objects 610 at least by type, system 100 may replace recognized objects 610 within the dynamic volumetric model even though they may not be known or identified with particularity. As shown in FIG. 6, recognized objects 610 within original immersive scene 600 may include doors 610-1 (i.e., door 610-1-1 between conference room 602 and hallway 604, as well as other doors 610-1-2, etc., along hallway 604), flooring 610-2 (i.e., flooring 610-2-1 of conference room 602 and flooring 610-2-2 of hallway 604), various walls 610-3 of both conference room 602 and hallway 604, and windows 610-4 (i.e., windows 610-4-1 and 610-4-2 of conference room 602 and window 610-4-3 of hallway 604).
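Summarizing the three categories, a hypothetical implementation might classify each detected object roughly as follows; the library contents and field names are invented for the example and are not specified by the patent.

```python
# Hypothetical object library associated with a customization scheme.
KNOWN_MODELS = {"acme-conference-table-x1", "acme-office-chair-m2"}
KNOWN_TYPES = {"flooring", "wall", "door", "window"}

def categorize(detected_object: dict) -> str:
    """Return 'known', 'recognized', or 'unknown' for a detected object.

    detected_object is assumed to carry whatever identification the
    recognition step produced, e.g. {"model": "...", "type": "..."}.
    """
    if detected_object.get("model") in KNOWN_MODELS:
        return "known"        # identifiable with particularity (objects 606)
    if detected_object.get("type") in KNOWN_TYPES:
        return "recognized"   # unknown object of a known type (objects 610)
    return "unknown"          # not replaceable (objects 608)

print(categorize({"model": "acme-conference-table-x1"}))   # known
print(categorize({"type": "flooring"}))                    # recognized
print(categorize({"type": "mail-cart"}))                   # unknown
```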

FIGS. 7A and 7B illustrate how system 100 may dynamically customize original immersive scene 600 for presentation to a user. Specifically, FIG. 7A shows exemplary elements of a dynamic volumetric model 700 of original immersive scene 600, and FIG. 7B shows exemplary elements of a custom immersive scene 702 dynamically generated based on dynamic volumetric model 700.

As shown, a user 704 may be located within conference room 602 while the door to the conference room (i.e., door 610-1-1) is closed. User 704 may wish to customize the scene using an artificial reality device (not explicitly shown). For example, user 704 may wish to customize the scene surrounding user 704 as part of an augmented reality experience based on a particular customization scheme, such as a medieval customization scheme. Accordingly, user 704 may select the medieval customization scheme (e.g., using an interface provided by the artificial reality device used by user 704) to initiate a scene customization session in which the medieval customization scheme is to be applied to original immersive scene 600.

An implementation of system 100 (e.g., implemented by the artificial reality device alone or together with another device such as a provider server, as described above in relation to FIG. 2) may detect the selection by user 704 of the medieval customization scheme. In response, system 100 may initiate a scene customization session in which the medieval customization scheme is applied to original immersive scene 600 to dynamically generate custom immersive scene 702, which is presented to user 704 by way of the artificial reality device during the scene customization session. To this end, system 100 may receive data representative of original immersive scene 600, may develop dynamic volumetric model 700 of original immersive scene 600 based on the received data, and may identify within dynamic volumetric model 700 one or more of the objects included in original immersive scene 600 (e.g., known objects 606, recognized objects 610, etc.).

System 100 may receive the data representative of original immersive scene 600 and may develop dynamic volumetric model 700 based on that data in any way that may serve a particular implementation. For example, as will be described in more detail below, system 100 may use sensors of the artificial reality device used by user 704 (e.g., depth sensors, cameras, etc.) to scan or otherwise capture data about the objects included in original immersive scene 600. System 100 may receive data such as three-dimensional positional data and texture data for the various objects, use the positional data to map a 3D depth map (similar to a wireframe model) of the objects, and overlay the texture data onto the 3D depth map to generate 3D volumetric models of the objects. As shown in FIG. 7A, system 100 may thereby begin to develop dynamic volumetric model 700 of original immersive scene 600, which may include conference table 606-1, office chairs 606-2, supply counter 608-1, flooring 610-2-1, and other objects within conference room 602.
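The patent describes this modeling pipeline only in general terms (positional/depth data plus texture data combined into textured 3D models). The toy sketch below, with all data structures assumed, shows that combination step in isolation:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class CapturedObjectData:
    object_id: str
    depth_points: List[Point3D]   # positional data from depth sensors
    texture: bytes                # image data captured by cameras

@dataclass
class VolumetricModelEntry:
    object_id: str
    wireframe: List[Point3D]      # 3D depth map ("wireframe")
    texture: bytes                # texture overlaid on the wireframe

def integrate(captures: List[CapturedObjectData]) -> Dict[str, VolumetricModelEntry]:
    """Turn raw per-object captures into entries of the volumetric model."""
    model = {}
    for cap in captures:
        model[cap.object_id] = VolumetricModelEntry(
            object_id=cap.object_id,
            wireframe=list(cap.depth_points),
            texture=cap.texture,
        )
    return model

model_700 = integrate([
    CapturedObjectData("conference-table-606-1",
                       depth_points=[(0, 0, 0), (2, 0, 0), (2, 1, 0), (0, 1, 0)],
                       texture=b"wood-grain"),
])
print(list(model_700))  # ['conference-table-606-1']
```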

Because door 610-1-1 is closed, system 100 may not yet be able to detect data representative of hallway 604 and thus may not yet integrate the objects of hallway 604 into dynamic volumetric model 700. System 100 may begin modeling hallway 604 once door 610-1-1 is opened (e.g., so that user 704 can see out into hallway 604), or system 100 may have already modeled hallway 604, or a portion of it, based on previous modeling of hallway 604. Additional detail about how dynamic volumetric model 700 may be developed is provided below in relation to FIGS. 8A-9B.

System 100 may identify one or more of the objects included in original immersive scene 600 (e.g., one or more of known objects 606, recognized objects 610, etc.) in any way that may serve a particular implementation. For example, based on data received from the artificial reality device associated with user 704 and/or incorporated into dynamic volumetric model 700, system 100 may identify one or more features of a particular object included in original immersive scene 600. For instance, system 100 may identify one or more predefined markers or unique features, such as the style, corners, size, or proportions of conference table 606-1. System 100 may then match the one or more features of the particular object (e.g., conference table 606-1) with one or more corresponding features of a known object (e.g., the particular brand and model of conference table 606-1) documented in an object library associated with the selected customization scheme. Based on the matching of the one or more features, system 100 may determine that the particular object is an instance of the known object documented in the object library. For example, system 100 may determine that conference table 606-1 is of a particular brand and model known to system 100.

Similarly, based on the data representative of original immersive scene 600, system 100 may identify one or more features of another object, such as flooring 610-2-1. For example, system 100 may identify the carpeted texture of flooring 610-2-1, its large area (i.e., covering the entire room), its position relative to other objects (i.e., running horizontally beneath the other objects), and/or other features. System 100 may then determine, based on an object library associated with the selected customization scheme, that the identified features are characteristic of a particular object type, such as a flooring object type. Based on the determination that flooring 610-2-1 has one or more characteristics typical of the flooring object type, system 100 may determine that flooring 610-2-1 is of the flooring object type and may thereby recognize flooring 610-2-1 as the floor of conference room 602.
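A minimal sketch of this feature-matching logic follows, under the assumption that detected objects and library entries can each be reduced to simple feature sets; the feature names and the two-feature overlap threshold are invented for illustration and are not part of the patent.

```python
# Hypothetical object library: known objects keyed by brand/model, plus
# generic object types keyed by characteristic features.
OBJECT_LIBRARY = {
    "acme-conference-table-x1": {"rectangular-top", "eight-seats", "maple-finish"},
}
TYPE_LIBRARY = {
    "flooring": {"carpet-texture", "covers-room", "horizontal-under-objects"},
}

def identify(features: set, min_overlap: int = 2):
    """Return ('known', model) if the features match a library object,
    ('recognized', type) if they match a known object type, else ('unknown', None)."""
    for model, model_features in OBJECT_LIBRARY.items():
        if len(features & model_features) >= min_overlap:
            return "known", model
    for obj_type, type_features in TYPE_LIBRARY.items():
        if len(features & type_features) >= min_overlap:
            return "recognized", obj_type
    return "unknown", None

print(identify({"rectangular-top", "eight-seats"}))          # ('known', 'acme-conference-table-x1')
print(identify({"carpet-texture", "covers-room", "beige"}))  # ('recognized', 'flooring')
```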

In some examples, system 100 may perform object recognition operations to automatically identify known objects (e.g., conference table 606-1) and/or recognized objects (e.g., flooring 610-2-1). In other examples, however, human assistance (e.g., from user 704, from a technician responsible for modeling the office space, etc.) may be employed to aid in identifying objects within a particular dynamic volumetric model such as dynamic volumetric model 700.

Custom immersive scene 702 (represented by the elements shown in FIG. 7B) may be dynamically generated based on dynamic volumetric model 700 in any way that may serve a particular implementation. For example, system 100 may replace one or more of the objects identified within dynamic volumetric model 700 with one or more custom objects in accordance with the selected customization scheme (e.g., the medieval customization scheme). System 100 may replace the original objects with the custom objects in any suitable manner, including in any of the ways described in co-pending U.S. patent application Ser. No. 15/141,717, the contents of which are incorporated herein by reference, and/or in any other way. In certain examples, custom objects may replace original objects as follows.

For example, replacing a known object may include replacing an instance of the known object with a custom object that is predetermined, within the selected customization scheme, to correspond to that known object. Consider conference table 606-1, which, as described above, may be a known object that system 100 has identified as an instance of a particular brand and model of conference table to which a custom object has been assigned within the medieval customization scheme. For instance, that brand and model of conference table may be matched within the customization scheme with a rugged, long table that appears to be made of rough wooden slats and rusted iron cross-strapping. Accordingly, as part of dynamically generating custom immersive scene 702, system 100 may replace conference table 606-1 with the rugged custom table defined by the medieval customization scheme. In FIG. 7B, cross-hatched shading illustrates various objects (e.g., conference table 606-1, office chairs 606-2, beverages 606-3, etc.) that may be replaced with custom objects corresponding to those specific objects of original immersive scene 600 in accordance with the medieval customization scheme. For example, along with conference table 606-1 being replaced by the rugged table, office chairs 606-2 may be replaced by sturdy chairs of iron and wood, and beverages 606-3 may be replaced by rustic goblets containing liquids of different colors.

As another example, replacing a recognized object (e.g., one of recognized objects 610) may include replacing the object with a custom object that corresponds to its known object type. Consider flooring 610-2-1, which system 100 may have identified as being of the flooring object type, a type to which a custom object has been assigned within the medieval customization scheme. For instance, flooring objects may be matched within the customization scheme with large, stone-like tiles. Accordingly, as part of dynamically generating custom immersive scene 702, system 100 may replace flooring 610-2-1 with the stone flooring. In FIG. 7B, vertically-lined shading illustrates various objects (e.g., flooring 610-2-1, door 610-1-1, windows 610-4-1 and 610-4-2, walls 610-3, etc.) that may be replaced within custom immersive scene 702 with custom objects corresponding to their respective object types in accordance with the medieval customization scheme. For example, along with flooring 610-2-1 being replaced by stone flooring, walls 610-3 (not explicitly shown) may be replaced with stone walls, door 610-1-1 may be replaced with a rugged door made of wood slats and iron cross-strapping, and windows 610-4-1 and 610-4-2 may be replaced with custom windows befitting the medieval theme.

As mentioned above, certain unknown objects (e.g., unknown objects 608) within original immersive scene 600 may not be replaced during the dynamic generation of custom immersive scene 702. For example, system 100 may not recognize certain objects, such as supply counter 608-1, within original immersive scene 600, and those objects may therefore remain unchanged.

In some examples, system 100 may identify (e.g., within dynamic volumetric model 700 of original immersive scene 600) one or more "non-augmentable" objects included within original immersive scene 600. The one or more non-augmentable objects may correspond to predefined entries in a list of non-augmentable objects associated with the selected customization scheme. For example, based on preferences of user 704, safety regulations, privacy concerns, or other considerations, it may be desirable for certain objects not to be customized, replaced, augmented, or otherwise modified. For instance, traffic signals and/or other such indicators may not fit a medieval customization scheme, but it may nonetheless be important for safety reasons that they remain visible as they are. Accordingly, based on the identification of the one or more non-augmentable objects, system 100 may decline to replace the non-augmentable objects with custom objects.
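Pulling the replacement rules of the preceding paragraphs together, a hypothetical customization scheme might be applied along the following lines; the scheme contents, the non-augmentable list, and the tuple format are all assumptions made for the example:

```python
# Hypothetical "medieval" customization scheme: known models and recognized
# types are each mapped to a custom replacement object.
MEDIEVAL_SCHEME = {
    "by_model": {"acme-conference-table-x1": "rugged-wood-and-iron-table"},
    "by_type": {"flooring": "stone-tile-flooring", "door": "wood-slat-iron-door"},
}
NON_AUGMENTABLE = {"fire-exit-sign"}   # predefined list; never replaced

def customize(identified_objects, scheme, non_augmentable=NON_AUGMENTABLE):
    """identified_objects: list of (object_id, category, identity) tuples,
    where category is 'known', 'recognized', or 'unknown'."""
    custom_scene = {}
    for object_id, category, identity in identified_objects:
        if object_id in non_augmentable:
            custom_scene[object_id] = object_id          # left untouched
        elif category == "known" and identity in scheme["by_model"]:
            custom_scene[object_id] = scheme["by_model"][identity]
        elif category == "recognized" and identity in scheme["by_type"]:
            custom_scene[object_id] = scheme["by_type"][identity]
        else:
            custom_scene[object_id] = object_id          # unknown or unmatched: unchanged
    return custom_scene

scene = customize(
    [("table-606-1", "known", "acme-conference-table-x1"),
     ("floor-610-2-1", "recognized", "flooring"),
     ("supply-counter-608-1", "unknown", None)],
    MEDIEVAL_SCHEME,
)
print(scene["table-606-1"])           # rugged-wood-and-iron-table
print(scene["supply-counter-608-1"])  # supply-counter-608-1 (unchanged)
```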

Additionally, system 100 may identify one or more known or recognized objects within original immersive scene 600 that do not correspond to any custom object within the selected customization scheme. Such known or recognized objects may be treated like unknown objects and may not be replaced within custom immersive scene 702.

As shown in FIG. 7B, unknown objects and other objects that are not replaced within custom immersive scene 702 (e.g., supply counter 608-1) are illustrated without shading to indicate that they remain unchanged. In some examples, system 100 may perform a full scene replacement in which every object, or at least every known or recognized object, is replaced within custom immersive scene 702, helping to ensure a fully immersive experience. In other examples, system 100 may perform only a partial replacement of the immersive scene, which may allow for a more natural and authentic artificial reality experience.

In certain examples, an original immersive scene may be modeled by a system other than system 100. For example, in the case of a virtual reality experience, the original immersive scene may be modeled by a scene modeling system and the model transmitted to system 100, or the original immersive scene may be artificially generated rather than based on a real-world scene. In certain augmented reality examples, the original immersive scene may be pre-scanned so that a volumetric model is generated before a scene customization session begins.

In other examples, dynamic volumetric model 700 may be developed incrementally, in real time, as sensors (e.g., sensors of the artificial reality device used by user 704) capture new regions of original immersive scene 600 that have not previously been captured and/or integrated into dynamic volumetric model 700. In certain examples, the data representative of the real-world scene surrounding the user (e.g., original immersive scene 600 surrounding user 704) may be captured in real time exclusively by one or more sensors of the artificial reality device by which the user is experiencing the custom immersive scene.

System 100 may incrementally develop dynamic volumetric model 700 in any suitable way. For example, system 100 may use a simultaneous localization and mapping algorithm to block out portions of original immersive scene 600 as user 704 comes across them. Even where volumetric data (e.g., 3D depth data, etc.) is captured exclusively by sensors of the artificial reality device, and even though those sensors may not be capable of capturing all aspects of particular objects at once, the artificial reality device may still facilitate depth capture of the objects being modeled. For example, the artificial reality device may include stereoscopic cameras that allow depth capture of the objects being modeled. Alternatively, the artificial reality device may be configured to use a wobble of a single camera, or the fact that one or more cameras or other capture devices are repositioned at different locations and angles relative to the objects being modeled (e.g., as user 704 moves around within original immersive scene 600), to facilitate depth capture and thereby to develop the dynamic volumetric model.
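The patent does not detail the mapping algorithm beyond naming simultaneous localization and mapping. The following toy sketch (all structures assumed) shows only the incremental aspect, in which regions not previously integrated are added to the model as they are perceived:

```python
class IncrementalModel:
    """Grows as the device's sensors perceive regions of the scene
    not previously integrated into the model."""

    def __init__(self):
        self.integrated_objects = {}

    def integrate_region(self, visible_objects: dict) -> list:
        """visible_objects: {object_id: captured_data} for what the sensors
        currently perceive. Returns the ids newly added to the model."""
        new_ids = [oid for oid in visible_objects if oid not in self.integrated_objects]
        for oid in new_ids:
            self.integrated_objects[oid] = visible_objects[oid]
        return new_ids

model = IncrementalModel()
# User 704 is in conference room 602 with the door closed.
model.integrate_region({"table-606-1": "...", "floor-610-2-1": "..."})
# Door 610-1-1 opens; a region of hallway 604 becomes viewable.
print(model.integrate_region({"floor-610-2-2": "...", "fountain-606-4": "...",
                              "table-606-1": "..."}))
# ['floor-610-2-2', 'fountain-606-4']  (the table was already modeled)
```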

To illustrate, FIG. 8A shows additional elements of dynamic volumetric model 700 that may be incrementally added as user 704 encounters (e.g., perceives, detects, etc.) new regions of original immersive scene 600. As shown in FIG. 8A, when door 610-1-1 opens to reveal part of hallway 604, a viewable region 802, representing what user 704 and/or his or her artificial reality device may be able to detect or perceive through the open door, comes into view. As shown, dynamic volumetric model 700 may consequently come to include a portion of flooring 610-2-2 (i.e., the flooring of hallway 604) and water fountain 606-4.

As custom immersive scene 702 is dynamically generated, it may be updated in accordance with the incremental development of dynamic volumetric model 700. To illustrate, FIG. 8B shows additional elements of custom immersive scene 702 that are dynamically generated based on the additional elements of dynamic volumetric model 700 shown in FIG. 8A. For example, door 610-1-2 may be replaced with a door similar to the one replacing door 610-1-1, or with a different type of door. For instance, door 610-1-2 may lead to a men's restroom, which system 100 may determine based on a symbol on the door; door 610-1-2 may therefore be replaced with a special door corresponding to men's restrooms in the medieval customization scheme (e.g., including a themed symbol indicating the type of room). Similarly, water fountain 606-4 may be a known object that system 100 replaces within custom immersive scene 702 with another custom object more in keeping with the medieval customization scheme.

FIG. 9A further illustrates the incremental development of dynamic volumetric model 700. Specifically, FIG. 9A shows additional elements that may be added to dynamic volumetric model 700 as user 704 leaves conference room 602 and walks down hallway 604. As shown in FIG. 9A, once user 704 has seen all of conference room 602 and hallway 604, dynamic volumetric model 700 may include all of the objects of original immersive scene 600 shown in FIG. 6. For example, as shown, dynamic volumetric model 700 may come to include all of flooring 610-2, mail cart 608-2, plant 608-3, window 610-4-3, the remaining doors 610-1, and the other walls 610-3.

As with FIG. 8B, custom immersive scene 702 may continue to be dynamically generated and updated in accordance with the incremental development of dynamic volumetric model 700. To illustrate, FIG. 9B shows additional elements of custom immersive scene 702 that are dynamically generated based on the additional elements of dynamic volumetric model 700 shown in FIG. 9A. Because mail cart 608-2 and plant 608-3 are unknown objects, system 100 may not replace them (i.e., as indicated by the absence of shading in FIG. 9B).

In some examples, system 100 may determine that an original immersive scene is a real-world scene documented within a scene library associated with the selected customization scheme. In such examples, system 100 may dynamically generate the custom immersive scene based on the selected customization scheme by customizing the original immersive scene in accordance with at least one preconfigured customization included within the scene library for that real-world scene.

For example, user 704 may go every day to the office associated with original immersive scene 600. System 100 may determine (e.g., when user 704 causes his or her artificial reality device or another device to initiate a new scene customization session) that original immersive scene 600 is documented within a scene library corresponding to the customization scheme selected by user 704. Various preconfigured customizations may be described, detailed, or otherwise documented within the scene library to allow original immersive scene 600 to be customized in specific ways. For example, user 704 may have created preconfigured customizations corresponding to specific elements of original immersive scene 600 by documenting them within the scene library. For instance, although most of the floors and walls of the office space of original immersive scene 600 may be replaced with stone, user 704 may create a preconfigured customization for the walls and floor of his or her own office so that they appear to be gold-plated. As another example, although most office chairs may be replaced with sturdy chairs made of iron and wood, user 704 may create a preconfigured customization so that a specific chair, such as the chair in his or her office, appears to be a jewel-encrusted throne.
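As a sketch of how such a scene library might be consulted (the identifiers and data layout are invented for illustration), preconfigured per-object customizations could simply override whatever the base customization scheme would otherwise use:

```python
# Hypothetical scene library: scenes the user has customized before,
# keyed by an identifier the device can resolve (e.g., from location sensors),
# each carrying per-object overrides on top of the base customization scheme.
SCENE_LIBRARY = {
    "office-600": {
        "floor-610-2-office": "gold-plated-flooring",
        "chair-606-2-3": "jewel-encrusted-throne",
    },
}

def resolve_custom_object(scene_id: str, object_id: str, default_replacement: str) -> str:
    """Prefer a preconfigured customization recorded for this scene,
    falling back to whatever the base scheme would use."""
    overrides = SCENE_LIBRARY.get(scene_id, {})
    return overrides.get(object_id, default_replacement)

print(resolve_custom_object("office-600", "chair-606-2-3", "iron-and-wood-chair"))
# jewel-encrusted-throne
print(resolve_custom_object("office-600", "chair-606-2-7", "iron-and-wood-chair"))
# iron-and-wood-chair
```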

Preconfigured customizations may also be used to implement various other examples. For instance, a user may use a preconfigured customization to place a particular piece of art at a specific location in his or her home, to modify the design of a building, or to modify any other object the user encounters in daily life.

System 100 may identify an object to which a preconfigured customization has been assigned in any suitable way. For example, system 100 may use location sensors (e.g., GPS sensors, pedometers, accelerometers, etc.), compasses, and/or other suitable sensors to identify the real-world scene, or the portion of the real-world scene, that the user is viewing, and thereby determine whether preconfigured customizations are applicable to objects within that real-world scene.

In some examples, a user may share a customization scheme with another user so that both may experience the same custom immersive scene together. To illustrate, FIG. 10 shows an exemplary scenario 1000 in which a plurality of users experience original immersive scene 600 together. As shown in FIG. 10, user 704 may be located in conference room 602 and may be engaged in a scene customization session in which the medieval customization scheme is being applied to original immersive scene 600. While the scene customization session is ongoing, a user 1002 (e.g., another person using his or her own artificial reality device) may enter original immersive scene 600 by walking into conference room 602, as shown. The implementation of system 100 that includes the artificial reality device used by user 704 may determine that user 1002 has entered original immersive scene 600 and, in response, may provide data representative of the custom immersive scene being presented to user 704 (e.g., custom immersive scene 702).

The data representative of the custom immersive scene may be configured to enable another dynamic generation and presentation of the custom immersive scene to user 1002 by way of the artificial reality device of user 1002. For example, if user 704 has selected the medieval customization scheme for an augmented reality experience, the artificial reality device of user 704 may detect the presence of user 1002 and transmit data representative of the medieval customization scheme to the device of user 1002 so that user 1002 may experience custom immersive scene 702 in the same way as user 704. User 1002 may, for instance, accept an invitation to enter custom immersive scene 702, allowing both users to see custom immersive scene 702 and one another in a way that makes them appear to be part of the same medieval world.

Alternatively, user 1002 may decline to participate in custom immersive scene 702 with user 704 and may instead initiate a scene customization session in which a different customization scheme (e.g., a futuristic customization scheme) is applied to original immersive scene 600. In this case, user 1002 may appear to user 704 to be within the medieval world while, at the same time, user 704 appears to user 1002 to be within a futuristic world, even though users 704 and 1002 are actually both located in conference room 602 of original immersive scene 600.

System 100 may provide the data representative of the custom immersive scene being presented to user 704 in any way that may serve a particular implementation. For example, system 100 may send user 1002 an invitation to use the same customization scheme that user 704 is using. In some examples, system 100 may automatically populate a "friend list" of users located in the vicinity of user 704 so that user 704 may select which users should receive invitations. As another example, multiple users may "enter" a custom immersive scene associated with a particular customization scheme by signing into the scene customization session, in a manner analogous to conference call participants calling into a virtual conference room. As yet another example, user 704 may share data about a first custom immersive scene that he or she has experienced (e.g., data representative of a dynamic volumetric model and/or of associated original or custom objects that may be replaced within the dynamic volumetric model during a scene customization session) with a second user, allowing a second custom immersive scene to be generated based on the first and experienced by the second user.
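A minimal sketch of this sharing flow follows, assuming a simple invitation/acceptance exchange; the class, message format, and identifiers are hypothetical and not specified by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class SceneCustomizationSession:
    host_user: str
    scheme: str                          # e.g. "medieval"
    participants: set = field(default_factory=set)

    def on_user_entered_scene(self, other_user: str) -> dict:
        """Invite a user detected inside the original immersive scene."""
        return {"to": other_user, "from": self.host_user, "scheme": self.scheme}

    def on_invitation_accepted(self, other_user: str) -> str:
        # Share the scheme so the other device can generate the same custom
        # immersive scene; a declining user could instead start a session of
        # his or her own with a different scheme (e.g., futuristic).
        self.participants.add(other_user)
        return self.scheme

session = SceneCustomizationSession(host_user="user-704", scheme="medieval")
invite = session.on_user_entered_scene("user-1002")
print(session.on_invitation_accepted(invite["to"]))   # medieval
```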

FIG. 11 illustrates an exemplary method 1100 for dynamically customizing a scene for presentation to a user. While FIG. 11 illustrates exemplary operations according to one embodiment, other embodiments may omit, add to, reorder, and/or modify any of the operations shown in FIG. 11. One or more of the operations shown in FIG. 11 may be performed by system 100 and/or any implementation thereof.

In operation 1102, a scene customization system may detect a selection by a user of a customization scheme to be applied to an original immersive scene during a scene customization session experienced by the user by way of an artificial reality device. Operation 1102 may be performed in any of the ways described herein.

In operation 1104, the scene customization system may dynamically generate a custom immersive scene based on the application of the selected customization scheme (i.e., the customization scheme whose selection was detected in operation 1102) to the original immersive scene. Operation 1104 may be performed in any of the ways described herein. For example, as shown in FIG. 11, operation 1104 may be performed by the scene customization system performing operations 1106 through 1112, described below. In some examples, operation 1104 may be performed continuously by the scene customization system continuously performing operations 1106 through 1112.

In operation 1106, the scene customization system may receive data representative of the original immersive scene. Operation 1106 may be performed in any of the ways described herein.

In operation 1108, the scene customization system may develop a dynamic volumetric model of the original immersive scene based on the received data. Operation 1108 may be performed in any of the ways described herein.

In operation 1110, the scene customization system may identify one or more objects included within the original immersive scene. For example, the scene customization system may identify the one or more objects within the dynamic volumetric model of the original immersive scene. Operation 1110 may be performed in any of the ways described herein.

In operation 1112, the scene customization system may replace the one or more objects identified in operation 1110 with one or more custom objects. For example, the scene customization system may replace the one or more objects within the dynamic volumetric model in accordance with the applied customization scheme. Operation 1112 may be performed in any of the ways described herein.

In operation 1114, the scene customization system may present the custom immersive scene to the user by way of the artificial reality device during the scene customization session. Operation 1114 may be performed in any of the ways described herein.
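Read as a pipeline, operations 1102 through 1114 might be organized as in the sketch below; the callables are placeholders standing in for the behaviors described above and are not part of the patent.

```python
def dynamically_customize_scene(ops: dict) -> None:
    """Operations 1102-1114 of method 1100, expressed over a dict of
    caller-supplied callables; each callable stands in for behavior
    described in the text above."""
    scheme = ops["detect_selection"]()                       # operation 1102
    # Operation 1104 (dynamic generation) comprises operations 1106-1112 and,
    # in some examples, would be repeated continuously during the session.
    data = ops["receive_scene_data"]()                       # operation 1106
    model = ops["develop_model"](data)                       # operation 1108
    objects = ops["identify_objects"](model)                 # operation 1110
    custom = ops["replace_objects"](model, objects, scheme)  # operation 1112
    ops["present_scene"](custom)                             # operation 1114

# Minimal stand-in callables just to show the flow end to end:
dynamically_customize_scene({
    "detect_selection": lambda: "medieval",
    "receive_scene_data": lambda: {"table-606-1": "captured-data"},
    "develop_model": lambda data: dict(data),
    "identify_objects": lambda model: list(model),
    "replace_objects": lambda model, objs, scheme: {o: f"{scheme}:{o}" for o in objs},
    "present_scene": print,   # prints {'table-606-1': 'medieval:table-606-1'}
})
```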

FIG. 12 illustrates another exemplary method 1200 for dynamically customizing a scene for presentation to a user. While FIG. 12 illustrates exemplary operations according to one embodiment, other embodiments may omit, add to, reorder, and/or modify any of the operations shown in FIG. 12. One or more of the operations shown in FIG. 12 may be performed by system 100 and/or any implementation thereof.

In operation 1202, a scene customization system may detect a selection by a user of a customization scheme from a library of customization schemes provided by an augmented reality provider. The customization scheme may be one that is to be applied to a real-world scene surrounding the user during a scene customization session experienced by the user by way of an immersive headset device. Operation 1202 may be performed in any of the ways described herein.


How to Search for Patents

A patent search is the first step toward getting your patent. You can do a Google Patents search or a USPTO search. "Patent pending" describes a product covered by a pending patent application; you can search Public PAIR to find the application. After the patent office approves your application, you can do a patent number lookup to locate the issued patent, and your product is then patented. You can also use the USPTO search engine (see below for details), or get help from a patent lawyer. Patents in the United States are granted by the United States Patent and Trademark Office (USPTO), which also reviews trademark applications.

Are you interested in similar patents? These are the steps to follow:

1. Brainstorm terms to describe your invention, based on its purpose, composition, or use.

Write down a brief but precise description of the invention. Don't use generic terms such as "device," "process," or "system." Consider synonyms for the terms you chose initially, and take note of important technical terms and keywords.

Use the questions below to help you identify keywords or concepts.

  • What is the purpose of the invention? Is it a utilitarian device or an ornamental design?
  • Is the invention a way to create something or to perform a function, or is it a product?
  • What is the invention made of? What is its physical composition?
  • What is the invention used for?
  • What technical terms and keywords describe the invention's nature? A technical dictionary can help you locate the right terms.

2. Use these terms to find relevant Cooperative Patent Classifications (CPCs) with the Classification Text Search tool. If you are unable to find the right classification for your invention, scan through the classification's class schemas (class schedules) and try again. If you don't get any results from the Classification Text Search, consider substituting synonyms for the words you used to describe your invention.

3. Check the CPC Classification Definition to confirm the relevance of the CPC classification you found. If the selected classification title has a blue box with a "D" to its left, the hyperlink will take you to the CPC classification definition. CPC classification definitions will help you determine the scope of the classification so that you can choose the most relevant one. These definitions may also include search tips or other suggestions that could be helpful for further research.

4. Retrieve patent documents with the CPC classification from the Patents Full-Text and Image Database. By focusing on the abstracts and representative drawings, you can narrow your search to the most relevant patent publications.

5. Review this selection of patent publications in depth for any similarities to your invention, paying close attention to the specification and claims. Additional patents may be found through references cited by the applicant and the patent examiner.

6. Retrieve published patent applications that match the CPC classification you chose in Step 3. Use the same search strategy as in Step 4, reviewing the abstracts and representative drawings on each page to narrow your results to the most relevant applications. Then examine the published patent applications carefully, paying special attention to the claims and the other drawings.

7. Search for additional US patent publications by keyword searching in the AppFT and PatFT databases, by classification searching of non-U.S. patents as described below, and by using web search engines to find non-patent literature disclosures of inventions. Here are some examples:

  • Add keywords to your search. Keyword searches may turn up documents that are poorly categorized or whose classifications you missed in Step 2; US patent examiners often supplement their classification searches with keyword searches, for example. Consider using technical engineering terminology rather than everyday words.
  • Search for foreign patents using the CPC classification by re-running the search in international patent office search engines such as Espacenet, the European Patent Office's worldwide patent publication database of over 130 million patent publications. Other national patent office databases can be searched in the same way.
  • Search non-patent literature. Inventions can be made public in many non-patent publications. It is recommended that you search journals, books, websites, technical catalogs, conference proceedings, and other print and electronic publications.

You can hire a registered patent attorney to review your search. A preliminary search will help you prepare to discuss your invention and related prior art with the attorney, and it keeps the attorney from spending too much of your time and money on patenting basics.
