Software – Huanmi Yin, Feng Lin, Advanced New Technologies Co Ltd

Abstract for “Three-dimensional graphical user interface for informational input in virtual reality environment”

Hand displacement data is received from sensing hardware and analyzed using a three-dimensional (3D) gesture recognition algorithm. The received hand displacement data is recognized as a 3D gesture. An operational position of the 3D gesture is calculated relative to a 3D input graphical user interface (GUI). A virtual input element that is associated with the 3D input GUI and corresponds to the calculated operational position is selected. Input information corresponding to the selected virtual input element is read.

Background for “Three-dimensional graphical user interface for informational input in virtual reality environment”

Virtual reality (VR) technology uses computer processing, graphics, and various kinds of user interfaces (for example, visual display goggles and interactive controllers held in one or both hands) to create an immersive, user-perceived 3D environment (a “virtual world”) with interactive capabilities. With continued advances in computing hardware and software, applications of VR technology are growing. Although VR technology can provide users with a realistic, lifelike virtual environment, conventional user interaction within that environment, including informational input (for example, alphanumeric textual information), has been difficult or awkward. Making informational input faster and more accessible is important to improving the user experience with VR technology.

The present disclosure describes a three-dimensional (3D) graphical user interface (GUI) for informational input in a virtual reality (VR) environment.

In an implementation, hand displacement data is received from sensing hardware and analyzed using a three-dimensional (3D) gesture recognition algorithm. The received hand displacement data is recognized as a 3D gesture. An operational position of the 3D gesture is calculated relative to a 3D input graphical user interface (GUI). A virtual input element that is associated with the 3D input GUI and corresponds to the calculated operational position is selected. Input information corresponding to the selected virtual input element is read.

Implementations of the described subject matter can include a computer-implemented method; a non-transitory, computer-readable medium storing computer-readable instructions to perform the computer-implemented method; and a computer-implemented system comprising one or more computer memory devices interoperably coupled with one or more computers and having tangible, non-transitory, machine-readable media storing instructions that, when executed by the one or more computers, perform the computer-implemented method or the instructions stored on the non-transitory, computer-readable medium.

The subject matter described in this specification can be implemented in particular implementations so as to realize one or more of the following advantages. First, the described 3D user interfaces make informational input in a VR environment easier and faster. The described 3D user interfaces, along with the associated hand-based gestures, also make informational input in a VR environment more intuitive. Second, improved informational input can enhance the user experience with VR technology and expand the use of VR technology into additional scenarios. Third, the overall user experience with VR technology can be improved. Other advantages will be apparent to those of ordinary skill in the art.

The details of one or more implementations of the subject matter of this specification are set forth in the Detailed Description, the Claims, and the accompanying drawings. Other features, aspects, and advantages of the subject matter will become apparent from the Detailed Description, the Claims, and the accompanying drawings.

“DESCRIPTION OF DRAWINGS”

FIG. 1 is a flowchart illustrating an example of a computer-implemented method for informational input in a virtual reality (VR) environment, according to an implementation of the present disclosure.

FIG. 2 is a schematic diagram illustrating an example of a 3D input graphical user interface (GUI) for informational input in a VR environment, according to an implementation of the present disclosure.

FIG. 3 is a schematic diagram illustrating a different example of a 3D input GUI for informational input in a VR environment, according to an implementation of the present disclosure.

FIG. 4 is a schematic diagram illustrating examples of 3D gestures for informational input in a VR environment, according to an implementation of the present disclosure.

FIG. 5 is a block diagram illustrating an example of computing modules used for input operations associated with a VR client terminal, according to an implementation of the present disclosure.

FIG. 6 is a block diagram illustrating an example of a hardware architecture of a VR client terminal hosting the computing modules of FIG. 5, according to an implementation of the present disclosure.

FIG. 7 is a block diagram illustrating an example of a computer-implemented system used to provide computational functionality associated with the described algorithms, methods, and functions, according to an implementation of the present disclosure.

Like reference numbers and designations in the various drawings indicate like components.

The following description details three-dimensional (3D) graphical user interfaces (GUIs) for informational input in a virtual reality (VR) environment, and is presented to enable any person skilled in the art to make and use the disclosed subject matter in the context of one or more particular implementations. Various modifications, alterations, and permutations of the disclosed implementations can be made and will be readily apparent to those of ordinary skill in the art, and the general principles defined can be applied to other implementations and applications without departing from the scope of this disclosure. In some instances, details that are unnecessary to an understanding of the described subject matter may be omitted so as not to obscure one or more described implementations, inasmuch as such details are within the skill of one of ordinary skill in the art. The present disclosure is not intended to be limited to the described or illustrated implementations, but is to be accorded the widest scope consistent with the described principles and features.

VR technology uses computer processing, graphics, and various kinds of user interfaces (for example, visual display goggles and interactive controllers held in one or both hands) to create an immersive, user-perceived 3D environment (a “virtual world”) with interactive capabilities. With continued advances in computing hardware and software, applications of VR technology are growing. Although VR technology can provide users with a realistic, lifelike virtual environment, conventional user interaction within that environment, including informational input (for example, alphanumeric textual data), has been difficult or awkward. Making informational input faster and more accessible is important to improving the user experience with VR technology.

The 3D user interfaces described in this disclosure can be used to speed up informational input in VR scenarios. The described 3D user interfaces, together with associated hand-based gestures, can make it easier and more intuitive to input information in VR environments for tasks such as shopping, navigating through documents, and rotating virtual objects. Improved informational input can enhance the user experience and allow VR technology to be used in more scenarios.

A VR terminal can be used to provide a user interface that allows a user to interact with VR technology. A VR terminal could include a VR headset that is worn on the head of the user. This VR headset can display data and graphics to provide a 3D immersive experience.

A VR client terminal is a program that communicates with the VR terminal and provides it with graphics and other data. The VR client terminal renders a VR scenario created by a developer. For example, the VR terminal can be a slide-in-type VR headset into which a mobile computing device (such as a smartphone) is inserted to provide output and other computing functions (for example, spatial orientation, movement detection, audio generation, user input, and visual output). In that case, the mobile computing device can act as the VR client terminal or as an interface to a separate VR client terminal (for example, a PC-type computer connected to the mobile computing device).

To aid with informational input, different types of data can be pre-configured to make them easier for a user to access. For example, a user could preset a type of virtual currency and an amount to be used in VR shopping, and then select a single “Pay with Virtual Currency” button in the VR environment to initiate payment using the virtual currency. However, conventional informational input methods can be difficult and time-consuming when there is a large amount of information to input or multiple actions to perform in a particular scenario within the VR environment.

In a first type of informational input, a user can take off the VR terminal, remove a mobile computing device (for example, a smartphone or tablet-type computer) from it, and input information using a GUI provided by the mobile computing device. Alternatively, the user can connect a different computing device (for example, a mobile computer or a PC) to input information.

In a second type of informational input, an external input device (such as a joystick or a hand-held handle) can be used to control an operational focus (for example, an icon, a pointer, or another graphical element) in the VR environment. To input information, the user moves the operational focus to the location of a virtual element and then clicks, depresses, or slides a button or another control on the external input device to trigger the virtual element.

In a third type of informational input, a timeout can be set and the user can control the position of the operational focus in the VR environment through head posture or a head gesture tracked by the VR terminal. The user moves the operational focus so that it hovers over a desired virtual element. After the operational focus has remained at the virtual element's position for at least the duration of the timeout, the virtual element is selected and triggered.

Consider an example in which a user must enter a payment password to complete a purchase in a VR shopping scenario. The first technique requires the user to take off the VR terminal to complete informational input, which interrupts the immersive VR experience and delays the payment process. The second technique requires that the VR client terminal be configured to communicate with an external device (such as a joystick), which increases hardware cost and adds complexity to the user's interaction with the VR environment. The third technique requires the user to wait at least until the timeout expires before triggering each virtual element, which negatively affects the efficiency and speed of informational input in the VR environment.

The 3D input GUI described in this disclosure allows fast and efficient informational input by users in a VR environment. When a user's preset 3D gesture is recognized in the VR environment, an operational position of the 3D gesture relative to the 3D input GUI is calculated, and the virtual input element corresponding to the calculated operational position is determined. The determined virtual input element is selected, and the input information associated with it is read. In this way, a series of 3D gestures can be used to quickly input information in the VR environment.

FIG. 1 is a flowchart illustrating an example of a computer-implemented method 100 for informational input in a VR environment, according to an implementation of the present disclosure. For clarity of presentation, the description that follows generally describes method 100 in the context of the other figures in this description. However, it will be understood that method 100 can be performed, for example, by any system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. In some implementations, various steps of method 100 can be run in parallel, in combination, in loops, or in any order.

At 102, hand displacement data is received from sensing hardware. From 102, method 100 proceeds to 104.

At 104, the received hand displacement data is analyzed using a 3D gesture recognition algorithm. From 104, method 100 proceeds to 106.

At 106, it is determined whether a 3D gesture has been recognized. A 3D gesture is a gesture that has depth information and that is recognized using visual sensors attached to the VR terminal. Depth information refers to coordinate information of the gesture relative to a Z-axis of the VR scenario. For example, a point in three-dimensional space can be described with three-dimensional coordinates (X, Y, Z); the Z-axis coordinate value of the point can be considered depth information relative to the X-axis and the Y-axis. If a 3D gesture is recognized, method 100 proceeds to 108; otherwise, method 100 returns to 102.

At 108, a 3D model of the recognized 3D gesture is generated. From 108, method 100 proceeds to 110.

At 110, the 3D gesture model is compared to one or more preset 3D gesture models. Particular 3D gestures applicable to particular 3D input GUIs can be preset. From 110, method 100 proceeds to 112.

At 112, it is determined whether the recognized 3D gesture is a preset 3D gesture, based on the comparison at 110. If the recognized 3D gesture is determined to be a preset 3D gesture, method 100 proceeds to 114. If the recognized 3D gesture is determined not to be a preset 3D gesture, method 100 returns to 102.

At 114, an operational position of the preset 3D gesture relative to a 3D input GUI is calculated. The 3D input GUI is a 3D interface output in the VR scenario that can be used to input information. Each virtual input element in the 3D input GUI can indicate a piece, or a combination, of input information. For example, the 3D input GUI could be a 3D virtual keyboard (a “3D air keyboard”), and the virtual input elements could be multiple virtual “keys,” each virtual key representing an input character. The 3D virtual keyboard can be output into the 3D VR environment so that a user wearing a VR terminal can quickly input information in a VR scenario. Wearing the VR terminal, the user performs a particular 3D gesture associated with the VR scenario in the VR environment, and a virtual input element (a key) can be activated by the 3D gesture. From 114, method 100 proceeds to 116.

At 116, a virtual input element associated with the 3D input GUI and corresponding to the calculated operational position is selected. From 116, method 100 proceeds to 118.

At 118, input information that corresponds to the selected virtual input element is read. After 118, method 100 stops.
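
As a non-limiting illustration of how steps 102-118 of method 100 fit together, the following Python sketch arranges them in a simple loop. All names (HandSample, sensing_hardware.read_samples, input_gui.element_at, and so on) are hypothetical placeholders rather than elements of the disclosure, and the sketch is only one plausible arrangement of the described steps, not an implementation of them.

```python
# Hypothetical sketch of method 100; names, data shapes, and collaborators
# are illustrative assumptions, not taken from the disclosure.
from dataclasses import dataclass

@dataclass
class HandSample:
    x: float  # horizontal coordinate
    y: float  # vertical coordinate
    z: float  # depth coordinate relative to the VR scenario's Z-axis
    t: float  # timestamp in seconds

def method_100(sensing_hardware, gesture_recognizer, preset_models, input_gui):
    while True:
        # 102: receive hand displacement data (assumed to be a list of HandSample)
        displacement = sensing_hardware.read_samples()

        # 104-106: analyze the data with a 3D gesture recognition algorithm
        gesture = gesture_recognizer.recognize(displacement)
        if gesture is None:
            continue  # no 3D gesture recognized; keep collecting data

        # 108-112: build a 3D model of the gesture and compare it to preset models
        model = gesture_recognizer.build_model(gesture)
        if not any(gesture_recognizer.matches(model, preset) for preset in preset_models):
            continue  # not a preset 3D gesture

        # 114: calculate the operational position relative to the 3D input GUI
        position = input_gui.operational_position(displacement)

        # 116: select the virtual input element at that position
        element = input_gui.element_at(position)
        if element is None:
            continue

        # 118: read the input information associated with the selected element
        return element.input_information
```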

“In some cases, the technical solution involves three specific stages: 1) Creation of a VR scenario model, 2) Recognition of a 3D gesture and 3) Informational input.”

“1) Creation of a VR Scenario model”

In some implementations, a modeling tool, such as UNITY, 3DSMAX, or PHOTOSHOP, can be used to create a VR scenario model. The modeling tool can be proprietary, commercial, open source, or a combination of these.

In some implementations, the VR scenario model can be derived from a real-world scenario. For example, a texture map of a material and a planar model of a real scenario can be acquired in advance through photography. Using the modeling tool, the texture can be processed and a 3D model of the real scenario created. In some implementations, the processed texture and the 3D model are imported into a UNITY3D platform, and image rendering is performed in multiple aspects (such as a sound effect, a GUI, a plug-in, and lighting in the UNITY3D platform). Interaction software code is then added to the VR scenario model. To allow rapid input of information in the VR scenario, a 3D input GUI is also created. Other input GUIs that can be used with a VR scenario, including input GUIs of different dimensions, such as two-dimensional (2D) or four-dimensional (4D) input GUIs, are also considered to be within the scope of this disclosure.

FIG. 2 is a schematic diagram illustrating an example of a 3D input GUI for informational input in a VR environment, according to an implementation of the present disclosure. A 3D input GUI 202 (for example, a “3D air keypad”) is illustrated that includes ten 3D virtual input elements (“keys”). For example, virtual input element 204 corresponds to the digit “3”. Within a field of view of a user 206, the 3D input GUI 202 is displayed as part of a VR scenario displayed in VR environment 215. The ten 3D virtual input elements are tiled in two rows.

In the VR scenario, the user can select a particular virtual input element (for example, virtual input element 204) by using a 3D gesture, allowing the user to enter digital information, such as a password, in a particular VR scenario. As shown in FIG. 2, the user performs a 3D gesture (a one-finger click/tap) at position 214 in the VR environment. Position 214 corresponds to the location of virtual input element 204 (the “3” key) in VR environment 210. Input of the digit is completed by selecting virtual input element 204.

FIG. 3 illustrates a different 3D input GUI for informational input in a VR environment, according to an implementation of the present disclosure. A 3D input GUI 302 (for example, a “3D digital magic cube keyboard”) is illustrated that includes twenty-seven 3D virtual input elements. For example, virtual input element 304 corresponds to the digit “3”. Within a field of view of a user 306, the 3D input GUI 302 is displayed as part of a VR scenario displayed in VR environment 310. In this VR scenario, the twenty-seven 3D virtual input elements are displayed in a cube. In some implementations, digits or other data (such as images or symbols) can be displayed in particular patterns or randomly.

In the VR scenario, the user 308 can select a particular virtual input element (for example, virtual input element 304) by using a 3D gesture (such as a one-finger click/tap) and complete the input of digital information, such as a password. Other 3D gestures can be used to access additional virtual input elements. As shown in FIG. 3, the user performs a 3D gesture 312 (a two-finger turn) at position 314 in the VR environment. Position 314 corresponds to the location of virtual input element 304 (the “3” key) in VR environment 310. When 3D gesture 312 is performed, the cube of virtual input elements rotates in a clockwise direction, allowing other virtual input elements to be selected.

Anyone with ordinary skill in the art will readily understand that the 3D input GUIs shown in FIGS. 2 and 3 are only examples of possible 3D input GUIs and do not limit the disclosure in any way. Other 3D input GUIs consistent with this disclosure are also considered to be within the scope of this disclosure.
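
For concreteness, the layouts of FIGS. 2 and 3 can be pictured as collections of virtual input elements, each carrying a 3D position and a piece of input information. The Python sketch below models, under assumed coordinates and spacing, a two-row air keypad like that of FIG. 2 and a 3x3x3 cube keyboard like that of FIG. 3, together with a clockwise rotation of the cube such as a two-finger-turn gesture might trigger. The class name, element labels, and geometry are illustrative assumptions, not taken from the disclosure.

```python
# Hypothetical representation of the 3D input GUIs of FIGS. 2 and 3.
# Coordinates, spacing, labels, and rotation behavior are illustrative assumptions.
import math
from dataclasses import dataclass

@dataclass
class VirtualInputElement:
    label: str       # the input information, e.g. the digit "3"
    position: tuple  # (x, y, z) position in the VR scenario

def air_keypad():
    """Ten digit keys tiled in two rows of five, as in FIG. 2."""
    keys = []
    for i, digit in enumerate("1234567890"):
        row, col = divmod(i, 5)
        keys.append(VirtualInputElement(digit, (col * 0.1, -row * 0.1, 1.0)))
    return keys

def cube_keyboard():
    """Twenty-seven elements arranged as a 3x3x3 cube, as in FIG. 3."""
    elements = []
    for n in range(27):
        x, rest = n % 3, n // 3
        y, z = rest % 3, rest // 3
        elements.append(VirtualInputElement(str(n % 10), (x * 0.1, y * 0.1, 1.0 + z * 0.1)))
    return elements

def rotate_cube_clockwise(elements, angle=math.pi / 2):
    """Rotate the cube about its vertical axis, exposing elements hidden at the back."""
    cx = sum(e.position[0] for e in elements) / len(elements)
    cz = sum(e.position[2] for e in elements) / len(elements)
    for e in elements:
        x, y, z = e.position
        dx, dz = x - cx, z - cz
        e.position = (cx + dx * math.cos(angle) + dz * math.sin(angle),
                      y,
                      cz - dx * math.sin(angle) + dz * math.cos(angle))
    return elements
```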

After completing the VR scenario modeling and the modeling of an associated 3D input GUI (as previously described), the VR client terminal can output the VR scenario model to a VR terminal worn by a user and connected to the VR client terminal.

In some implementations, the VR client terminal initially outputs only the VR scenario model to the user in the VR environment. When the user needs to input information in the VR scenario, the VR client terminal can output the 3D input GUI into the VR environment in response to the user triggering a preset virtual trigger element (such as a virtual button) with a preset 3D gesture.

For example, a virtual trigger element can be used in the VR scenario to trigger display of the 3D input GUI. A user might take part in a VR shopping experience in which many commodities are available to choose from, and select a “buy/pay” option to initiate payment for a selected commodity. By triggering the virtual trigger element with a preset 3D gesture in the VR environment, the user causes the VR client terminal to output the 3D input GUI (for example, a virtual keyboard presented as a “3D checkout counter”) for use in the VR scenario. The user can then trigger input by performing further 3D gestures with respect to the 3D input GUI.

“2) Recognition of 3D Gestures”

In some implementations, the user wearing the VR terminal can interact with the VR scenario without any auxiliary hardware devices such as haptic gloves or joysticks; instead, the user forms 3D gestures with a hand held up in midair. The 3D gestures formed in the VR environment (similar to an augmented reality environment) are tracked, analyzed, and displayed. Other implementations allow the user to use auxiliary hardware devices to control an operational focus, make selections, or perform other actions consistent with this disclosure.

A user can select a virtual element (for example, a button, an operable control, or a page) in a VR scenario by performing a 3D gesture. The disclosure is not limited to the 3D gestures illustrated; any gesture (3D or otherwise) consistent with this disclosure can be used to select a virtual element. A 3D gesture can be simple (such as a click/tap gesture) or more complex, and can be static (such as a held hand position) or dynamic (such as gripping, dragging, or rotating). 3D gestures can also be customized to meet particular requirements or to differentiate between VR content providers.

FIG. 4 is a schematic diagram illustrating examples of 3D gestures for informational input in a VR environment, according to an implementation of the present disclosure. FIG. 4 illustrates several example 3D gestures.

A GUI element 404 can be displayed in the VR environment to provide information to users about the 3D gestures that are applicable to a particular VR scenario.

In some implementations, the VR client terminal can recognize a user's 3D gesture in a VR scenario using sensing hardware coupled to the VR terminal in conjunction with a preset gesture recognition algorithm. The VR client terminal can track the user's hand in real time using the sensing hardware, which collects hand displacement data (for example, position coordinates and a movement track) and transmits the data to the VR client terminal for analysis and processing.

In some implementations, the sensing hardware can include one or more sensors that collect visual data (such as image, infrared, or ultraviolet data), non-visual data, radar data, or ultrasonic data. For example, a dual-camera binocular imaging solution, a multi-camera solution, a time-of-flight (TOF) solution, a structured light solution, or a micro-radar solution can be used to recognize a 3D gesture.

For 3D gesture recognition using the dual-camera binocular imaging solution, the sensing hardware can include one or more image sensors. The VR client terminal can track the hand displacement of the user using the dual-camera image sensors and collect hand displacement data for analysis and processing. Using the 3D gesture recognition algorithm, the VR client terminal analyzes the hand displacement data and calculates a rotation quantity and an offset of the hand relative to the VR scenario. The VR client terminal then uses the calculated rotation quantity and offset to create a final 3D gesture model.
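
In a dual-camera (binocular) arrangement, depth for a tracked hand point is commonly recovered by triangulating the disparity between the two camera views. The sketch below shows that standard computation under simplifying assumptions (rectified cameras, known focal length and baseline); it illustrates the kind of calculation such a solution performs and is not presented as the algorithm used by the described VR client terminal.

```python
# Standard binocular triangulation under simplifying assumptions:
# rectified cameras, focal length in pixels, baseline in meters.
def depth_from_disparity(x_left, x_right, focal_length_px, baseline_m):
    """Return depth (Z) in meters for a point seen by both cameras."""
    disparity = x_left - x_right          # pixel disparity between the two views
    if disparity <= 0:
        return None                       # point not resolvable from this pair
    return focal_length_px * baseline_m / disparity

def triangulate_point(x_left, y_left, x_right, focal_length_px, baseline_m, cx, cy):
    """Recover an (X, Y, Z) hand point from a matched pair of image coordinates."""
    z = depth_from_disparity(x_left, x_right, focal_length_px, baseline_m)
    if z is None:
        return None
    x = (x_left - cx) * z / focal_length_px   # back-project through the left camera
    y = (y_left - cy) * z / focal_length_px
    return (x, y, z)
```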

For 3D gesture recognition using the TOF solution, an infrared sensor can be added to the sensing hardware. For 3D gesture recognition using the structured light solution, the sensing hardware can include a laser sensor. For 3D gesture recognition using the micro-radar solution, the sensing hardware can include a radar sensor. Typical implementations of 3D gesture recognition based on the TOF solution, the structured light solution, and the micro-radar solution are identical in principle to the dual-camera binocular imaging solution: in each of these solutions, depth information is calculated (for example, as a rotation quantity and an offset based on hand displacement data) to allow 3D gesture modeling.

After receiving the hand displacement data, the VR client terminal can analyze the data using a preset 3D gesture recognition algorithm to determine whether a 3D gesture is recognized.

In some implementations, recognizing the 3D gesture includes calculating, based on the hand displacement data, a rotation quantity and an offset of the user's hand relative to the X/Y/Z axes of the VR scenario. The rotation quantity refers to the deviation angles by which a preselected feature of the hand rotates with respect to the X/Y/Z axes of the VR scenario when the user's hand performs a particular 3D gesture; it serves as a rotation direction. The offset refers to the horizontal distances from the preselected feature point on the user's hand to the X/Y/Z axes of the VR scenario when the user performs the 3D gesture.

The VR client terminal can perform 3D modeling of the gesture based on the calculated rotation quantity and offset. Recognition of the 3D gesture is complete once a 3D model of the gesture (a 3D gesture model) has been created.

Combining the rotation quantity with the offset allows accurate determination of the depth information associated with the 3D gesture performed by the user's hand, and the determined depth information can be used in the 3D modeling of the gesture.
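
One way to read the rotation quantity and offset described above is as the deviation angles of a tracked feature point's motion with respect to the X/Y/Z axes and the feature point's distances from those axes. The sketch below computes both for a hypothetical feature point observed at two successive positions; the exact definitions are assumptions chosen for illustration rather than the disclosure's formulas.

```python
import math

def rotation_quantity(p_start, p_end):
    """Deviation angles (radians) of the feature point's motion w.r.t. the X/Y/Z axes."""
    dx, dy, dz = (e - s for s, e in zip(p_start, p_end))
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    if length == 0:
        return (0.0, 0.0, 0.0)
    # angle between the displacement vector and each coordinate axis
    return tuple(math.acos(max(-1.0, min(1.0, d / length))) for d in (dx, dy, dz))

def offset(point):
    """Distances of the feature point from the X, Y, and Z axes of the VR scenario."""
    x, y, z = point
    return (math.hypot(y, z),   # distance from the X-axis
            math.hypot(x, z),   # distance from the Y-axis
            math.hypot(x, y))   # distance from the Z-axis

# Example: a feature point moving mostly along the Z-axis (toward the scene)
angles = rotation_quantity((0.0, 0.0, 0.5), (0.02, 0.0, 0.7))
distances = offset((0.02, 0.0, 0.7))
```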

The VR client terminal can define preset 3D gestures for selecting particular elements within a VR scenario. After recognizing a 3D gesture, the VR client terminal can determine whether it is a preset 3D gesture by comparing the generated 3D gesture model to the 3D gesture models associated with the preset 3D gestures (for example, the gestures illustrated in FIG. 4).

When wearing the VR terminal, the user's vision of the real world is usually blocked, so the user cannot see the 3D gesture made with their hand. This can affect the positioning accuracy of the user's 3D gestures. To mitigate these issues, the VR client terminal can output an operational focus into the VR environment relative to the VR scenario. The operational focus follows the spatial displacement of the user's hand as the 3D gesture is performed: using the hand displacement data collected by the sensing hardware that tracks the user's hand in real time, the VR client terminal calculates in real time an operational position of the user's hand for display in the VR environment. This allows the user's real-world 3D gestures to be displayed and moved synchronously in the VR environment.

In some implementations, the operational focus can be an icon, a pointer, or another graphical element, for example, a representation of a human hand. The operational focus can also be animated to simulate the 3D gesture. For example, the VR client terminal can generate a 3D model of the user's gesture through 3D gesture modeling based on the depth information of the user's hand, and then render a 3D gesture animation from the associated hand parameters (such as position coordinates and displacement changes). The rendered 3D gesture animation can be used as the operational focus so that it corresponds to the synchronous spatial displacement of the user's hand.
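
A minimal sketch of the synchronization described above, assuming the hand is tracked as a stream of (x, y, z) samples and that a simple scale-and-translate mapping places the operational focus in the VR scenario (the mapping and its parameters are hypothetical):

```python
def update_operational_focus(hand_samples, scene_origin=(0.0, 0.0, 1.0), scale=1.0):
    """Map the most recent tracked hand position into the VR scenario so the
    operational focus follows the user's real hand in real time."""
    if not hand_samples:
        return None
    latest = hand_samples[-1]  # most recent (x, y, z) sample from the sensing hardware
    return tuple(o + scale * c for o, c in zip(scene_origin, latest))

# Example: the focus appears 1 m into the scene, offset by the tracked hand position.
focus_position = update_operational_focus([(0.05, -0.02, 0.4)])
```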

In this way, the user can observe in the VR environment the 3D gestures made by their hands. In some implementations, the VR client terminal can also visually prompt the user (for example, with animations) to modify or adjust a 3D gesture so that it conforms to one of the preset 3D gestures. This helps the user correct the formation of a 3D gesture and reduces the chance of an inadvertent or incorrect input.

In some implementations, the VR client terminal can store preset 3D gesture models to be matched against the 3D gesture model generated from the user's gesture. After recognizing the 3D gesture, the VR client terminal can select a number of feature points from both the generated 3D gesture model and a preset 3D gesture model, and perform matching operations on the selected feature points to determine whether the user's 3D gesture matches the preset 3D gesture. The VR client terminal can use a preset threshold to determine whether there is a match between the generated 3D gesture model and the preset 3D gesture model: if a calculated similarity meets or exceeds the preset threshold value, the two models are considered a match. In some implementations, the VR client terminal executes a preset similarity algorithm to make this determination. The preset threshold value can be set to different values depending on the VR scenario; for example, actions that require fine-grained selections might use a high threshold value, while general rotation operations could use a lower threshold value.
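
The matching step can be pictured as comparing corresponding feature points of the generated model and a stored preset model and converting the residual distance into a similarity score that is tested against the preset threshold. The following sketch shows one such check; the scoring function and the example threshold values are assumptions made for illustration, not the disclosure's similarity algorithm.

```python
import math

def similarity(generated_points, preset_points):
    """Similarity in [0, 1] between corresponding feature points of two 3D gesture models."""
    if len(generated_points) != len(preset_points) or not generated_points:
        return 0.0
    total = 0.0
    for generated, preset in zip(generated_points, preset_points):
        total += math.dist(generated, preset)   # Euclidean distance between feature points
    mean_error = total / len(generated_points)
    return 1.0 / (1.0 + mean_error)             # larger distances -> lower similarity

def matches_preset(generated_points, preset_points, threshold):
    """True when the generated gesture model is close enough to the preset model."""
    return similarity(generated_points, preset_points) >= threshold

# Illustrative thresholds: fine-grained selection is stricter than a coarse rotation.
FINE_SELECTION_THRESHOLD = 0.9
GENERAL_ROTATION_THRESHOLD = 0.7
```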

“3) Informational input”

After determining that the recognized 3D gesture is a preset 3D gesture, the VR client terminal can calculate the operational position of the gesture in the VR environment and select the virtual element that corresponds to that operational position. For example, the VR client terminal can calculate the operational position of the user's hand in the VR environment based on the hand displacement data collected by the sensing hardware that tracks the user's hand in real time. After calculating the operational position, the VR client terminal can search the VR environment for the corresponding virtual element and select it.

If the operational focus is already visible in the VR environment, the VR client terminal can locate the operational focus to determine the operational position of the user's hand within the VR environment. The VR client terminal can then search for the virtual element indicated by the operational focus and select it.

In some implementations, to minimize the risk of operational failures caused by a 3D gesture that the user holds for a prolonged time, the VR client terminal can establish a preset duration threshold that indicates whether the gesture is valid. After recognizing the 3D gesture, the VR client terminal calculates its overall duration, that is, the time between the moment the user initiates the gesture and the moment it is completed, and determines whether that duration is less than the preset duration threshold. The 3D gesture is considered valid if its duration is less than the preset duration threshold and invalid otherwise. The VR client terminal can then act only on 3D gestures that have been determined to be valid.

The preset duration threshold value can be adjusted to suit particular VR scenarios or VR-related operations. For example, the preset duration threshold can be set to two seconds so that users must form 3D gestures quickly when selecting virtual elements in a VR scenario.
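
The validity test reduces to comparing the elapsed time between the start and the completion of the recognized gesture with the preset duration threshold, as in this minimal sketch (the two-second default mirrors the example above and is otherwise arbitrary):

```python
def gesture_is_valid(start_time_s, end_time_s, preset_duration_threshold_s=2.0):
    """A recognized 3D gesture counts as valid only if it was formed quickly enough."""
    duration = end_time_s - start_time_s
    return duration < preset_duration_threshold_s

# Example: a gesture formed in 1.4 seconds is valid; one formed in 3.0 seconds is not.
assert gesture_is_valid(10.0, 11.4)
assert not gesture_is_valid(10.0, 13.0)
```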

It can be assumed that a preset virtual trigger element is available in advance in the VR scenario that allows the user to trigger the VR client terminal to output the preset 3D input GUI. When the user needs to input information in the VR scenario, the user can move the operational focus to the location of the virtual trigger element and keep the operational focus hovering above it to trigger it. The user can control the movement and position of the operational focus with an external device (such as a joystick or a handle), with a gravity sensor pre-installed on the VR terminal, or with a corresponding gravity-sensing device worn on the user's hand.

The user can then perform a preset 3D gesture with a hand placed at the position indicated by the operational focus. When the VR client terminal recognizes that the 3D gesture made by the user is a preset 3D gesture and the operational focus remains above the virtual trigger element, the virtual element indicated by the operational focus is taken to be the virtual trigger element to actuate. The VR client terminal selects the virtual trigger element and triggers the VR scenario to output the 3D input GUI.

Once the VR terminal displays the 3D input GUI, the user can input information using the virtual input elements associated with the 3D input GUI. For example, the user can move the operational focus to hover over a particular virtual input element and then use a 3D hand gesture to select that virtual input element. In some implementations, the preset 3D gesture used to select the virtual trigger element can be different from the preset 3D gesture used with the 3D input GUI.

After confirming that the 3D gesture performed by the user is a preset 3D gesture, the VR client terminal calculates the operational position of the 3D gesture relative to the VR scenario. The VR client terminal can then search the 3D input GUI for the virtual input element that corresponds to the operational position and select that element. In some implementations, the VR client terminal highlights the selected virtual input element within the 3D input GUI, for example, by flashing it or displaying it in a different color.

After selecting the virtual input element in the 3D input GUI, the VR client terminal can read the input information associated with that element and use it for informational input. To input additional information, the user can continue selecting virtual input elements (for example, a sequence of alphanumeric characters that forms a password). Once a payment password has been entered and verified by a payment server, the purchase of a particular commodity can be completed.
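
Selecting the virtual input element that corresponds to the calculated operational position amounts to a proximity test against the elements of the 3D input GUI, followed by highlighting the hit element and reading its input information. The sketch below shows one such lookup; the element container, the hit radius, and the highlight call are hypothetical stand-ins for whatever the 3D input GUI actually provides.

```python
import math

def select_element(operational_position, elements, hit_radius=0.05):
    """Return the virtual input element nearest the operational position, if close enough."""
    best, best_distance = None, hit_radius
    for element in elements:
        distance = math.dist(operational_position, element.position)
        if distance < best_distance:
            best, best_distance = element, distance
    return best

def read_input(operational_position, input_gui):
    element = select_element(operational_position, input_gui.elements)
    if element is None:
        return None
    input_gui.highlight(element)   # e.g. flash the key or change its color
    return element.label           # the input information carried by the element
```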

FIG. 5 is a block diagram illustrating an example of computing modules 500 used for input operations associated with a VR client terminal, according to an implementation of the present disclosure. The VR client terminal outputs a VR scenario to a VR terminal, and a 3D input GUI is part of the VR scenario. In some implementations, the VR scenario includes a preset virtual trigger element that triggers output of the 3D input GUI. The 3D input GUI can be a 3D virtual keypad or a 3D virtual keyboard, and the virtual input elements can be virtual keys associated with the 3D keyboard. An operational focus corresponding to the 3D gesture is displayed in the VR scenario, and its displacement is synchronized with the 3D gesture. In some implementations, the operational focus is displayed as an animation that simulates the 3D gesture.

A recognition module 501 can recognize a 3D gesture of a user in the VR scenario. The recognition module 501 can be configured to: 1) track the displacement of the user's hand using preset sensing hardware; 2) collect hand displacement data of the user; 3) calculate a rotation quantity and an offset of the user's hand with respect to the X/Y/Z axes of the VR scenario based on the hand displacement data; and 4) perform 3D modeling based on the rotation quantity and the offset to obtain the corresponding 3D gesture model. When the 3D gesture of the user is recognized, a judgment module 502 can determine whether the gesture is a preset 3D gesture. If the recognized 3D gesture is a preset 3D gesture, a calculation module 503 can calculate the operational position of the 3D gesture corresponding to the 3D input GUI. The calculation module 503 can further be configured to: 1) calculate the duration of the 3D gesture before computing the operational position; 2) determine whether the duration is less than a preset threshold; and 3) perform the operation of calculating the operational position of the 3D gesture corresponding to the 3D input GUI only if the duration is less than the preset threshold. An input module 504 can select a virtual input element that is associated with the 3D input GUI and corresponds to the calculated operational position, and can read the input information corresponding to the selected virtual input element. The input module 504 can also be configured to highlight the selected virtual input element in the 3D input GUI. An output module 505 outputs the 3D input GUI at the position where the virtual trigger element is located.
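
Read as software, the modules of FIG. 5 amount to a recognition stage, a judgment stage, a calculation stage, an input stage, and an output stage arranged around the 3D input GUI. The outline below is one hypothetical arrangement of those responsibilities; the class and method names are illustrative, and the collaborating objects (sensing hardware, matcher, GUI, scenario) are assumed rather than specified by the disclosure.

```python
class RecognitionModule:          # module 501
    def __init__(self, sensing_hardware, recognizer):
        self.sensing_hardware = sensing_hardware
        self.recognizer = recognizer

    def recognize(self):
        # track the hand, collect displacement data, and build a 3D gesture model
        data = self.sensing_hardware.read_samples()
        return self.recognizer.build_model(data)

class JudgmentModule:             # module 502
    def __init__(self, preset_models, matcher):
        self.preset_models = preset_models
        self.matcher = matcher    # callable comparing two gesture models

    def is_preset(self, gesture_model):
        return any(self.matcher(gesture_model, m) for m in self.preset_models)

class CalculationModule:          # module 503
    def __init__(self, preset_duration_threshold_s=2.0):
        self.threshold = preset_duration_threshold_s

    def operational_position(self, gesture):
        if gesture.duration >= self.threshold:
            return None                       # gesture took too long; treat as invalid
        return gesture.operational_position   # position relative to the 3D input GUI

class InputModule:                # module 504
    def read(self, position, input_gui):
        element = input_gui.element_at(position)
        if element is None:
            return None
        input_gui.highlight(element)          # e.g. flash or recolor the selected key
        return element.input_information

class OutputModule:               # module 505
    def show_input_gui(self, vr_scenario, trigger_element):
        # display the 3D input GUI at the position of the virtual trigger element
        vr_scenario.display_gui_at(trigger_element.position)
```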

FIG. 6 is a block diagram illustrating an example of a hardware architecture of a VR client terminal hosting the computing modules 500 of FIG. 5, according to an implementation of the present disclosure. As shown in FIG. 6, the VR client terminal includes a central processing unit (CPU) 602, a memory 604, a nonvolatile storage device 606, a network interface 610, and an internal bus 612. The hardware illustrated in FIG. 6 may include, or be included by, the computing system illustrated in FIG. 7, and in some implementations some components shown in FIGS. 6 and 7 can be considered the same (for instance, CPU 602/Processor 705, Memory 604/Memory 707). In some implementations, the computing modules 500 used for input operations associated with a VR client terminal can be loaded into the memory 604, stored there, and executed by the CPU 602 (as a software-hardware-combined logic computing system).

After considering the specification and practicing the disclosed subject matter, those of ordinary skill in the art will readily be able to conceive of other implementations of this disclosure. The present disclosure covers any variations, uses, or adaptations of the subject matter that follow its general principles, including variations, usages, or adaptations that employ common knowledge or customary technical means in the art that are not disclosed here. The described examples are provided to aid understanding of the concepts and do not limit the scope of the disclosure.

It is to be understood that the disclosure is not limited to the described implementations or to the arrangements illustrated in the accompanying drawings. Many modifications and changes can be made without departing from the scope of this application; the scope of the application is limited only by the appended claims. Modifications, equivalent replacements, and improvements made in accordance with the spirit and principles described here are included within the protection scope of this disclosure.

FIG. 7 is a block diagram illustrating an example of a computer-implemented system 700 used to provide computational functionality associated with the described algorithms, methods, and functions, according to an implementation of the present disclosure. The illustrated system 700 includes a computer 702 and a network 730.

The illustrated computer 702 can include any computing device, such as a server, a desktop computer, a laptop/notebook computer, a wireless data port, a smart device, a personal data assistant (PDA), a tablet computing device, one or more processors within these devices, another computing device, or a combination of computing devices, including physical or virtual instances of the computing device, or a combination of physical and virtual instances. The computer 702 can include an input device, such as a keyboard, keypad, or touch screen, and an output device that conveys information associated with the operation of the computer 702, including digital data, visual information, audio information, or a combination of types of information.

The computer 702 can serve in a role in a distributed computing environment as a client, a network component, a server, a database, a persistency, or any combination of these. The illustrated computer 702 is communicably coupled with a network 730. In some implementations, one or more components of the computer 702 can be configured to operate within an environment, including a cloud-computing-based environment, a local environment, a global environment, another environment, or a combination of environments.

At a high level, the computer 702 is an electronic computing device operable to receive, transmit, process, store, or manage data and information associated with the described subject matter. In some implementations, the computer 702 can also include, or be communicably coupled with, a server, such as an application server, a web server, a caching server, a streaming data server, another server, or a combination of servers.

The computer 702 can receive requests over network 730 (for example, from client software applications executing on another computer 702) and respond to the received requests by processing them using a software application or a combination of software applications. Requests can also be sent to the computer 702 from internal users (for example, from a command console or by another internal access method), external or third parties, or other entities, individuals, systems, or computers.

Each of the components of the computer 702 can communicate using a system bus 703. In some implementations, any or all of the components of the computer 702, including hardware and software, can interface over the system bus 703 using an application programming interface (API) 712, a service layer 713, or both. The API 712 can include specifications for routines, data structures, and object classes; it can be computer-language independent or dependent and can refer to a complete interface, a single function, or even a set of APIs. The service layer 713 provides software services to the computer 702 and to other components, whether illustrated or not, that are communicably coupled to the computer 702, and all service consumers can access the functionality of the computer 702 through this service layer. Software services, such as those provided by the service layer 713, provide reusable, defined functionalities through a defined interface. For example, the interface can be written in JAVA, C++, or another computing language, or a combination of computing languages, providing data in extensible markup language (XML) format, another format, or a combination of formats. Although the API 712 and the service layer 713 are shown as integrated components of the computer 702, alternative implementations can show them as stand-alone components. Furthermore, any or all parts of the API 712 or the service layer 713 can be implemented as a child or a sub-module of another software module, enterprise application, or hardware module without departing from the scope of the present disclosure.

The computer 702 includes an interface 704. Although illustrated as a single interface 704 in FIG. 7, two or more interfaces 704 can be used according to particular needs, desires, or particular implementations of the computer 702. The interface 704 is used by the computer 702 for communicating with other computing systems (whether illustrated or not) that are communicatively linked to the network 730 in a distributed environment. Generally, the interface 704 is operable to communicate with the network 730 and can include logic encoded in software, hardware, or a combination of software and hardware. More specifically, the interface 704 can include software supporting one or more communication protocols so that the interface 704, or the interface's hardware, is operable to communicate physical signals both within and outside of the illustrated computer 702.

The computer 702 includes a processor 705. Although illustrated as a single processor 705 in FIG. 7, two or more processors can be used according to particular needs, desires, or particular implementations of the computer 702. Generally, the processor 705 executes instructions and manipulates data to perform the operations of the computer 702 and any algorithms, methods, functions, flows, or procedures described in this disclosure.

The computer 702 also includes a database 706 that can hold data for the computer 702, for another component communicatively linked to the network 730 (whether illustrated or not), or for a combination of the computer 702 and another component. For example, database 706 can be an in-memory database, a conventional database, or another type of database storing data consistent with the present disclosure. In some implementations, database 706 can be a combination of two or more different database types (for example, a hybrid in-memory and conventional database) according to particular needs, desires, or particular implementations of the computer 702 and the described functionality. Although illustrated as a single database 706 in FIG. 7, two or more databases of similar or differing types can be used according to particular needs and desires. While database 706 is illustrated as an integral component of the computer 702, in alternative implementations database 706 can be external to the computer 702.

The computer 702 also includes a memory 707 that can hold data for the computer 702, for another component or components communicatively linked to the network 730 (whether illustrated or not), or for a combination of the computer 702 and another component. Memory 707 can store any data consistent with the present disclosure. Although illustrated as a single memory 707 in FIG. 7, two or more memories 707 of similar or differing types can be used according to particular needs, desires, or particular implementations of the computer 702 and the described functionality. While memory 707 is illustrated as an integral component of the computer 702, in alternative implementations memory 707 can be external to the computer 702.

The application 708 is an algorithmic software engine providing functionality according to particular needs, desires, or particular implementations of the computer 702, particularly with respect to the functionality described in this disclosure. For example, application 708 can serve as one or more components, modules, or applications. Further, although illustrated as a single application 708, the application 708 can be implemented as multiple applications 708 on the computer 702. In addition, although illustrated as integral to the computer 702, in alternative implementations the application 708 can be external to the computer 702.

The computer 702 can also include a power supply 714. The power supply 714 can include a rechargeable or non-rechargeable battery that can be configured to be either user- or non-user-replaceable. The power supply 714 can include power-conversion and management circuits, including recharging, standby, and other power management functionality. In some implementations, the power supply 714 can include a power plug to allow the computer 702 to be plugged into a wall socket or another power source.

There can be any number of computers 702 associated with, or external to, a computer system containing computer 702, with each computer 702 communicating over network 730. Further, the terms “client,” “user,” or other appropriate terminology can be used interchangeably, as appropriate, without departing from the scope of the present disclosure. Moreover, the present disclosure contemplates that many users can use one computer 702, or that one user can use multiple computers 702.

“Described implementations can include one or more features alone or in combination.”

For example, in a first implementation, a computer-implemented method includes: receiving hand displacement data from sensing hardware; analyzing the received hand displacement data using a three-dimensional (3D) gesture recognition algorithm; recognizing the received hand displacement data as a 3D gesture; calculating an operational position of the 3D gesture relative to a 3D input graphical user interface (GUI); selecting a virtual input element associated with the 3D input GUI and corresponding to the calculated operational position; and reading input information corresponding to the selected virtual input element.

The foregoing and other described implementations can each, optionally, include one or more of the following features:

A first feature, combinable with any of the following features, wherein the sensing hardware comprises one or more image sensors, laser sensors, or radar sensors.

A second feature, combinable with any of the previous or following features, wherein the sensing hardware comprises one or more image sensors, laser sensors, or radar sensors.

Summary for “Three-dimensional graphical user Interface for Informational Input in Virtual Reality Environment”

Virtual Reality (VR) technology employs computer processing, graphics and a variety of user interfaces (for instance, visual display goggles or interactive controllers held in one hand) to create an immersive, user-perceived 3D environment (a “virtual world?”). Interactive capabilities. With the continued advancements in computing hardware, software, and VR technology, there are more applications of VR technology. VR technology is able to provide users with a realistic, lifelike virtual environment. However, traditional user interaction within the 3D environment has been difficult or awkward. This includes informational input (for instance, alphanumeric textual information). To improve the user experience with VR technology, it is important to speed up and make informational inputs more accessible.

The present disclosure describes a three-dimensional (3D), graphical user interface (GUI), for informational input in virtual reality (VR).

“In an implementation, hand displacement information is received from sensing equipment and analyzed with a three-dimensional (3D), gesture recognition algorithm. The hand displacement data received is interpreted as a 3D gesture. A 3D input graphical user Interface (GUI) is used to calculate the operational position of the 3D gesture. Select the virtual input element that is associated with the 3D interface GUI and corresponds to the calculated operational location. The selected virtual element contains the input information.

“Implementations can be made of the described subject matter using a computer, a nontransitory computer-readable medium that stores computer-readable instructions to execute the computerimplemented methods; and a computer, implemented system consisting of one or more computer memory device interoperably coupled together with one or several computers. The tangible, nontransitory media contains instructions that when executed by one or more machines, execute the computerimplemented methods/the nontransitory computer-readable media.

“The subject matter in this specification is possible to be implemented in specific implementations so that you can realize the following benefits. The first is that the described 3D user interfaces make it easier and faster to input information in a VR environment. The described 3D user interfaces, along with the provided hand-based gestures, make it easier and more intuitive to input information in a VR environment. The second is that informational input can be improved to enhance the user experience using VR technology. Informational input can be used to expand the use of VR technology in other scenarios. Third, overall user experience with VR technology can be improved. Others benefits will be obvious to those with ordinary skill in the arts.”

“The Claims and Detailed Description contain details about one or more implementations for the subject matter in this specification. The Claims and the accompanying drawings will reveal other features, aspects and benefits of the subject matter.

“DESCRIPTION of Drawings”

“FIG. “FIG.

“FIG. “FIG.

“FIG. “FIG.

“FIG. “FIG.

“FIG. “FIG.

“FIG. “FIG. 5. According to an implementation the present disclosure.

“FIG. “FIG.7” is a block diagram that illustrates an example of a computer-implemented program used to provide computational functionality associated with the described algorithms, methods and functions.

“Like references numbers and designations in various drawings indicate like components.”

The following description details three-dimensional (3D), graphical user interfaces (GUIs), for informational input within a virtual reality environment (VR). It is intended to allow anyone skilled in the art to use the disclosed subject matter in one or more specific implementations. There are many modifications, alterations and permutations to the disclosed implementations that can be made. These modifications and permutations will be easily apparent to anyone with ordinary skill in art. The general principles can also be applied to other applications and implementations without departing from this disclosure. Sometimes, details that are not necessary to understand the subject matter of the disclosed implementations can be left out to avoid obscured or unnecessary details. This is inasmuch if such details are within one’s skill level in the art. This disclosure is not limited to the illustrated or described implementations. It should be extended to encompass all aspects consistent with the described principles.

VR technology uses computer processing, graphics and various types of user interfaces (for instance, visual display goggles or interactive controllers held in one hand) to create an immersive, user-perceived, 3D environment (a?virtual universe?) Interactive capabilities. With the continued advancements in computing hardware, software, and VR technology, there are more applications of VR technology. VR technology is able to provide users with a realistic, lifelike virtual environment. However, traditional user interaction within the environment has been difficult or awkward. This includes informational input (e.g. alphanumeric textual data). To improve the user experience with VR technology, it is important to speed up and make informational inputs more accessible.

These 3D interfaces can be used to speed up information input in VR scenarios. The described 3D user interfaces with associated hand-based gestures may make it easier and more intuitive to input information in VR environments. This includes shopping, navigation through documents, and rotating virtual objects. The user experience can be improved by using informational input, which can allow VR technology to be used in more scenarios.

A VR terminal can be used to provide a user interface that allows a user to interact with VR technology. A VR terminal could include a VR headset that is worn on the head of the user. This VR headset can display data and graphics to provide a 3D immersive experience.

“The VR client terminal, which is a program that communicates with the VR terminal, provides graphics and other data. A VR client terminal produces a virtual reality scenario that is created by a developer. A VR terminal could be a sliding-type VR headset that allows for user input and other computing functions (for example, spatial orientation, movement detection and audio generation as well as user input and visual input). The mobile computing device may act as the VR client terminal in this case or as an interface to a separate VR client terminal (e.g. a PC-type laptop connected to the mobile computing devices).

To aid with information input, different types of data can be pre-configured to make it easier for users/access. A user could, for example, set a type of virtual currency and an amount to be used in VR shopping. The user can choose a single “Pay with Virtual Currency?” button to initiate payment. To initiate payment using the virtual currency, the user could click the?Pay with Virtual Currency? button in the VR environment. Conventional information input methods can be difficult and time-consuming when there is a lot of information to input or multiple actions to be performed in a specific scenario within the VR environment.

A user can take off the VR terminal by removing a mobile computer (e.g. a smartphone, tablet-type computer) from it to input information using the GUI provided by the mobile computing devices. Alternately, the user may connect a different computing device (e.g. a mobile computer or a PC) to input information.

An external input device, such as a joystick, a hand-held handle or any other device, can be used to control an operational focus in VR environments. This could be an icon, pointer or another graphical element. To input information, the user can move an operational focus to a location of a virtual object and then click, depress or slide a button or another element on the external input device to trigger the virtual element.

“In a third type of informational input, a timeout can be set and the user can control the position of the operational focus in the virtual environment using head posture or a head gesture. This is done by the VR terminal. The user can move the operational focus to the desired position for informational input by hovering over the virtual elements. After the operational focus has been maintained at the virtual element’s position for at least the duration of timeout, the virtual element can be selected and triggered.

In this example, a user must enter a payment password to complete a VR shopping experience. The first method requires the user to take out the VR terminal in order to complete information input. This interrupts the immersive VR experience and delays the payment process. The second technique requires that the VR client terminal be set up to communicate with an external device (such a joystick), which can increase hardware costs and add complexity to the user’s interaction with the virtual environment. The third technique requires that the user waits at least until the timeout expires before triggering each virtual element. This negatively affects the efficiency and speed of user input in VR environments.

This input method allows for fast and efficient input of information by users in VR environments. In a VR environment, a 3D input GUI recognizes a user’s pre-set 3D gesture. A virtual input element is created that corresponds to the calculated operational location. The virtual element that has been determined is chosen and the information associated with it is read for input. A series of 3D gestures are used to quickly input information into the VR environment.

“FIG. “FIG. The following description generally refers to method 100 within the context of other figures in this description. It will be clear that method 100 can be done by any system, environment and software. Or a combination of software and hardware as needed. Some implementations allow for multiple steps of method 100 to be executed simultaneously, in combination, in loops or in any order.

“At 102 hand displacement data from sensing hardware are received. Method 100 continues to 104 from 102.

“At 104 the hand displacement data received is analyzed using a 3D gesture recognition algorithm. Method 100 is used to analyze data starting at 104 and ending at 106.

“At 106, it is determined whether a 3D gesture has been recognized. Refer to FIG. A 3D gesture (refer to FIG. A 3D gesture is one that has depth information. It is recognized by the visual sensors attached to the VR terminal. Depth information is the coordinate information of the gesture relative to the Z-axis in the virtual reality scenario. A point in a three-dimensional space could be described as (X, Y and Z) using three-dimensional coordinates. A Z-axis coordinate value that corresponds to the point could be called depth information relative to an X and Y-axis. Method 100 is used to recognize a 3D gesture. Method 100 is used to recognize a 3D gesture.

“At 108, a 3D model of the recognized 3D gest is generated. Method 100 is initiated at 108 and proceeds to 110.

“At 110, the 3D gesture model can be compared to one or several preset 3D geste models. It is possible to preset particular 3D gestures that are applicable to certain 3D input GUIs. Method 100 is available starting at 110 and ending at 112.

“At 112, based on the comparison at 110, it is determined whether the recognized 3D gesture is a preset 3D gesture. If it is, method 100 proceeds to 114. If the recognized gesture does not correspond to a preset 3D gesture, method 100 returns to 102.

“At 114, the operational position of the preset 3D gesture relative to a 3D input GUI is calculated. The 3D input GUI is a 3D interface output into the VR scenario that can be used to input information; each virtual input element of the 3D input GUI can indicate a piece or combination of input information. For example, the 3D input GUI can be a 3D virtual keypad (a “3D air keypad”), and the virtual input elements can be multiple virtual “keys”, each key representing an input character. The 3D virtual keypad is output into the VR environment so that a user wearing a VR terminal can quickly input information in the VR scenario: the user performs a particular 3D gesture associated with the VR scenario, and that gesture triggers a virtual input element (a key). From 114, method 100 proceeds to 116.

“At 116, a virtual input element associated with the 3D input GUI and corresponding to the calculated operational position is selected. From 116, method 100 proceeds to 118.

“At 118, input information corresponding to the selected virtual input element is read. After 118, method 100 stops.
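To make the flow of steps 102 through 118 concrete, the following is a minimal Python sketch of the same pipeline. It is an illustrative assumption, not the patent’s implementation: the threshold used to detect a “click”, the nearest-key lookup, and the names recognize_gesture, select_key, and run_method_100 are all invented for this example.

```python
# Minimal sketch of the method-100 flow (steps 102-118); all names are illustrative.
from dataclasses import dataclass

@dataclass
class VirtualKey:
    label: str                 # input information the key represents, e.g. "3"
    position: tuple            # (x, y, z) location of the key in the VR scenario

def recognize_gesture(samples):
    """Steps 104/106: stand-in for a real 3D gesture recognition algorithm.
    Here a short forward motion along the Z-axis is treated as a 'click'."""
    dz = samples[-1][2] - samples[0][2]
    return "click" if dz < -0.05 else None

def operational_position(samples):
    """Step 114: use the last tracked hand position as the operational position."""
    return samples[-1]

def select_key(position, keypad):
    """Step 116: pick the virtual input element nearest the operational position."""
    return min(keypad, key=lambda k: sum((a - b) ** 2 for a, b in zip(position, k.position)))

def run_method_100(samples, keypad, preset_gestures=("click",)):
    gesture = recognize_gesture(samples)                      # 104/106
    if gesture is None or gesture not in preset_gestures:     # 110/112
        return None                                           # flowchart returns to 102
    key = select_key(operational_position(samples), keypad)   # 114/116
    return key.label                                          # 118: read the input information

keypad = [VirtualKey(str(d), (0.1 * d, 0.0, -1.0)) for d in range(10)]
samples = [(0.31, 0.0, -0.80), (0.31, 0.0, -0.95)]            # hand pushing toward the '3' key
print(run_method_100(samples, keypad))                        # -> 3
```

The sketch mirrors the flowchart’s branches: an unrecognized or non-preset gesture ends the pass (the flowchart’s return to 102), while a recognized preset gesture yields the label of the nearest key.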

“In some cases, the technical solution involves three specific stages: 1) Creation of a VR scenario model, 2) Recognition of a 3D gesture and 3) Informational input.”

“1) Creation of a VR Scenario model”

In some cases, a modeling tool, such as UNITY, 3DSMAX, or PHOTOSHOP, can be used to create a VR scenario model. The modeling tool can be proprietary, commercial, open source, or a combination of these.

In some cases, the VR scenario model can be derived from a real-world scenario. For example, advance photography can be used to obtain a texture map of a material and a planar model of a real scene. Using the modeling tool, the texture can be processed and a 3D model of the scene created. In some implementations, the processed texture and the 3D model are imported into a UNITY3D platform, and image rendering is performed in multiple aspects (such as a sound effect, a GUI, a plug-in, and lighting in the UNITY3D platform). Interaction software code is then added to the VR scenario model. To allow rapid input of information in the VR scenario, a 3D input GUI is also modeled. Other input GUIs usable with a VR scenario, including input GUIs of different dimensions such as two-dimensional (2D) or four-dimensional (4D), are also considered to be within the scope of this disclosure.

“FIG. 2 illustrates an example 3D input GUI for informational input in a VR environment, according to an implementation of this disclosure. A 3D input GUI 202 (for example, a “3D air keypad”) is illustrated, which includes ten 3D virtual input elements (“keys”). For example, virtual input element 204 corresponds to the digit “3”. Within a field of view 206 of the user, the 3D input GUI 202 is displayed as part of a VR scenario presented in VR environment 210. The ten 3D virtual input elements are tiled in two rows.

“In the VR scenario, the user can select a particular virtual input element (for example, virtual input element 204) with a 3D gesture to complete input of digital information, such as a password. As shown in FIG. 2, the user performs a 3D gesture (a one-finger click/tap) at position 214 in the VR environment. Position 214 corresponds to the location of virtual input element 204 (the “3” key) in VR environment 210. Selecting virtual input element 204 completes the input of the digit “3”.
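One plausible way to map an operational position onto a key of the two-row air keypad is a simple grid lookup, sketched below. The two-rows-of-five arrangement, the keypad origin, and the key size are assumed values for illustration; the patent does not specify the layout geometry.

```python
# Sketch: map an operational position inside the 3D air keypad (FIG. 2) to one of the
# ten keys tiled in two rows of five. Bounds and layout are assumed values.
KEYPAD_ORIGIN = (-0.25, -0.10, -1.0)   # assumed lower-left corner of the keypad
KEY_WIDTH, KEY_HEIGHT = 0.10, 0.10
LAYOUT = [["1", "2", "3", "4", "5"],
          ["6", "7", "8", "9", "0"]]   # assumed arrangement of the ten virtual keys

def key_at(position):
    """Return the key label under `position`, or None if the tap misses the keypad."""
    x, y, _z = position
    col = int((x - KEYPAD_ORIGIN[0]) // KEY_WIDTH)
    row = int((y - KEYPAD_ORIGIN[1]) // KEY_HEIGHT)
    if 0 <= row < len(LAYOUT) and 0 <= col < len(LAYOUT[0]):
        return LAYOUT[len(LAYOUT) - 1 - row][col]   # higher y means the top row
    return None

print(key_at((-0.04, 0.05, -1.0)))   # a tap over the third key of the top row -> 3
```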

“FIG. 3 illustrates a different example 3D input GUI for informational input in a VR environment, according to an implementation of this disclosure. A 3D input GUI 302 (for example, a 3D “magic cube” keyboard) is illustrated, which includes twenty-seven 3D virtual input elements. For example, virtual input element 304 corresponds to the digit “3”. Within a field of view 306 of a user 308, the 3D input GUI 302 is displayed as part of a VR scenario presented in VR environment 310. In this VR scenario, the twenty-seven 3D virtual input elements are displayed as a cube. In some implementations, digits or other data (such as images or symbols) can be displayed in specific patterns or randomly.

“In the VR scenario, the user 308 can select a particular virtual input element (for example, virtual input element 304) with a 3D gesture (such as a one-finger click/tap) to complete input of digital information, such as a password. Other 3D gestures can be used to bring additional virtual input elements within reach. As shown in FIG. 3, the user performs a 3D gesture 312 (a two-finger turn) at position 314 in the VR environment. Position 314 corresponds to the location of virtual input element 304 (the “3” key) in VR environment 310. When 3D gesture 312 is performed, the cube of virtual input elements rotates in a clockwise direction, allowing other virtual input elements to be selected.
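A small sketch of how the cube rotation might be handled follows; the 3x3x3 nested-list representation, the key labels, and the quarter-turn-about-the-vertical-axis behavior are assumptions made only to illustrate the idea of the two-finger-turn gesture exposing other elements.

```python
# Sketch: the 27 virtual input elements of the "magic cube" keyboard (FIG. 3) held as a
# 3x3x3 nested list; a recognized two-finger-turn gesture rotates the cube one quarter
# turn about the vertical axis so other elements face the user. Labels are assumed.
def make_cube():
    labels = [str(n % 10) for n in range(27)]          # placeholder labels for the 27 keys
    it = iter(labels)
    return [[[next(it) for _x in range(3)] for _z in range(3)] for _y in range(3)]

def rotate_about_y(cube):
    """Rotate each horizontal layer 90 degrees clockwise (viewed from above)."""
    return [[list(row) for row in zip(*layer[::-1])] for layer in cube]

cube = make_cube()
front_before = [layer[0] for layer in cube]            # the face the user currently sees
cube = rotate_about_y(cube)                            # triggered by 3D gesture 312
front_after = [layer[0] for layer in cube]
print(front_before[0], "->", front_after[0])
```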

“It will be readily understood by those of ordinary skill in the art that the 3D input GUIs shown in FIGS. 2 and 3 are only examples of possible 3D input GUIs and do not limit the disclosure in any way. Other 3D input GUIs consistent with this disclosure are also considered to be within its scope.

After the VR scenario modeling (described previously) and the modeling of an associated 3D input GUI are complete, a user can use a VR client terminal to output the VR scenario model to a VR terminal connected to the VR client terminal.

In some cases, the VR client terminal initially outputs only the VR scenario model to the user in the VR environment. When the user needs to input information in the VR scenario, the VR client terminal can output the 3D input GUI to the VR environment, for example in response to the user triggering a preset virtual trigger element (such as a virtual button) in the VR environment.

As an example of using a virtual trigger element to display the 3D input GUI, consider a user taking part in a VR shopping experience in which many commodities are available to choose from. When the user chooses a “buy/pay” option to complete payment for a selected commodity, the user can trigger the corresponding virtual button in the VR environment with a preset 3D gesture. In response, the VR client terminal outputs the 3D input GUI to the VR scenario, for example as a virtual keyboard presented as a “3D checkout counter”, and the user can then trigger input by performing further 3D gestures on it.

“2) Recognition of 3D Gestures”

In some cases, the user wears the VR terminal to interact with the VR scenario without any auxiliary hardware devices, such as haptic gloves or joysticks; instead, the user forms 3D gestures with their hands held up in midair. The 3D gestures formed in the VR environment (similar to an augmented reality environment) are recorded, analyzed, and displayed. Other implementations allow the user to employ auxiliary hardware devices to control an operational focus, make selections, or perform other actions consistent with this disclosure.

A user can select a virtual element (for example, a button, an operable control, or a page) in a VR scenario by performing a 3D gesture. The disclosure is not limited to the 3D gestures illustrated; any gesture (3D or otherwise) consistent with this disclosure can be used to select a virtual element. A 3D gesture can be simple (such as a click/tap gesture) or more complex. 3D gestures can be static (such as a hand position) or dynamic (such as gripping, dragging, and rotating). 3D gestures can be customized to meet particular requirements or to differentiate between VR content providers.

“FIG. 4 illustrates example 3D gestures for use in a VR environment, according to an implementation of this disclosure.

“A GUI element 404 can be displayed in the VR environment to provide the user with information about the 3D gestures that are applicable to the VR scenario.

“Some implementations of VR client terminals can recognize, using sensing hardware coupled with the VR terminal and in conjunction with a preset gesture recognition algorithm, a user’s 3D gesture in VR scenarios. The VR client terminal can monitor the user’s hand movement in real time using the sensing hardware. The sensing hardware can collect and transmit hand displacement data (for example, position coordinates and a movement track) to the VR client terminal for analysis/processing.”

“In some cases, the sensing hardware can include one or more sensors that collect visual and non-visual data (such as image, infrared, ultraviolet, radar, and ultrasonic data). For example, a dual-camera binocular imaging solution, a multi-camera solution, a time-of-flight (TOF) solution, a structured-light solution, or a micro-radar solution can be used to recognize a 3D gesture.

For 3D gesture recognition using the dual-camera binocular imaging solution, the sensing hardware can include one or more image sensors. The VR client terminal tracks the user’s hand displacement with the dual-camera image sensors and collects hand displacement data for analysis and processing. Using the 3D gesture recognition algorithm, the VR client terminal processes the hand displacement data to calculate a rotation quantity and an offset of the hand relative to the VR scenario, and then uses the calculated rotation quantity and offset to create a final 3D gesture model.

For 3D gesture recognition using the TOF solution, an infrared sensor can be added to the sensing hardware; for the structured-light solution, a laser sensor; and for the micro-radar solution, a radar sensor. Typical implementations of 3D gesture recognition based on the TOF, structured-light, and micro-radar solutions are similar in principle to the dual-camera binocular imaging solution: in each of them, depth information is calculated (for example, as a rotation quantity and an offset based on the hand displacement data) to allow 3D gesture modeling.

After receiving the hand displacement data, the VR client terminal can analyze it using a preset 3D gesture recognition algorithm to determine whether a 3D gesture is recognized.

“In some implementations, a 3D gesture is recognized by calculating, from the hand displacement data, a rotation quantity and an offset of the user’s hand. The rotation quantity can refer to the deviation angles by which a preselected feature point of the user’s hand rotates with respect to the X/Y/Z axes of the VR scenario while the hand performs a particular 3D gesture; it serves as a rotation direction of the gesture. The offset can refer to the horizontal distances from the preselected feature point of the user’s hand to the X/Y/Z axes of the VR scenario while the hand performs the 3D gesture.

The VR client terminal can perform 3D modeling of the gesture based on the offset and rotation quantities. Recognition of the 3D gesture can be completed once a 3D model of the gesture (also known as a 3D gesture model) has been created.

“Combining the rotation quantity with the offset allows accurate determination of depth information associated with the 3D gesture performed by the user’s hand. The determined depth information can then be used to build the 3D model of the gesture.
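The disclosure does not give formulas for the rotation quantity or the offset. The sketch below shows one plausible way to derive them, plus a depth value, from the tracked coordinates of a single hand feature point; the angle and distance definitions, and the gesture_model dictionary, are assumptions made purely for illustration.

```python
# Sketch (assumed math): derive a rotation quantity, an offset, and depth for a tracked
# hand feature point, then bundle them into a simple "3D gesture model" dictionary.
import math

def rotation_quantity(p_start, p_end):
    """Deviation angles (degrees) of the feature point's motion about the X/Y/Z axes."""
    dx, dy, dz = (e - s for s, e in zip(p_start, p_end))
    return (math.degrees(math.atan2(dz, dy)),   # rotation about X
            math.degrees(math.atan2(dx, dz)),   # rotation about Y
            math.degrees(math.atan2(dy, dx)))   # rotation about Z

def offset(point):
    """Distances from the feature point to the X/Y/Z axes of the VR scenario."""
    x, y, z = point
    return (math.hypot(y, z), math.hypot(x, z), math.hypot(x, y))

def gesture_model(track):
    start, end = track[0], track[-1]
    return {"rotation": rotation_quantity(start, end),
            "offset": offset(end),
            "depth": end[2]}                     # Z coordinate used as depth information

track = [(0.30, 0.00, -0.80), (0.31, -0.01, -0.95)]
print(gesture_model(track))
```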

The VR client terminal can define preset 3D gestures for selecting particular elements within a VR scenario. After recognizing a 3D gesture, the VR client terminal can determine whether it is a preset 3D gesture by comparing the generated 3D gesture model with the 3D gesture models associated with the preset 3D gestures (for example, those illustrated in FIG. 4).

“When wearing the VR terminal, the user’s view of the real world is usually blocked, so the user cannot see the 3D gesture made with their own hand. This can affect the positioning and accuracy of the user’s 3D gestures. To mitigate these issues, the VR client terminal can output, into the VR scenario, an operational focus whose spatial displacement is synchronized with the user’s 3D gesture. Based on the hand displacement data collected by the sensing hardware that tracks the user’s hand in real time, the VR client terminal can calculate, in real time, the operational position of the user’s hand for display in the VR environment. This allows the user’s real-world 3D gestures to be displayed, and to move, synchronously in the VR environment.

“In some cases, the operational focus can be an icon, a pointer, or another graphical element (for example, a representation of a human hand). The operational focus can be animated to simulate the 3D gesture. For instance, the VR client terminal can generate a 3D model of the user’s gesture through 3D gesture modeling, based on the depth information of the user’s hand, and render a 3D animation from the associated hand parameters (such as position coordinates and displacement changes). The rendered 3D gesture animation can then be used as the operational focus, so that it moves in synchronization with the spatial displacement of the user’s hand.
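As a rough illustration of keeping the operational focus in sync with the tracked hand, the sketch below updates a focus object once per tracking sample. The OperationalFocus class, the scale/offset calibration in to_scenario_coords, and the sample values are all invented for this example.

```python
# Sketch: keep an operational-focus graphic in sync with the sensed hand position.
# The sensor-space to scenario-space mapping is an assumed calibration.
class OperationalFocus:
    """A stand-in for the rendered hand animation used as the operational focus."""
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)

    def move_to(self, position):
        self.position = position          # a real renderer would animate the hand model

def to_scenario_coords(sensor_point, scale=1.0, origin=(0.0, 0.0, -1.0)):
    return tuple(scale * p + o for p, o in zip(sensor_point, origin))

focus = OperationalFocus()
sensed_frames = [(0.10, 0.02, 0.20), (0.12, 0.02, 0.15), (0.13, 0.01, 0.05)]
for frame in sensed_frames:               # one update per tracking sample
    focus.move_to(to_scenario_coords(frame))
print(focus.position)
```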

Because the user can observe their 3D gestures within the VR environment, the VR client terminal can visually prompt the user (for example, with an animation) to adjust a 3D gesture so that it conforms to one of the preset 3D gestures. This allows the user to correct the formation of a 3D gesture and reduces the chance of an inadvertent or incorrect input.

“In some cases, the VR client terminal can store preset 3D gesture models to be matched against the 3D gesture model generated from the user’s gesture. After recognizing the user’s 3D gesture, the VR client terminal can select several feature points from both the generated 3D gesture model and a preset 3D gesture model and perform matching operations on the selected feature points to determine whether the user’s 3D gesture matches the preset 3D gesture. The VR client terminal can use a preset threshold value to decide whether the generated 3D gesture model and the preset 3D gesture model match: if the computed similarity meets or exceeds the preset threshold value, the two models are considered a match. In some cases, the VR client terminal executes a preset similarity algorithm to perform this comparison. The preset threshold value can be set to different values depending on the VR scenario; for example, actions that require fine-grained selections might use a high threshold value, while general rotation operations might use a lower one.
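The disclosure leaves the similarity algorithm unspecified. A minimal sketch of one possibility follows, assuming corresponding feature points in both models and a mean-distance score mapped into the 0–1 range; the similarity and matches_preset functions and the 0.8 threshold are illustrative only.

```python
# Sketch: compare feature points of a generated gesture model against a stored preset
# model and declare a match when similarity meets a preset threshold (assumed measure).
import math

def similarity(points_a, points_b):
    dists = [math.dist(a, b) for a, b in zip(points_a, points_b)]
    return 1.0 / (1.0 + sum(dists) / len(dists))      # 1.0 means identical point sets

def matches_preset(generated, preset, threshold=0.8):
    return similarity(generated, preset) >= threshold

preset_click = [(0.00, 0.00, 0.0), (0.00, 0.00, -0.10), (0.00, 0.00, -0.20)]
observed     = [(0.01, 0.00, 0.0), (0.00, 0.01, -0.10), (0.00, 0.00, -0.19)]
print(matches_preset(observed, preset_click))          # True for this near-identical track
```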

“3) Informational input”

After determining that the user’s 3D gesture is a preset 3D gesture, the VR client terminal can calculate the operational position of the gesture in the VR environment and select the virtual element that corresponds to that operational position. For example, the VR client terminal can calculate the operational position of the user’s hand in the VR environment based on the hand displacement data collected by the sensing hardware that tracks the user’s hand in real time. After calculating the operational position, the VR client terminal can search the VR environment for the corresponding virtual element and select it.

If the operational focus is already displayed in the VR environment, the VR client terminal can use the location of the operational focus as the operational position of the user’s hand, then search for the virtual element indicated by the operational focus and select it.

In some cases, to minimize operational failures caused by a user holding a 3D gesture for too long, the VR client terminal can establish a preset duration threshold that indicates whether a gesture is valid. After recognizing the 3D gesture, the VR client terminal calculates its overall duration (the time between the moment the user initiates the gesture and the moment the gesture is completed) and determines whether that duration is less than the preset duration threshold. If the duration is less than the threshold, the 3D gesture is considered valid; otherwise, it is considered invalid.

The preset duration threshold value can be adjusted to suit specific VR scenarios or VR-related operations. The preset duration threshold can be shortened to two seconds, for example, so that users can quickly form 3D gestures to select virtual elements in VR scenarios.
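A sketch of this validity check follows, using the two-second example from the text; the gesture_is_valid name and the use of start/end timestamps are assumptions for illustration.

```python
# Sketch: validate a 3D gesture against a preset duration threshold, using the time
# between the first and last tracking samples of the gesture.
PRESET_DURATION_THRESHOLD = 2.0   # seconds, the example value from the text

def gesture_is_valid(start_timestamp, end_timestamp,
                     threshold=PRESET_DURATION_THRESHOLD):
    """A gesture counts as valid only if it completes within the threshold."""
    return (end_timestamp - start_timestamp) < threshold

print(gesture_is_valid(10.0, 11.4))   # True: completed in 1.4 s
print(gesture_is_valid(10.0, 12.6))   # False: took longer than 2 s
```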

In some cases, it can be assumed that a preset virtual trigger element is provided in advance in the VR scenario, allowing the user to trigger the VR client terminal to output the 3D input GUI. When the user needs to input information in the VR scenario, the user can move the operational focus to the location of the virtual trigger element and keep it hovering there to trigger the element. The user can control the movement of the operational focus with an external device (such as a joystick or a handle), with a gravity sensor pre-installed on the VR terminal, or with a corresponding gravity-sensing device worn on the hand.

Alternatively, the user can perform a preset 3D gesture with a hand at the position indicated by the operational focus. When the VR client terminal recognizes that the user’s 3D gesture is a preset 3D gesture and the operational focus is hovering over the virtual trigger element, the virtual element indicated by the operational focus is taken to be the virtual trigger element to actuate. The VR client terminal then selects the virtual trigger element and triggers the VR scenario to output the 3D input GUI.

After the VR terminal displays the 3D input GUI, the user can input information through the virtual input elements associated with it. For example, the user can keep the operational focus hovering over a particular virtual input element and then perform a 3D gesture with the hand to select that element. In some implementations, the preset 3D gesture used to select the virtual trigger element can differ from the preset 3D gesture used within the 3D input GUI.

After confirming that the user’s 3D gesture is a preset gesture, the VR client terminal calculates the operational position of the 3D gesture relative to the VR scenario, searches the 3D input GUI for the virtual input element that corresponds to that operational position, and selects the element. In some cases, the VR client terminal highlights the selected virtual input element within the 3D input GUI, for example by flashing it or rendering it in a different color.

After selecting the virtual input element in the 3D input GUI, the VR client terminal can read the input information associated with it and use that information for informational input. To input additional information, the user can continue selecting virtual input elements (for example, to enter a sequence of alphanumeric characters that forms a password). Once a payment password has been entered and verified by a payment server, the purchase of a particular commodity can be completed.
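A short sketch of accumulating successively read key values into a password and handing it to a verification step is shown below. The verify_with_payment_server stub is purely hypothetical; the disclosure does not describe the payment server’s interface.

```python
# Sketch: accumulate input information read from successively selected virtual keys into
# a payment password and pass it to a (hypothetical) payment-server check.
def verify_with_payment_server(password):
    return password == "3141"          # placeholder check, not a real payment API

def collect_password(selected_key_labels, length=4):
    buffer = []
    for label in selected_key_labels:  # each label is read from a selected element
        buffer.append(label)
        if len(buffer) == length:
            break
    return "".join(buffer)

password = collect_password(["3", "1", "4", "1"])
print("payment approved" if verify_with_payment_server(password) else "payment rejected")
```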

“FIG. 5 is a block diagram illustrating an example of computing modules 500 used for input operations of a VR client terminal, according to an implementation of the present disclosure. The VR client terminal outputs a VR scenario to a VR terminal worn by the user, and a 3D input GUI is part of that VR scenario. Some implementations include a preset virtual trigger element that triggers the output of the 3D input GUI. The 3D input GUI can be a 3D virtual keypad or a 3D virtual keyboard, and in some cases the virtual input element is a virtual key associated with that keyboard. An operational focus corresponds to the 3D gesture and is displaced in the VR scenario in synchronization with it; some implementations display the operational focus as an animation that simulates the 3D gesture.

“A recognition module 501 is configured to recognize a user’s 3D gesture in the VR scenario. The recognition module 501 can be further configured to: 1) track the user’s hand displacement using the preset sensing hardware; 2) collect the user’s hand displacement data; 3) calculate a rotation quantity and an offset of the user’s hand with respect to the X/Y/Z axes of the VR scenario based on the hand displacement data; and 4) perform 3D modeling based on the rotation quantity and the offset to obtain the corresponding 3D gesture model. When a user’s 3D gesture is recognized, a judgment module 502 determines whether the gesture is a preset 3D gesture. If the recognized 3D gesture is a preset 3D gesture, a calculation module 503 calculates the operational position of the 3D gesture corresponding to the 3D input GUI. The calculation module 503 can be further configured to: 1) calculate the duration of the 3D gesture before computing the operational position; 2) determine whether the duration is less than a preset threshold; and 3) perform the calculation of the operational position only if the duration is less than the preset threshold. An input module 504 selects a virtual input element that is associated with the 3D input GUI and corresponds to the calculated operational position, and reads the input information corresponding to the selected virtual input element. The input module 504 can also be configured to highlight the selected virtual input element in the 3D input GUI. An output module 505 outputs the 3D input GUI at the position where the virtual trigger element is located.
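One way the module structure might be organized in code is sketched below, with one class per described module. The class and method names, the placeholder bodies, and the gui/scenario calls (element_at, render_gui_at) are hypothetical; only the division of responsibilities follows the description above.

```python
# Sketch: classes mirroring the recognition (501), judgment (502), calculation (503),
# input (504), and output (505) modules. Bodies are placeholders, not the patent's code.
class RecognitionModule:                      # 501
    def recognize(self, displacement_data):
        """Track the hand, compute rotation quantity/offset, return a 3D gesture model."""
        raise NotImplementedError

class JudgmentModule:                         # 502
    def is_preset(self, gesture_model, preset_models):
        return any(gesture_model == m for m in preset_models)   # placeholder comparison

class CalculationModule:                      # 503
    def __init__(self, duration_threshold=2.0):
        self.duration_threshold = duration_threshold

    def operational_position(self, gesture_model, duration):
        if duration >= self.duration_threshold:                 # gesture treated as invalid
            return None
        return gesture_model.get("position")

class InputModule:                            # 504
    def select_and_read(self, gui, position):
        element = gui.element_at(position)                      # hypothetical lookup
        element.highlight()
        return element.value

class OutputModule:                           # 505
    def show_input_gui(self, scenario, trigger_element):
        scenario.render_gui_at(trigger_element.position)        # hypothetical scenario API
```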

“FIG. 6 is a block diagram illustrating an example hardware architecture of the VR client terminal of FIG. 5, according to an implementation of this disclosure. As illustrated in FIG. 6, the VR client terminal includes a central processing unit (CPU) 602, a memory 604, a nonvolatile storage device 606, a network interface 610, and an internal bus 612. The hardware illustrated in FIG. 6 may include, or be included in, the computing system illustrated in FIG. 7; some of the components shown in FIGS. 6 and 7 can be considered equivalent (for instance, CPU 602/processor 705 and memory 604/memory 707), and some components may not be used in particular implementations. In some implementations, the computing modules 500 used for input operations of the VR client terminal are loaded into memory 604 and executed by CPU 602 as a combined software-hardware logic computing system.

After considering the specification and practicing the disclosed subject matter, those of ordinary skill in the art will readily conceive of other implementations of this disclosure. The present disclosure is intended to cover any variations, uses, or adaptations of the subject matter that follow its general principles, including variations that rely on common knowledge or customary technical means in the art not disclosed here. The examples are provided to aid understanding of the concepts and do not limit the scope of the disclosure.

“It is to be understood that the disclosure is not limited to the described implementations or to the illustrations in the accompanying drawings. Various modifications and changes can be made without departing from the scope of this application, which is limited only by the appended claims. Modifications, equivalent replacements, or improvements made in accordance with the spirit and principles described here fall within the protection scope of this disclosure.

“FIG. 7 is a block diagram illustrating an example of a computer-implemented system 700 used to provide computational functionalities associated with the described algorithms, methods, and functions, according to an implementation of the present disclosure. The illustrated system 700 includes a computer 702 and a network 730.

The illustrated computer 702 can include any computing device, such as a server, a desktop computer, a laptop/notebook computer, a wireless data port, a smart device, a personal data assistant (PDA), a tablet computer, or one or more processors within these devices, or another computing device, including physical or virtual instances of the computing device or a combination of physical and virtual instances. The computer 702 can include an input device, such as a keyboard, keypad, or touch screen, and an output device that conveys information associated with the operation of the computer 702, including digital data, visual or audio information, or a combination of these.

“The computer 702 can serve as a client, a network component, a server, a database or another persistency, or any combination of these roles in a distributed computing environment. The illustrated computer 702 is communicably coupled with a network 730. In some implementations, one or more components of the computer 702 can be configured to operate within an environment, or a combination of environments, including cloud-computing-based, local, and global environments.

“At a high level, the computer 702 is an electronic computing device operable to receive, transmit, process, store, or manage data and information associated with the described subject matter. In some implementations, the computer 702 can also include, or be communicably coupled with, a server, such as an application server, a web server, a caching server, or a streaming data server, or a combination of servers.

The computer 702 can receive requests over network 730 (for example, from a client software application executing on another computer 702) and respond to them using a software application or a combination of software applications. Requests can also be sent to the computer 702 from internal users (for example, from a command console or through another internal access method), from external or third parties, or from other entities, individuals, or systems.

“Each of the components of the computer 702 can communicate using a system bus 703. In some implementations, any or all of the components of the computer 702, including software and hardware, can interface over the system bus 703 using an application programming interface (API) 712, a service layer 713, or a combination of the two. The API 712 can include specifications for routines, data structures, and object classes. The API 712 can be computer-language dependent or independent and can refer to a complete interface, a single function, or a set of APIs. The service layer 713 provides software services to the computer 702 and to other components (whether illustrated or not) that are communicably coupled to the computer 702. The functionality of the computer 702 can be accessible to all service consumers through the service layer 713, which provides reusable, defined functionalities through a defined interface. For example, the interface can be software written in JAVA or C++, or a combination of computing languages, providing data in extensible markup language (XML), another format, or a combination of formats. While illustrated as integrated components of the computer 702, alternative implementations can present the API 712 or the service layer 713 as stand-alone components. Moreover, any or all parts of the API 712 or the service layer 713 can be implemented as a child or a sub-module of another software module, enterprise application, or hardware module without departing from the scope of the present disclosure.

“The computer 702 includes an interface 704. Although illustrated as a single interface 704 in FIG. 7, two or more interfaces 704 can be used according to particular needs, desires, or particular implementations of the computer 702. The interface 704 is used by the computer 702 to communicate with other computing systems (whether illustrated or not) that are communicatively linked to the network 730 in a distributed environment. Generally, the interface 704 is operable to communicate with the network 730 and can include logic encoded in software, hardware, or a combination of the two. More specifically, the interface 704 can include software supporting one or more communication protocols, so that the interface 704 or its hardware is operable to communicate physical signals both within and outside of the illustrated computer 702.

“The computer 702 includes a processor 705. Although illustrated as a single processor 705 in FIG. 7, two or more processors can be used according to particular needs, desires, or particular implementations of the computer 702. Generally, the processor 705 executes instructions and manipulates data to perform the operations of the computer 702 and any algorithms, methods, functions, flows, and procedures described in the present disclosure.

“The computer 702 also includes a database 706 that can hold data for the computer 702, for another component communicatively linked to the network 730 (whether illustrated or not), or for a combination of the two. The database 706 can be an in-memory database, a conventional database, or another type of database storing data consistent with the present disclosure; depending on particular needs, desires, or particular implementations of the computer 702 and the described functionality, it can also be a combination of two or more database types (for example, a hybrid in-memory and conventional database). Although illustrated as a single database 706 in FIG. 7, two or more databases can be used according to particular needs and desires. While database 706 is illustrated as an integral component of the computer 702, alternative implementations can locate the database 706 external to the computer 702.

“The computer 702 also includes a memory 707 that can hold data for the computer 702, for another component or components communicatively linked to the network 730 (whether illustrated or not), or for a combination of the two. The memory 707 can store any data consistent with the present disclosure. Although illustrated as a single memory 707 in FIG. 7, two or more memories 707 can be used according to particular needs, desires, or particular implementations of the computer 702 and the described functionality. While memory 707 is illustrated as an integral component of the computer 702, alternative implementations can locate the memory 707 external to the computer 702.

“The application 708 is an algorithmic software engine providing functionality according to particular needs, desires, or particular implementations of the computer 702, particularly with respect to the functionality described in this disclosure. For example, the application 708 can serve as one or more modules, components, or applications. Further, although illustrated as a single application 708, it can be implemented as multiple applications 708 on the computer 702. In addition, although illustrated as integral to the computer 702, in alternative implementations the application 708 can be external to the computer 702.

“The computer 702 can also include a power supply 714. The power supply 714 can include a rechargeable or non-rechargeable battery that can be configured as either user-replaceable or non-user-replaceable. The power supply 714 can include power-conversion and power-management circuits, including recharging, standby, and other power-management functionality. The power supply 714 can also include a power plug that allows the computer 702 to be plugged into a wall socket or another power source.

“There can be any number of computers 702 associated with, or external to, a computer system containing computer 702, with each computer 702 communicating over network 730. Further, the terms “client”, “user”, or other appropriate terminology can be used interchangeably, as appropriate, without departing from the scope of the present disclosure. Moreover, this disclosure contemplates that many users can use one computer 702, or that one user can use multiple computers 702.

“Described implementations can include one or more features alone or in combination.”

“For example, in a first implementation, a computer-implemented method includes: receiving hand displacement data from sensing hardware; analyzing the received hand displacement data using a three-dimensional (3D) gesture recognition algorithm; recognizing the received hand displacement data as a 3D gesture; calculating an operational position of the 3D gesture relative to a 3D input graphical user interface (GUI); selecting a virtual input element associated with the 3D input GUI and corresponding to the calculated operational position; and reading input information corresponding to the selected virtual input element.

“The foregoing and other described implementations can each, optionally, include one or more of the following features:

“A first feature, combinable with any of the following features, where the sensing hardware includes one or more of an image sensor, a laser sensor, or a radar sensor.

“A second feature, combinable with any of the previous or following features, where the sensing hardware includes one or more of an image sensor, a laser sensor, or a radar sensor.

Click here to view the patent on Google Patents.

How to Search for Patents

A patent search is the first step to getting your patent. You can do a Google patent search or a USPTO search. “Patent pending” is the term for a product covered by a pending patent application; you can search Public PAIR to find the application. After the patent office approves your application, you can do a patent number lookup to locate the issued patent, and your product is then patented. You can also use the USPTO search engine (see below for details) or get help from a patent lawyer. Patents in the United States are granted by the United States Patent and Trademark Office (USPTO), which also reviews trademark applications.

Are you interested in similar patents? These are the steps to follow:

1. Brainstorm terms to describe your invention, based on its purpose, composition, or use.

Write down a brief, but precise description of the invention. Don’t use generic terms such as “device”, “process,” or “system”. Consider synonyms for the terms you chose initially. Next, take note of important technical terms as well as keywords.

Use the questions below to help you identify keywords or concepts.

  • What is the purpose of the invention? Is it a utilitarian device or an ornamental design?
  • Is the invention a way to make something or perform a function, or is it a product?
  • What is the invention made of? What is its physical composition?
  • What is the invention used for?
  • What technical terms and keywords describe the invention’s nature? A technical dictionary can help you locate the right terms.

2. Use these terms to search for relevant Cooperative Patent Classifications with the Classification Text Search tool. If you are unable to find the right classification for your invention, scan through the classification’s class schemas (class schedules) and try again. If you don’t get any results from the Classification Text Search, consider substituting synonyms for the words you used to describe your invention.

3. Check the CPC Classification Definition to confirm the CPC classification you found. If the selected classification title has a blue box with a “D” to its left, the hyperlink will take you to the CPC classification definition. CPC classification definitions help you determine the applicable classification’s scope so that you can choose the most relevant one, and they may also include search tips or other suggestions helpful for further research.

4. The Patents Full-Text Database and the Image Database allow you to retrieve patent documents that include the CPC classification. By focusing on the abstracts and representative drawings, you can narrow down your search for the most relevant patent publications.

5. Review this selection of patent publications in depth for similarities to your invention, paying close attention to the specification and the claims. References cited by the applicant and the patent examiner can lead you to additional relevant patents.

6. Retrieve published patent applications that match the CPC classification you chose in Step 3. Use the same strategy as in Step 4, reviewing the abstracts and representative drawings on each page to narrow the results to the most relevant applications. Then examine the remaining published applications carefully, paying special attention to the claims and the drawings.

7. You can find additional US patent publications by keyword searching in the AppFT or PatFT databases, by classification searching of non-US patents as described below, and by using web search engines to find non-patent literature disclosures about inventions. Here are some examples:

  • Add keywords to your search. Keyword searches may turn up documents that are not well-categorized or have missed classifications during Step 2. For example, US patent examiners often supplement their classification searches with keyword searches. Think about the use of technical engineering terminology rather than everyday words.
  • Search for foreign patents using the CPC classification. Re-run the search in international patent office search engines such as Espacenet, the European Patent Office’s worldwide database of over 130 million patent publications. Other national patent offices provide similar databases.
  • Search non-patent literature. Inventions can be made public in many non-patent publications. It is recommended that you search journals, books, websites, technical catalogs, conference proceedings, and other print and electronic publications.

You can hire a registered patent attorney to review your search. A preliminary search helps you prepare to discuss your invention and related prior art with the attorney, so that less of the attorney’s time (and your money) is spent on patenting basics.

Download patent guide file – Click here