Invented by Geoffrey Wedig, Sean Michael Comer, and James Jonathan Bancroft; assigned to Magic Leap, Inc.

The market for skeletal systems for animating virtual avatars has grown rapidly in recent years. With the rise of virtual reality and augmented reality technologies, demand for realistic, lifelike avatars has increased significantly. Skeletal systems are an essential component of these avatars, providing the framework needed for realistic movement and animation. A skeletal system is essentially a set of interconnected bones that form a virtual skeleton for an avatar. These bones are rigged with controls that allow animators to manipulate them and create realistic movements. The quality of the skeletal system is critical in determining the quality of the animation and the overall realism of the avatar.

The market is driven primarily by the gaming and entertainment industries. Video game developers and animation studios are constantly looking for ways to improve the realism of their characters and create more immersive experiences for their audiences, and they are investing heavily in advanced skeletal systems that can deliver more realistic movement and animation.

Beyond gaming and entertainment, the market is also growing in areas such as education, healthcare, and military training. Virtual and augmented reality technologies are being used to create immersive training simulations for a variety of industries, and realistic avatars are an essential component of these simulations.

The market is highly competitive, with many companies offering a wide range of products and services. Key players include Autodesk, Unity Technologies, Mixamo, and Reallusion, which offer skeletal systems ranging from basic rigs for simple animations to advanced systems for complex character animation.

One key trend is the development of AI-powered systems that can create realistic animations automatically. These systems use machine learning algorithms to analyze motion-capture data and generate realistic animations without manual input from animators. The technology is still in its early stages, but it has the potential to transform the animation industry by making realistic avatars easier and faster to create.

In short, the market for skeletal systems for animating virtual avatars is growing rapidly, driven by demand for more realistic and immersive experiences. Gaming and entertainment remain the primary drivers, with education, healthcare, and military training growing alongside them. With the arrival of AI-powered systems, ever more advanced and realistic avatars can be expected in the years to come.

The Magic Leap, Inc. invention works as follows:

Skinning parameters can include joint transforms and mesh weights for a skeleton. Systems and methods are provided for determining skinning parameters using an optimization process subject to constraints based on human-understandable or anatomically motivated relationships among skeletal joints. The input to the optimization can be a high-order skeleton, and the constraints applied can change dynamically during the optimization. The skinning parameters are useful in linear blend skinning (LBS) applications in augmented reality.

Background for Skeletal Systems for Animating Virtual Avatars

Modern computing and display technology has facilitated the development of systems for so-called "virtual reality," "augmented reality," and "mixed reality" experiences, in which digitally reproduced images are presented to a user in a manner such that they seem to be, or may be perceived as, real. A virtual reality (VR) scenario typically presents computer-generated images without transparency to real-world visual input. In an augmented reality (AR) scenario, virtual image data is presented as a supplement to the visualization of the real world around the user. Mixed reality (MR), a form of AR, is a scenario in which real and virtual objects can co-exist and interact in real time. The systems and methods described herein address a variety of challenges related to VR, AR, and MR technology.

Skinning parameters can include joint transforms and mesh weights for a skeleton. Systems and methods are provided for determining skinning parameters using an optimization process subject to constraints based on human-understandable or anatomically motivated relationships among skeletal joints. The input to the optimization can be a high-order skeleton, and the constraints applied can change dynamically during the optimization. Skinning parameters are useful in linear blend skinning (LBS) applications such as augmented reality, games, movies, or visual effects.

Examples of systems and methods are provided for improving or optimizing mesh weights (e.g., of skin) and joint transforms in the animation of a digital avatar. Joint transforms may include multiple degrees of freedom (e.g., six: three for rotation and three for translation). The systems and methods can accept target pose data from various input sources (e.g., photogrammetric scans, artist-driven sculpts, simulations, or biometrically derived models) and generate high-fidelity transforms and weights for use with linear blend skinning. By solving a constrained optimization problem, the systems and methods can calculate both the weights and the joint transforms (e.g., rotations and translations). The optimization can be restricted to only those solutions that provide high fidelity under generalized conditions, including novel animation performances and real-time captured animations. This can reduce the number of sample poses (often costly and time-consuming to produce) or joint transforms needed to meet quality metrics.
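To make the idea of a constrained skinning solve concrete, here is a minimal sketch in Python (NumPy/SciPy) that fits a single vertex's weights to a set of example poses by bounded least squares. The function name, array shapes, and the penalty-row trick for the sum-to-one (affinity) constraint are illustrative assumptions, not Magic Leap's actual method; a production solver would also optimize the joint transforms themselves.

```python
import numpy as np
from scipy.optimize import lsq_linear

def solve_vertex_weights(rest_vertex, target_positions, joint_transforms,
                         max_influences=4):
    """Fit one vertex's skinning weights to P example poses.

    rest_vertex:       (3,) bind-pose position.
    target_positions:  (P, 3) where the vertex should land in each pose.
    joint_transforms:  (P, J, 4, 4) per-pose joint transforms (the inverse
                       bind matrices are assumed already folded in).
    """
    P, J = joint_transforms.shape[:2]
    rest_h = np.append(rest_vertex, 1.0)                 # homogeneous coords
    # Column j holds the vertex as moved by joint j alone, stacked over poses.
    A = np.stack([(joint_transforms[:, j] @ rest_h)[:, :3].ravel()
                  for j in range(J)], axis=1)            # (3P, J)
    b = target_positions.ravel()                         # (3P,)
    # Bounds give non-negativity; the affinity (sum-to-one) constraint is
    # approximated by appending one heavily weighted row to the system.
    A_c = np.vstack([A, 1e3 * np.ones((1, J))])
    b_c = np.append(b, 1e3)
    w = lsq_linear(A_c, b_c, bounds=(0.0, 1.0)).x
    # Sparsity constraint: keep only the strongest influences, renormalize.
    keep = np.argsort(w)[-max_influences:]
    w_sparse = np.zeros_like(w)
    w_sparse[keep] = w[keep]
    return w_sparse / w_sparse.sum()
```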

The systems and methods described are not limited to augmented, mixed or virtual reality. They can also be used for gaming, movies or visual effects.

Details of one or more implementations are set forth in the accompanying drawings and the description. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims. Neither this summary nor the following detailed description purports to define or limit the scope of the inventive subject matter.

Overview

A virtual avatar is a representation of an actual or fictional person, creature, or object in an AR/VR/MR environment. In a telepresence scenario, where two AR/VR/MR users interact with each other, a viewer may perceive the avatar of the other user within the viewer's surroundings, creating a tangible feeling of the other person's presence. Users can also interact and work together through avatars in a shared virtual environment. For example, a student in an online classroom can interact with the avatars of other students and teachers, and a player in an AR/VR/MR game can interact with the avatars of other players.

Embodiments of the disclosed systems and methods can provide improved animation and more realistic interactions between the user of a wearable device and the avatars in the user's surroundings. The examples in this disclosure describe animating an avatar shaped like a person, but the same techniques can be applied to avatars shaped like animals, imaginary creatures, objects, and so on.

A wearable device may include a display that presents an interactive VR/AR/MR world containing a high-fidelity digital avatar. Creating such an avatar can take a specialized team many weeks or months and can rely on a large number of high-quality digitized photographs of a human model. Embodiments of the disclosed technology can create high-quality or high-fidelity avatars of any character, human or animal, while being faster and less resource intensive and maintaining accuracy.

As an illustration, a digital representation of a person (or of anything else that can deform, such as clothing or hair) may include a mesh laid over a skeleton (to represent, for example, the outer surface, which could be skin, clothes, etc.). A bone can be linked to other bones at joints, and mesh vertices can be attached to it so that when the bone moves, the attached vertices move with it. Each vertex may have multiple bone assignments, with the vertex's motion interpolated from the combined movements of those bones. This initial assignment, which can be called a "skin cluster," captures the gross movements. It should be noted that these bones and skeletons are digital constructs and may not correspond to the actual bones of the human body. A second step may then be needed to capture the finer movements of the skin of the human being modeled (whose virtual representation is sometimes called an avatar herein); this step is computed as a difference from the gross movement.
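The toy sketch below (Python with NumPy; the joint names, container shapes, and function are invented for illustration, not taken from the patent) shows the two-step structure just described: a skin cluster in which each vertex carries several weighted bone assignments for the gross movement, plus a per-vertex corrective delta for the finer skin movement.

```python
import numpy as np

# Toy skin cluster: each vertex id maps to its weighted bone assignments.
# Weights for a vertex sum to 1; an elbow-region vertex is shown here.
skin_cluster = {
    0: [("upper_arm", 0.3), ("forearm", 0.7)],
}

def posed_vertex(vid, rest_positions, bone_transforms, fine_deltas):
    """Gross movement from the skin cluster, plus a fine corrective delta.

    rest_positions:  {vid: (3,) bind-pose position}
    bone_transforms: {bone name: (4, 4) current homogeneous transform}
    fine_deltas:     {vid: (3,) per-pose corrective offset for skin detail}
    """
    rest_h = np.append(rest_positions[vid], 1.0)   # homogeneous coordinates
    gross = sum(w * (bone_transforms[bone] @ rest_h)[:3]
                for bone, w in skin_cluster[vid])
    return gross + fine_deltas[vid]
```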

To animate an avatar, wearable devices can use linear blend skinning (LBS) techniques, in which the vertices are transformed according to a linearly weighted sum of the rotations and translations of the joints of the underlying skeleton. Calculating the weights and the joint rotations and translations (collectively known as skinning parameters) for a number of avatar poses is a difficult problem.
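The LBS deformation itself is standard and compact. The vectorized sketch below (Python with NumPy; the array layout is an assumption of this example) computes every skinned vertex position as the weighted blend of the vertex transformed by each joint, assuming each joint transform already folds in its inverse bind-pose matrix.

```python
import numpy as np

def linear_blend_skinning(rest_vertices, weights, joint_transforms):
    """Deform a mesh with linear blend skinning (LBS).

    rest_vertices:    (V, 3) vertex positions in the rest (bind) pose.
    weights:          (V, J) skinning weights; each row is non-negative
                      and sums to 1.
    joint_transforms: (J, 4, 4) homogeneous transforms taking each joint
                      from its bind pose to its current pose.
    """
    V = rest_vertices.shape[0]
    rest_h = np.concatenate([rest_vertices, np.ones((V, 1))], axis=1)
    # Every vertex transformed by every joint: (J, V, 4).
    per_joint = np.einsum('jab,vb->jva', joint_transforms, rest_h)
    # Blend the per-joint results with the skinning weights: (V, 4).
    blended = np.einsum('vj,jva->va', weights, per_joint)
    return blended[:, :3]
```

Blending transformed positions linearly is what makes LBS fast enough for real-time use, and also what produces its well-known artifacts (e.g., volume loss at bent joints), which well-chosen skinning parameters help mitigate.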

Human animators can assign mesh vertices to bones by setting weights for each vertex. These assignments reflect each animator's unique, subjective artistic vision; there are as many manual skinning styles as there are animators. Manual skinning can be a time-consuming and laborious process, and it is not well suited to producing skins for real-time applications such as mixed, augmented, or virtual reality.

The present application describes systems and methods for computing skinning parameters from various sources, including photogrammetric scans, artist-driven models, simulations, or biometrically derived poses. The systems and methods can use constrained optimization techniques, seeded with initial target poses, high-order skeletons, and skin clusters, to generate skinning parameters under human-understandable and biologically motivated constraints. These systems and methods are particularly useful for real-time applications and can reduce the need for animator input. They are based on objective rules that can be applied algorithmically to generate skinning parameters, which differs from the way human animators perform skinning using their subjective artistic vision.
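As one concrete, hedged example of an anatomically motivated constraint of the kind described above, the sketch below restricts a joint's rotation to pure twist about its parent bone's axis, a common restriction for forearm twist joints. The function and the use of SciPy's Rotation class are illustrative assumptions, not the patent's specific constraints.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def constrain_to_twist(rotation, bone_axis):
    """Project a joint rotation onto pure twist about a unit bone axis.

    Uses the standard swing-twist decomposition of a quaternion and keeps
    only the twist component. 'rotation' is a scipy Rotation; 'bone_axis'
    is a unit 3-vector. The degenerate pure-swing case (zero norm) is
    ignored in this sketch.
    """
    q = rotation.as_quat()                        # (x, y, z, w)
    proj = np.dot(q[:3], bone_axis) * bone_axis   # vector part along axis
    twist = np.append(proj, q[3])
    return Rotation.from_quat(twist / np.linalg.norm(twist))
```

During an optimization, such a projection can be applied after each transform update, and the set of joints it applies to can change as the solve proceeds, consistent with the dynamically changing constraints mentioned above.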

A variety of implementations of systems and methods for calculating skinning parameters are described below.

Examples of 3D Display of a Wearable Device

A wearable system, also referred to herein as an AR system, can be configured to present 2D or 3D images to a user. The images may be still images, frames of a video, or a video, alone or in combination. At least a portion of the wearable system can be implemented on a wearable device that can present a VR, AR, or MR environment for user interaction. The wearable device can be referred to interchangeably as an AR device (ARD). For the purposes of this disclosure, the term "AR" is used interchangeably with the term "MR."

FIG. 1 depicts an illustration of a mixed-reality scenario with certain virtual objects and certain physical objects viewed by a person. In FIG. 1, an MR scene 100 is shown in which a user of MR technology sees a real-world, park-like setting 110 featuring people, trees, and buildings, along with a concrete platform 120. In addition to these physical items, the user of the MR technology also perceives a robot statue 130 standing upon the real-world platform 120 and a cartoon-like avatar character 140 flying by, which appears to be the personification of a bumblebee.

For 3D displays, it may be beneficial for each displayed point to produce an accommodative response corresponding to its virtual depth. If the accommodative response to a display point does not correspond to that point's virtual depth as determined by the binocular depth cues of convergence and stereopsis, the human eye can experience an accommodation conflict. This can result in unstable imaging, headaches, and, in the absence of accommodation information, an almost complete lack of perceived surface depth.

Display systems that present images corresponding to a plurality of depth planes can offer VR, AR, and MR experiences. The images may differ for each depth plane, providing the viewer with depth cues based on differences in image features or on the accommodation required to bring those features into focus. As discussed elsewhere herein, such depth cues provide credible perceptions of depth.

FIG. 2 shows an example of a wearable system 200 that can be configured to provide an AR/VR/MR environment. The wearable system 200 is also referred to as the AR system 200. It includes a display 220, along with various mechanical and electronic modules that support the display. The display 220 may be coupled to a frame 230, which can be worn by a user, wearer, or viewer 210 and which positions the display 220 in front of the eyes of the user 210. The display 220 presents AR/VR/MR content to the user and may be a head-mounted display (HMD) worn on the user's head.

In some embodiments, a speaker (not shown) is positioned adjacent to the ear canal of the user. The display 220 may include an audio sensor 232 (e.g., a microphone) that can detect an audio stream and capture ambient sound. In certain embodiments, other audio sensors (not shown) are positioned to provide stereo sound reception, which is useful for determining the location of a sound source. The wearable system can perform voice or speech recognition on the audio stream.
