Metaverse – Philipp K. Lang, Individual

Abstract for “Augmented Reality Guided Fitting, Sizing, Trialing and Balancing of Virtual Implants on the Physical Joint of a Patient for Manual and Robot-Assisted Joint Replacement”

“Devices, systems and methods for performing a surgical step or procedure with visual guidance using an optical head mounted display are disclosed.”

Background for “Augmented Reality Guided Fitting, Sizing, Trialing and Balancing of Virtual Implants on the Physical Joint of a Patient for Manual and Robot-Assisted Joint Replacement”

“Computer assisted surgery, e.g. surgical navigation or robotic surgery, can make use of pre-operative imaging. An external monitor can display the imaging studies and the patient’s anatomy, and the information on the monitor can also be used to register landmarks. Hand-eye coordination can prove difficult for the surgeon because the surgical field is at a different location: the surgeon has to view the information on the monitor in a coordinate system different from that of the surgical field.

“Aspects include the ability to simultaneously visualize live data of the patient, such as the patient’s spine or joint, and virtual data, such as a digital representation of the patient’s spine or joint, virtual cuts, and/or virtual surgical guides including cut blocks and drill guides, through an optical head mounted display (OHMD). Some embodiments register the surgical site, including the live data of the patient and the OHMD, along with the virtual data in a common coordinate system. Some embodiments superimpose the virtual data onto the patient’s live data and align them with it. The OHMD is a see-through, mixed reality head mounted display that blends virtual data with live data, so it allows the surgeon to view the live data of the patient. The surgeon can view the surgical field while simultaneously observing virtual data, such as the patient’s anatomy in a predetermined position or orientation, and/or virtual surgical instruments and implants.

“Aspects describe novel devices that provide visual guidance through an optical head mounted display to perform a surgical step. The display can show virtual representations of one or more of: a virtual surgical tool, virtual surgical instrument, virtual implant component, virtual implant or virtual device; a predetermined start point, start position, start orientation or alignment; predetermined intermediate point(s), position(s), orientation(s) or alignment(s); a predetermined end point, end position, end orientation or alignment; a predetermined path, plan or cut plane; a predetermined contour, outline, cross-section or surface features, or projections thereof; a predetermined depth marker, angle or rotation marker; and a predetermined axis, e.g. a rotation axis, flexion axis or extension axis.

“Aspects relate to a device comprising at least one optical head mounted display configured to generate a virtual surgical guide. The virtual surgical guide can be a three-dimensional digital representation corresponding to at least one of a portion of a surgical guide, a placement indicator for a surgical guide, or combinations thereof. In one embodiment, the at least one optical head mounted display displays the virtual surgical guide superimposed onto the physical joint, using the coordinates of the predetermined position of the virtual surgical guide, and the virtual surgical guide is configured to align a physical saw blade to guide a bone cut of the joint.

“In some embodiments, the device includes one, two, or three optical head mounted displays.”

“In certain embodiments, the virtual surgical guide is configured to guide a bone cut in a hip replacement, knee replacement, shoulder replacement or ankle replacement.

“In certain embodiments, the virtual surgical guides include a virtual slot to allow for either a virtual or physical saw blade.”

“In certain embodiments, the virtual surgical guides include a planar area that allows for alignment of a virtual or physical saw blade.”

“In some embodiments, a virtual surgical guide can include two or more virtual guide holes or paths that allow for the alignment of two or more physical drills or pins.”

“In some embodiments, the predetermined position of the virtual surgical guide includes anatomical and/or alignment information for the joint, which can be based on coordinates of the joint, an anatomical axis, a biomechanical axis, or combinations thereof.

“In some embodiments, the at least one optical head mounted display is configured to align the virtual surgical guide based on a predetermined limb alignment. The predetermined limb alignment can be, for example, a normal mechanical axis alignment of a leg.

“In some embodiments, the at least one optical head mounted display is configured to align the virtual surgical guide based on a predetermined rotation of a femoral component or a tibial component. In some embodiments, the at least one optical head mounted display is configured to align the virtual surgical guide based on a predetermined flexion of a femoral component or a predetermined slope of a tibial component.

“In certain embodiments, the virtual surgical guide is configured to guide a bone cut of a proximal femur based on a predetermined leg length.”

“In some embodiments, a virtual surgical guide can be used to guide a bone cut of a distal tibia or a talus in an ankle joint replacement. The at least one optical head mounted display is configured to align the virtual surgical guide based on a predetermined alignment of tibial and talar implant components, which can include coronal, sagittal and axial plane component alignment, implant component rotation, or combinations thereof.

“In some embodiments, a virtual surgical guide can be used to guide a bone cut of a proximal humerus in a shoulder replacement. The at least one optical head mounted display is configured to align the virtual surgical guide based on a predetermined alignment of a humeral implant component. This alignment can include coronal, sagittal and axial plane component alignment, or combinations thereof.

“In some embodiments, pre-operative imaging studies, intra-operative imaging studies, one or more intra-operative measurements, or combinations thereof determine the predetermined position of the virtual surgical guide.”

“Aspects of the invention concern a device that includes two or more optical head mounted displays for two or more users. The device is configured to generate a virtual surgical guide, which is a three-dimensional digital representation of a surgical guide. The virtual surgical guide is superimposed onto a physical joint using the coordinates of a predetermined position of the virtual guide, and is configured to align a physical saw blade to guide the bone cut.”

Aspects of this invention concern a device that includes at least one optical head mounted display and a virtual bone cut plane. The virtual bone cut plane is used to guide a bone cut of a joint. The optical head mounted display is configured to superimpose the virtual bone cut plane onto the physical joint using the coordinates of the predetermined position of the virtual bone cut plane. The virtual bone cut plane can be used to guide the bone cut in a predetermined orientation, such as a predetermined varus or valgus correction, a predetermined tibial slope, or a predetermined amount of flexion of a femoral implant component.

One aspect of the invention concerns a method of preparing a joint of a patient for a prosthesis. The method can include registering one or more optical head mounted displays worn by a surgeon and/or surgical assistant in a coordinate system, obtaining one or more intra-operative measurements from the patient’s joint to determine one or more intra-operative coordinates, generating a virtual surgical guide, and superimposing it onto the patient’s joint through the one or more optical head mounted displays using the coordinates of the predetermined position of the virtual guide.

“In some embodiments, the one or more optical head mounted displays are registered in a common, shared coordinate system.

“In certain embodiments, the virtual surgical guide is configured to guide a bone cut in a hip replacement, knee replacement, shoulder replacement, ankle replacement, or other joint replacement.

“In some embodiments, the predetermined position of the virtual surgical guide determines a tibial slope for implantation of one or more tibial components in a knee replacement. In some embodiments, the predetermined position determines an angle of varus or valgus correction.

“In certain embodiments, the virtual surgical guide corresponds to a physical distal femoral guide or cut block, and the predetermined position of the virtual guide determines a femoral component flexion.”

“In certain embodiments, the virtual surgical guide corresponds to a physical anterior or posterior femoral surgical guide or cut block, and the predetermined position of the virtual guide determines a rotation of a femoral component.”

“In certain embodiments, the virtual surgical guide corresponds to a physical chamfer femoral guide or cut block.”

“In certain embodiments, the virtual surgical guide corresponds to a physical multi-cut femoral guide or cut block, and the predetermined position determines one or more of an anterior cut, a posterior cut, a chamfer cut, and a rotation of a femoral component.”

“In some embodiments, a virtual surgical guide is used in a hip replacement, and the predetermined position determines a leg length after implantation. The virtual surgical guide can be configured to align a saw blade to guide the bone cut.

“In some embodiments, the one or more intra-operative measurements include detecting one or more optical markers attached to the patient’s joint, to the operating room table, to fixed structures in the operating room, or combinations thereof. One or more cameras or image or video capture systems integrated into the optical head mounted display can detect the one or more optical markers, including their coordinates (x, y, z) and at least one of a position, orientation, alignment, or direction of movement.

“In certain embodiments, spatial mapping techniques can be used to register one or more optical head mounted displays, a surgical site, joint, or spine, surgical instruments, or implant components.”

“In some cases, depth sensors can be used to register one or more optical head mounted displays, surgical sites, joints, spines, surgical instruments, or implant components.”

“In some embodiments, the virtual surgical guide is used for guiding a bone cut of a distal tibia or a talus in an ankle joint replacement. The one or more optical head mounted displays are configured to align the virtual surgical guide based on a predetermined alignment of tibial and talar implant components. This alignment can include coronal, sagittal and axial plane component alignment, implant component rotation, or combinations thereof.

“In some embodiments, the virtual surgical guide is used for guiding a bone cut of a proximal humerus in a shoulder replacement. The one or more optical head mounted displays are configured to align the virtual surgical guide based on a predetermined alignment of a humeral implant component. This alignment can include coronal, sagittal and axial plane component alignment, or combinations thereof. Aspects also relate to a system that includes at least one optical head mounted display and a virtual database of implants; in this embodiment, a virtual implant component is aligned with the tissue intended for placement of the physical implant component.

“Various exemplary embodiments of the invention will be described in detail hereinafter, with reference to the accompanying drawings, in which some examples are shown. The present inventive concept can, however, be embodied in many different forms and should not be construed as limited to the example embodiments set forth herein. Rather, these example embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the inventive concept. In the drawings, the relative sizes of layers or regions may be exaggerated for clarity. Like numerals refer to like elements throughout.

The term “live data” of the patient refers to the surgical site, anatomy, anatomic structures, tissues, and/or pathology or pathologic structures as seen by the surgeon’s eyes, without information from virtual data, imaging studies, or stereoscopic views. Live data of the patient do not include hidden or subsurface tissues or structures that can only be seen with the aid of a computer monitor or display.

The terms real surgical instrument, actual surgical instrument, and physical surgical instrument are used interchangeably throughout the application; they do not include virtual surgical instruments. The physical surgical instruments can include instruments provided by vendors or manufacturers, e.g. for spinal surgery, anterior spinal fusion, pedicle screw instrumentation, knee replacement, hip replacement, and/or shoulder replacement. They can include, for example, cut blocks, pin guides, and other instruments such as reamers, impactors, and broaches. Physical surgical instruments can be re-usable or disposable, or combinations thereof, and can be patient-specific. The term virtual surgical instrument does not include real, actual, or physical surgical instruments.

The terms real surgical tool, actual surgical tool, physical surgical tool, and surgical tool are used interchangeably throughout the application; they do not include virtual surgical tools. The physical surgical tools can be provided by manufacturers or vendors and can include, for example, pins, drills, saw blades, and frames for tissue distraction. The term virtual surgical tool does not include real, actual, or physical surgical tools.

The terms real implant or implant component, actual implant or implant component, and physical implant or implant component are used interchangeably throughout the application; they do not include virtual implants or implant components. Physical implants and implant components can be provided by vendors or manufacturers. The physical implants can include, for example, a pedicle screw, a spinal rod or cage, a femoral or tibial component in a knee replacement, an acetabular cup in a hip replacement, or a femoral stem and head in a hip replacement. The term virtual implant or implant component does not include real, actual, or physical implants or implant components.

“With surgical navigation, a first virtual instrument that is a representation of a physical instrument can be displayed on a computer monitor, and the physical instrument can be tracked using navigation markers, e.g. infrared or radiofrequency markers. The position and/or orientation of the first virtual instrument can then be compared with the position and/or orientation of a second virtual instrument.

Aspects refer to systems, devices, and methods that position a virtual path or plane, virtual surgical tool or instrument, or implant component in a mixed reality environment using a head mounted display device, optionally coupled to one or more processing units.

A virtual surgical guide, tool, or instrument can be superimposed onto the surgical site, joint, or spine using mixed reality guidance. The OHMD can also project a virtual surgical guide, tool, or instrument onto the physical implant. Guidance in mixed reality environments does not require the use of multiple virtual representations, and it does not require comparing the positions or orientations of multiple virtual representations.

The OHMD may display one or more virtual surgical tools or instruments, including a virtual saw blade or virtual cut block, or a predetermined start point, or a predetermined axis, e.g. a rotation axis, flexion axis, or extension axis, for the virtual surgical tool or instrument, the physical surgical tool or instrument, or both.

“Anything that relates to a location, position, orientation, alignment, or direction can be predetermined using pre-operative images, pre-operative data or measurements, intra-operative data, and/or intra-operative measurements.

“Any of the following can be predetermined: position, orientation, and alignment; sagittal, coronal, and axial plane alignment; implant component angle, rotation, slope, offset, anteversion, or retroversion; and position, location, and orientation relative to anatomic landmarks or anatomic axes. These can be determined using pre-operative imaging studies, pre-operative data, intra-operative data, or measurements taken intra-operatively for registration purposes, e.g. of a joint, a spine, a surgical site, or a bone.

“In certain embodiments, multiple coordinate systems can be used instead of a shared or common coordinate system. In this case, coordinate transfers can be applied to transfer coordinates from one coordinate system into another, for example for registering the OHMD, live data of the patient including the surgical site, virtual instruments, and/or physical instruments and implants.
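As an illustration of such a coordinate transfer (a minimal sketch, not from the patent; the poses and the point below are hypothetical), a point known in a surgical instrument’s coordinate system can be expressed in the OHMD’s coordinate system by composing the registered poses:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical registration results: instrument -> world and OHMD -> world poses.
T_world_instr = make_pose(np.eye(3), [0.10, 0.00, 0.50])
T_world_ohmd = make_pose(np.eye(3), [0.00, 0.30, 0.00])

# Coordinate transfer: compose the poses to map instrument coordinates
# directly into OHMD coordinates.
T_ohmd_instr = np.linalg.inv(T_world_ohmd) @ T_world_instr

p_instr = np.array([0.0, 0.0, 0.12, 1.0])  # e.g. a tool tip, in instrument coordinates
p_ohmd = T_ohmd_instr @ p_instr            # the same physical point, in OHMD coordinates
```

The same composition generalizes to any pair of registered coordinate systems, which is why per-object coordinate systems and a common coordinate system can be used interchangeably once the poses are known.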

“Optical Head Mounted Displays.”

“In some embodiments, a pair of glasses is utilized. The glasses can include an optical head-mounted display. An optical head-mounted display (OHMD) is a wearable display that can reflect projected images while allowing the user to see through it. Various types of OHMDs can be used to practice the invention, including curved mirror or curved combiner OHMDs as well as waveguide or light-guide OHMDs. The OHMDs can optionally utilize diffraction optics or holographic optics.

Traditional input devices can be used with OHMDs, including buttons, touchpads, smartphone controllers, speech recognition, and gesture recognition. Advanced interfaces are also possible, e.g. a brain-computer interface.

“Optionally, a server, computer, or workstation can transmit data to the OHMD. The data transmission can occur via Bluetooth, WiFi, or optical signals. The OHMD can display virtual data in uncompressed or compressed form. Optionally, virtual data of the patient can be reduced in resolution before being transmitted to the OHMD.

“Virtual data can be compressed for transmission to the OHMD; the OHMD can then decompress the data so that uncompressed virtual data are displayed.

“Alternatively, virtual data can be transmitted to the OHMD at a reduced resolution, for example by increasing the slice thickness of image data prior to transmission. The OHMD can then increase the resolution, for example by re-interpolating to a reduced slice thickness, thereby displaying virtual data at a higher resolution than the data that were transmitted.
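The reduce-then-reinterpolate idea can be sketched with a toy one-dimensional example (the slice positions and intensity values are hypothetical): every second slice is dropped before transmission, and the receiver linearly re-interpolates back to the original slice positions.

```python
import numpy as np

# Hypothetical intensity profile along the slice direction of an imaging study.
z_orig = np.arange(9, dtype=float)   # original slice positions
vals = np.arange(9, dtype=float)     # one intensity value per slice

# "Increase the slice thickness" before transmission: keep every second slice.
z_tx = z_orig[::2]
vals_tx = vals[::2]

# On the OHMD, re-interpolate back to the original slice positions.
vals_rx = np.interp(z_orig, z_tx, vals_tx)
```

For this linear ramp the re-interpolation is exact; for real image data, high-frequency content dropped before transmission is not recovered, which is the trade-off the passage describes.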

“In certain embodiments, the OHMD can transmit data back to a computer, a server, or a workstation. These data can include, but are not limited to:

“Radiofrequency tags used in embodiments can be of active or passive type, with or without a battery.”

“Exemplary optical head mounted displays include the ODG R-7 and R-8 smart glasses (Osterhout Design Group, San Francisco, Calif.), the NVIDIA 942 3-D Vision wireless glasses (NVIDIA, Santa Clara, Calif.), and the Microsoft HoloLens (Microsoft, Redmond, Wis.).

The Microsoft HoloLens, manufactured by Microsoft, is a pair of augmented reality smart glasses compatible with the Windows 10 operating system. The front portion of the HoloLens includes the sensors, related hardware, and several cameras. The projected images are displayed in a pair of transparent combiner lenses in the visor. The HoloLens can be adjusted for the interpupillary distance (IPD) using an integrated program that recognizes gestures. A pair of speakers is also integrated; the speakers can be used to hear virtual sounds and do not exclude external sounds. An integrated USB 2.0 micro-B socket is available, and a 3.5 mm audio jack is also included.”

“The HoloLens contains an Intel Cherry Trail SoC that houses the CPU and GPU. The HoloLens also includes a custom-made Microsoft Holographic Processing Unit (HPU). The SoC and the HPU each have 1 GB of LPDDR3 RAM and share 8 MB of SRAM. The SoC also controls 64 GB of eMMC storage and runs the Windows 10 operating system. The HPU processes data from the sensors and handles tasks such as voice and speech recognition, spatial mapping, and gesture recognition. The HoloLens features IEEE 802.11ac WiFi and Bluetooth 4.1 Low Energy (LE) wireless connectivity. Via Bluetooth LE, the headset can be connected to a Clicker, a finger-operated input device that can be used to select menus and functions.

Many applications are available for the Microsoft HoloLens, including a catalog of holograms and HoloStudio, a 3D modelling application by Microsoft with 3D printing capability. An Autodesk Maya 3D creation application is also available, as is FreeForm, which integrates HoloLens with the Autodesk Fusion 360 cloud-based 3D development application.

“Using the HPU, the HoloLens can employ sensual and natural interface commands: gaze, gesture, and voice. Gaze commands, e.g. via head-tracking, allow the user to bring application focus to whatever the user is viewing. Any virtual application or button can be selected using an air tap, similar to clicking a virtual mouse. The tap can be held down to simulate a drag, for example to move a display. Voice commands can also be used.

“The HoloLens shell uses many components and concepts from the Windows desktop environment. The main menu is opened with a bloom gesture, performed by opening the hand with the palm facing up and the fingers spread. Windows can be dragged to a particular position, locked, and/or resized. Virtual windows and menus can be fixed to physical objects or locations, they can be carried with the user or fixed in relation to the user, or they can follow the user’s movements.

Developers can use the Microsoft HoloLens companion app for Windows 10 PCs and Windows 10 Mobile to create apps, view a live stream from the HoloLens user’s point of view, and capture augmented reality photos and videos.

The HoloLens can run almost all Universal Windows Platform apps; these apps are projected in 2D. The HoloLens currently supports a limited number of Windows 10 APIs. HoloLens apps can also be developed on Windows 10 PCs. Holographic applications can be developed using the Windows Holographic APIs; Unity and Vuforia can both be used, and applications can also be developed using DirectX and Windows APIs.”

“Computer Graphics Viewing Pipeline”

“In some embodiments, the optical head mounted display uses a computer graphics viewing pipeline to display 3D objects or 2D objects positioned in 3D space, consisting of the following steps (FIG. 16B):”

“Registration:”

The different objects to be displayed by the OHMD computer graphics system, e.g. virtual anatomical models and models of virtual instruments and guides, are each initially defined in their own model coordinate system. During the registration process, the spatial relationships between the different objects are established, and each object is transformed from its own model coordinate system into a common global coordinate system. Different registration techniques, as described below, can be used.

For augmented reality OHMDs that superimpose computer-generated objects onto a live view of the physical environment, the environment defines the global coordinate system. Spatial mapping is the process of creating a computer representation of the environment, which allows the environment to be registered and merged with the computer-generated objects, thus defining the spatial relationship between the computer-generated objects and the environment.
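A minimal sketch of this registration step (the objects, poses, and vertices below are hypothetical): each object’s vertices are defined in their own model coordinate system and are mapped by a registered model-to-world pose into the shared global coordinate system.

```python
import numpy as np

def to_world(T_world_model, verts_model):
    """Transform Nx3 model-space vertices into the global (world) coordinate system."""
    verts_h = np.c_[verts_model, np.ones(len(verts_model))]  # homogeneous coordinates
    return (verts_h @ T_world_model.T)[:, :3]

# Two objects, each defined in its own model coordinate system.
guide_verts = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.1]])  # a virtual guide
femur_verts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])  # an anatomical model

# Hypothetical registration results: model -> world poses (pure translations here).
T_world_guide = np.eye(4); T_world_guide[:3, 3] = [0.2, 0.0, 0.5]
T_world_femur = np.eye(4); T_world_femur[:3, 3] = [0.2, 0.0, 0.4]

# After registration, both objects live in the same global coordinate system
# and can be rendered and compared in that common frame.
guide_world = to_world(T_world_guide, guide_verts)
femur_world = to_world(T_world_femur, femur_verts)
```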

“View Projection:”

Once all objects are registered and transformed into the common global coordinate system, they can be displayed by translating their coordinates from the global coordinate system into the view coordinate system and projecting them onto a display plane. The view projection step uses the viewpoint and view direction to define the transformations applied in this step. For stereoscopic displays, such as an OHMD, two view projections can be used, one for each eye. To correctly superimpose computer-generated objects onto the environment, the location of the viewpoint relative to the environment must be known. As the viewpoint or view direction changes, e.g. due to head movement, the view projections are updated automatically to reflect the new view.
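A stereoscopic view projection can be sketched with a simple pinhole model (the interpupillary distance, focal length, and point below are assumptions; head rotation is omitted for brevity): each eye gets its own viewpoint, so the same world point projects to slightly different display coordinates.

```python
import numpy as np

def project(point_world, eye_pos, focal=1.0):
    """Project a world-space point onto one eye's display plane (pinhole model,
    viewer looking down the -z axis, no head rotation)."""
    x, y, z = np.asarray(point_world) - np.asarray(eye_pos)  # world -> view transform
    return np.array([focal * x / -z, focal * y / -z])        # perspective divide

ipd = 0.064                                 # assumed interpupillary distance in metres
left_eye = np.array([-ipd / 2, 0.0, 0.0])
right_eye = np.array([+ipd / 2, 0.0, 0.0])

point = np.array([0.0, 0.0, -0.5])          # a virtual point 0.5 m in front of the viewer
p_left = project(point, left_eye)
p_right = project(point, right_eye)
# The horizontal difference between the two projections is the stereo disparity.
```

When the viewpoint or view direction changes, only `eye_pos` (and, in a full model, the rotation) changes; re-running the projection yields the updated views.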

“Eye Tracking Systems.”

“The invention provides methods for using the human eye, including lid movements, eye movements, and movements induced by the peri-orbital muscles, to execute computer commands. The invention also provides methods for executing computer commands using facial movements and head movements.

“Command execution can be induced by eye movements, lid movements, movements induced by the peri-orbital muscles, facial movements, and head movements. This is useful in situations where an operator does not have his or her hands available to type on a keyboard or to execute commands on a touchpad. Such situations include industrial applications, e.g. aircraft and automotive manufacturing, as well as medical and surgical procedures.

“In certain embodiments, an optical head mounted display can include an eye tracking system. Many different types of eye tracking systems can be used; the examples given are not intended to limit the invention. Any eye tracking system known in the art can be used.”

“Eye movement can be divided into fixations and saccades: a fixation occurs when the eye gaze pauses in a certain position, and a saccade occurs when the gaze moves to another position. The resulting series of fixations and saccades is called a scan path. The central one to two degrees of the visual angle provide most of the visual information; information from the periphery is less useful. The locations of the fixations along a scan path show which information was processed during an eye tracking session.

Eye trackers can measure the rotation and movement of the eye in several ways, including optical tracking without direct contact with the eye, measurement of electric potentials using electrodes placed around the eyes, and measurement of the movement of an object (such as a special contact lens) attached to the eye.

An attachment to the eye can be, for example, a special contact lens with an embedded mirror or magnetic field sensor. The movement of the attachment can be measured under the assumption that it does not slip as the eye rotates. Accurate measurement of eye movement can be achieved using tight-fitting contact lenses. Magnetic search coils are also available and allow measurement of eye movement in the horizontal, vertical, and torsional directions.

Non-contact optical methods can also be used to measure eye movement. With this technology, light (optionally infrared) is reflected from the eye and detected by an optical sensor or camera; the information is then analyzed to measure eye movement and/or rotation from changes in the reflections. Optical sensor or video-based trackers can track the corneal reflection (the first Purkinje image) and the center of the pupil over time. The dual-Purkinje eye tracker is a more sensitive type of eye tracker; it uses the reflections from the front of the cornea (first Purkinje image) and from the back of the lens (fourth Purkinje image) as features to track. An even more sensitive method is to image features from inside the eye and follow these features as the eye moves or rotates. Gaze tracking can thus be performed using optical methods, e.g. video recording or optical sensors.

“Optical or video-based eye trackers can be used in some embodiments. A camera can focus on one or both eyes and record eye movement as the viewer performs a task, such as a surgical procedure. The eye tracker can use the center of the pupil for tracking, and infrared or near-infrared non-collimated light can be used to create corneal reflections. The vector between the pupil center and the corneal reflections can be used to compute the point of regard on a surface or the gaze direction. A calibration procedure can be performed at the beginning of the eye tracking session.
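In toy form, the pupil-centre/corneal-reflection computation reduces to a vector between two detected image features, mapped to gaze angles via per-user calibration gains (the pixel coordinates and gains below are hypothetical assumptions):

```python
import numpy as np

def gaze_angles(pupil_px, glint_px, gain_deg_per_px=(0.12, 0.12)):
    """Map the corneal-reflection-to-pupil-centre vector (in image pixels) to
    horizontal/vertical gaze angles in degrees. The gains stand in for a
    per-user calibration procedure."""
    v = np.asarray(pupil_px, dtype=float) - np.asarray(glint_px, dtype=float)
    return v * np.asarray(gain_deg_per_px)

# Hypothetical feature detections in camera image coordinates.
angles = gaze_angles(pupil_px=(330.0, 240.0), glint_px=(320.0, 245.0))
# Sign conventions depend on the camera setup; only the relative vector matters.
```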

Both bright-pupil and dark-pupil eye tracking can be used; the difference is based on the location of the illumination source relative to the optics. If the illumination is coaxial with the optical path, the eye acts as a retroreflector as the light reflects off the retina, creating a bright pupil effect similar to red eye. If the illumination source is offset from the optical path, the pupil appears dark because the retroreflection is directed away from the camera or optical sensor.

Bright-pupil tracking has the advantage of greater iris/pupil contrast, which allows more robust eye tracking with all types of iris pigmentation. It can reduce interference from eyelashes, and it allows tracking in lighting conditions ranging from dark to very bright.

“Optical tracking can include tracking the movement of the eye, including the pupil, as described above. The optical tracking method can also include tracking the movements of the eye lids, the peri-orbital muscles, and/or the facial muscles.

“In some embodiments, the eye-tracking apparatus is integrated into the optical head mounted display. In some embodiments, head motion can be tracked simultaneously, e.g. using a combination of accelerometers and gyroscopes forming an inertial measurement unit (see below).

“In certain embodiments, electric potentials can be measured with electrodes placed around the eyes. The eyes are the origin of a steady electric potential field, which can also be detected when the eyes are closed. The electric potential field can be modeled as being generated by a dipole with the positive pole at the cornea and the negative pole at the retina. It can be measured by placing two electrodes around the eye. The electric potentials measured in this manner are called an electro-oculogram (EOG).

“If the eye moves from the center position towards the periphery, the retina approaches one electrode and the cornea approaches the opposite one. This change in the orientation of the dipole, and hence of the electric potential field, results in a change in the measured EOG signal. Eye movement can be assessed by analyzing these changes. Two separate movement directions, horizontal and vertical, can be identified. A posterior skull electrode can be used to measure a radial EOG channel, which is the average of the EOG channels referenced to the posterior skull electrode. The radial EOG channel can detect saccadic spike potentials originating from the extra-ocular muscles at the onset of saccades.

EOG can be used to detect slow eye movement and gaze direction, as well as blinks and the rapid or saccadic eye movements associated with gaze shifts. Unlike optical or video-based eye trackers, EOG allows recording of eye movements even when the eyes are closed. A significant disadvantage of EOG is that it is less accurate in determining gaze direction than an optical or video tracker. In certain embodiments, optical or video tracking and EOG can be combined.
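A minimal velocity-threshold sketch of saccade detection on a synthetic EOG trace (the sampling rate, step amplitude, and threshold are assumptions, not values from the text): a rapid gaze shift appears as a step in the signal, so samples whose first difference exceeds a velocity threshold are flagged.

```python
import numpy as np

fs = 100.0                          # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
eog = np.zeros_like(t)
eog[t >= 0.5] = 200.0               # synthetic gaze shift at t = 0.5 s (microvolts)

velocity = np.diff(eog) * fs        # first difference -> microvolts per second
threshold = 5000.0                  # assumed saccade velocity threshold
saccade_idx = np.where(np.abs(velocity) > threshold)[0]
```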

“A sampling rate of 15, 20, 25, 30, 50, 60, 100 Hz or greater can be used; any sampling frequency is possible. In many embodiments, sampling rates greater than 30 Hz will be preferred.

“Measuring Location, Orientation, Acceleration”

“The position, orientation, and acceleration of parts of the operator’s body, e.g. the head, hands, arms, legs, or feet, and of parts of the patient’s body, e.g. the head, trunk, and extremities (including hip, knee, ankle, and foot, as well as shoulder, elbow, wrist, and hand), can be measured, for example, using a combination of accelerometers and gyroscopes. Magnetometers can also be used in certain applications. Measurement systems that use one or more of these components are called inertial measurement units (IMUs).

“The term IMU, as used herein, refers to an electronic device that can measure and transmit information about a body’s specific force and angular rate, optionally using a combination of accelerometers, gyroscopes, and magnetometers. An IMU, or its components, can be registered with or coupled to a navigation system; for example, an IMU can register a body or parts of a body in a shared coordinate system. An IMU can also communicate wirelessly, e.g. via WiFi or Bluetooth networks.
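The way an IMU’s accelerometer and gyroscope readings are combined into an orientation estimate can be illustrated with a complementary filter. This is a hypothetical sketch, not from the patent: the function, the single tilt angle, and the 0.98 blend factor are all illustrative assumptions.

```python
import math

# Hypothetical sketch: fusing gyroscope and accelerometer readings with a
# complementary filter to estimate a single tilt angle, as an IMU might.

def update_tilt(angle_deg, gyro_rate_dps, accel_x, accel_z, dt, alpha=0.98):
    """One filter step: integrate the gyro rate (accurate short-term),
    then correct drift using the gravity direction measured by the
    accelerometer (accurate long-term)."""
    gyro_angle = angle_deg + gyro_rate_dps * dt               # gyro integration
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))  # gravity reference
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle

# A stationary IMU lying flat (gravity along z): an initially wrong
# 10-degree estimate decays toward the accelerometer's 0-degree reference.
angle = 10.0
for _ in range(200):
    angle = update_tilt(angle, gyro_rate_dps=0.0, accel_x=0.0, accel_z=1.0, dt=0.01)
```

The design point is the division of labor named in the text: gyroscopes track fast angular rate, while accelerometers (and, where used, magnetometers) anchor the estimate against drift.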

“Accelerometers can be used in pairs to detect differences in acceleration between frames of reference associated with different points. Both single- and multi-axis accelerometers can detect the magnitude and direction of acceleration. They can also be used to sense orientation (because the direction of weight changes), coordinate acceleration (as long as it produces g-force or a change in g-force), vibration, and shock. In some embodiments, micromachined accelerometers can be used to determine the position of the device and of the operator’s head.

“Piezoelectric and piezoresistive devices can convert mechanical motion into electrical signals. Piezoelectric accelerometers are based on piezoceramics or single crystals; piezoresistive accelerometers can also be used. Capacitive accelerometers typically use a silicon micro-machined sensing element.

“Accelerometers used in some embodiments can include small micro-electro-mechanical systems (MEMS), consisting of, for example, a cantilever beam with a proof mass.

“Optionally, the accelerometer(s) can also be integrated into the optical head mounted display. The outputs of both the eye tracking system and the accelerometer(s) can be used for command execution.”

“With an IMU, the following information about the operator and the patient can be captured: speed, velocity, acceleration, position in space, positional change, alignment, orientation, and direction of movement, e.g. using sequential measurements.

“The IMU can transmit information about the position, orientation, and movement of the operator’s and/or patient’s body parts. These include, but are not limited to: head, chest, trunk, shoulder, elbow, wrist, hand, fingers, arm, hip, knee, ankle, foot, toes, leg, and inner organs, e.g. brain, heart, lungs, liver, spleen, bowel, bladder, etc.”

“One or more IMUs can be placed on the OHMD, the operator, and/or the patient, and these IMUs can be cross-referenced within one or more coordinate systems. An IMU is not required for a navigation system to be used with an OHMD. Navigation markers, including retroreflective markers, infrared markers, and RF markers, can be attached to the OHMD as well as to portions or segments of the patient’s anatomy, allowing the OHMD to be referenced relative to the patient’s anatomy. The OHMD, or the operator wearing it, can be registered in the one or more coordinate systems used by the navigation system, and movements of the OHMD or operator can be registered in relation to the patient in those coordinate systems. The OHMD and the patient’s virtual data can be registered in the same coordinate system; IMUs, optical markers, navigation markers such as infrared and retroreflective markers, and RF markers can all be used to register the virtual data and the live data. Any change in the position of the OHMD relative to the patient can be used to move the virtual data accordingly, so that the virtual data and the live data of the patient seen through the OHMD remain aligned regardless of movement of the operator’s head or the OHMD. Multiple OHMDs can be used, e.g. one for the primary surgeon and additional ones for a second surgeon, surgical assistant(s), and/or nurse(s). These embodiments are possible because the IMUs, optical markers, RF markers, infrared markers, and/or navigation markers placed on the operator and/or patient, as well as any spatial anchors, can all be registered in the same coordinate system as the primary OHMD. Each additional OHMD can individually monitor its position, orientation, alignment, and/or change in position in relation to the patient and/or the surgical site. This allows alignment and/or superimposition of corresponding structures in the live data and virtual data of the patient to be maintained for each additional OHMD, regardless of its position, orientation, or alignment relative to the patient and/or the surgical site.
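The alignment-maintenance idea above can be sketched with a few lines of pure Python. This is a minimal illustration under assumed conventions (4x4 row-major rigid-body poses, a made-up `to_view` helper), not the patent’s implementation: virtual data is stored in the common world coordinate system, and each frame it is re-expressed in the moving OHMD’s view coordinates, so it stays superimposed on the patient however the head moves.

```python
# Minimal sketch: keep world-anchored virtual data aligned with the patient
# by transforming it into the OHMD's view frame each time the OHMD moves.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a homogeneous 4-vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def invert_rigid(pose):
    """Invert a rigid-body pose [R | t]: inverse rotation is the
    transpose, inverse translation is -R^T t."""
    r = [[pose[j][i] for j in range(3)] for i in range(3)]          # R^T
    t = [-sum(r[i][k] * pose[k][3] for k in range(3)) for i in range(3)]
    return [r[0] + [t[0]], r[1] + [t[1]], r[2] + [t[2]], [0.0, 0.0, 0.0, 1.0]]

def to_view(ohmd_pose_world, point_world):
    """World-space point -> OHMD view space for rendering."""
    return mat_vec(invert_rigid(ohmd_pose_world), point_world + [1.0])[:3]

# Surgical site fixed at x = 2 in world coordinates; the OHMD has moved
# to x = 0.5, so the site should render 1.5 units ahead of the viewer.
identity_rot = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
pose = [identity_rot[0] + [0.5], identity_rot[1] + [0.0],
        identity_rot[2] + [0.0], [0.0, 0.0, 0.0, 1.0]]
p_view = to_view(pose, [2.0, 0.0, 0.0])
```

Because the virtual data is anchored in the shared world frame rather than the headset frame, every tracked OHMD can apply its own inverse pose and obtain a correctly superimposed view, which is the property the paragraph above describes.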

“Referring to FIG. 1, a system 10 can use multiple OHMDs 11, 13, 14 for multiple viewers, e.g. a primary surgeon, a second surgeon, surgical assistant(s), and/or nurse(s). The multiple OHMDs can be registered in a common coordinate system 15 using anatomic structures, anatomic landmarks, optical markers, navigation markers, spatial anchors, and/or reference phantoms. Pre-operative data 16 of the patient can also be registered in the common coordinate system 15. Live data 18 of the patient, e.g. from the surgical site, can likewise be registered in the common coordinate system 15, as can intra-operative imaging studies 20 and OR references. A virtual surgical plan 24 can be created, modified, or updated using the pre-operative data 16, the live data 18, or any combination thereof, and can be registered in the common coordinate system. The OHMDs 11, 13, 14 can project digital holograms of the virtual data into a left eye view using the view position and orientation of the left eye 26, and into a right eye view using the view position and orientation of the right eye 28, creating a shared digital holographic experience 30. Using a virtual interface 32, the surgeon can execute commands, e.g. to display the next predetermined bone cut of a virtual surgical plan, an imaging study, or intra-operative measurements, triggering the OHMDs 11, 13, 14 to project digital holograms of the next surgical step 34 superimposed onto and aligned with the surgical site in a predetermined position and orientation.

“By registering the live data of the patient, e.g. the surgical field, the virtual data of the patient, and each OHMD in a common coordinate system, the virtual data can be superimposed onto and aligned with the live data of the patient for each individual viewer, by each individual OHMD, for the viewer’s respective view angle or perspective. The alignment and/or superimposition can be maintained while the viewer moves his or her head or body.

“Novel User Interfaces.”

“One object of this invention is to provide a novel user interface in which movements of the human eye, including lid movements and movements induced by the peri-orbital muscles and select skull and facial muscles, are detected by the eye tracking system and processed to execute predefined computer commands.”

“Table 1 contains an exemplary list of eye movements and lid movements that the system can detect.”

Summary for “Augmented Reality Guided Fitting, Sizing, Trialing, Balancing and Balance of Virtual Implants on the Physical Joint of a Patient for Manual and Robot-Assisted Joint Replacement”

“Computer assisted surgery, e.g. surgical navigation or robotics, can use pre-operative imaging of the patient. An external monitor can display the imaging studies and the patient’s anatomy, and the information on the monitor can also be used to register landmarks. Hand-eye coordination can prove difficult for the surgeon because the surgical field is at a different location, so the surgeon has to view the information on the monitor in a different coordinate system than the surgical field.

“Aspects include the ability to simultaneously visualize live data of the patient, e.g. the patient’s spine or joint, a digital representation of the patient’s spine or joint, and virtual data, such as virtual cuts and/or virtual surgical guides including cut blocks and drill guides, through an optical head mounted display (OHMD). Some embodiments register the surgical site, including the live data of the patient and the OHMD, along with the virtual data in a common coordinate system. Some embodiments superimpose the virtual data onto the patient’s live data and align them with it. Unlike a virtual reality head system, which blocks the direct view, the OHMD allows the surgeon to see the live data of the patient. The surgeon can thus view the surgical field while simultaneously observing virtual data, e.g. in a predetermined position and/or orientation, and/or virtual surgical instruments and implants.

“Aspects describe novel devices for performing a surgical step with visual guidance using an optical head mounted display, e.g. by displaying virtual representations of one or more of: a virtual surgical tool, virtual surgical instrument, virtual implant component, virtual implant, or virtual device; a predetermined start point, start position, and start orientation or alignment; predetermined intermediate point(s), position(s), and orientation(s) or alignment(s); a predetermined end point, end position, and end orientation or alignment; a predetermined path, plan, or cut plan; a predetermined contour, outline, cross-section, or surface features or projection; a predetermined depth marker, rotation marker, angle, or orientation; and/or a predetermined axis, e.g. a rotation axis, flexion axis, or extension axis.

“Aspects relate to a device comprising at least one optical head mounted display configured to generate a virtual surgical guide. The virtual surgical guide can be a three-dimensional digital representation corresponding to at least one of a portion of a physical surgical guide, a placement indicator for a physical surgical guide, or combinations thereof. In one embodiment, the at least one optical head mounted display displays the virtual surgical guide superimposed onto a physical joint based on the coordinates of a predetermined position of the virtual surgical guide, and the virtual surgical guide is used to align a saw blade to guide a bone cut of the joint.

“In some embodiments, the device includes one, two, or three optical head mounted displays.”

“In certain embodiments, the virtual surgical guide can be used to guide a bone cut in a hip replacement, knee replacement, shoulder replacement, or ankle replacement.

“In certain embodiments, the virtual surgical guide includes a virtual slot for a virtual or physical saw blade.”

“In certain embodiments, the virtual surgical guide includes a planar area for aligning a virtual or physical saw blade.”

“In some embodiments, the virtual surgical guide includes two or more virtual guide holes or paths for aligning two or more physical drills or pins.”

“In some embodiments, the predetermined position of the virtual surgical guide includes anatomical and/or alignment information for the joint, which can be based on, for example, the coordinates of the joint, an anatomical axis, a biomechanical axis, or combinations thereof.

“In some embodiments, the at least one optical head mounted display is configured to align the virtual surgical guide based on a predetermined limb alignment. The predetermined limb alignment can be, for example, a normal mechanical axis alignment of a leg.

“In some embodiments, the at least one optical head mounted display is configured to align the virtual surgical guide based on a predetermined rotation of a femoral component or a tibial component. In some embodiments, the at least one optical head mounted display is configured to align the virtual surgical guide based on a predetermined flexion of a femoral component or a predetermined slope of a tibial component.

“In certain embodiments, the virtual surgical guide can be used to guide a bone cut of a proximal femur based on a predetermined leg length.”

“In some embodiments, the virtual surgical guide is used to guide a bone cut of a distal tibia or a talus in an ankle joint replacement, and the at least one optical head mounted display is configured to align the virtual surgical guide based on a predetermined tibial and/or talar implant component alignment, which can include a coronal plane alignment, a sagittal plane alignment, an axial plane alignment, an implant component rotation, or combinations thereof.

“In some embodiments, the virtual surgical guide is used to guide a bone cut of a proximal humerus in a shoulder replacement, and the at least one optical head mounted display is configured to align the virtual surgical guide based on a predetermined alignment of a humeral implant component, which can include a coronal plane alignment, a sagittal plane alignment, an axial plane alignment, or combinations thereof.

“In some embodiments, pre-operative imaging studies, intra-operative imaging studies, one or more intra-operative measurements, or combinations thereof determine the predetermined position of the virtual surgical guide.”

“Aspects of the invention relate to a device comprising two or more optical head mounted displays for two or more users. The device is configured to generate a virtual surgical guide, which is a three-dimensional digital representation corresponding to a physical surgical guide. The virtual surgical guide is superimposed onto a physical joint based on the coordinates of a predetermined position of the virtual surgical guide and is configured to align a physical saw blade to guide a bone cut.

“Aspects of the invention relate to a device comprising at least one optical head mounted display and a virtual bone cut plane used to guide a bone cut of a joint. The optical head mounted display is configured to superimpose the virtual bone cut plane onto the physical joint based on the coordinates of a predetermined position of the virtual bone cut plane. The virtual bone cut plane can be used to guide the bone cut in a predetermined orientation, e.g. a predetermined varus or valgus correction, a predetermined slope of a tibial component, or a predetermined amount of flexion of a femoral implant component.

“One aspect of the invention relates to a method of preparing a joint of a patient for a prosthesis. The method can include registering one or more optical head mounted displays worn by a surgeon or surgical assistant in a coordinate system, obtaining one or more intra-operative measurements from the patient’s joint to determine one or more intra-operative coordinates, generating a virtual surgical guide, and superimposing it onto the patient’s joint, using the one or more optical head mounted displays, based on the coordinates of the predetermined position of the virtual surgical guide.

“In some embodiments, the one or more optical head mounted displays are registered in a common coordinate system. The common coordinate system can be a shared coordinate system for multiple users.

“In certain embodiments, the virtual surgical guide can be used to guide a bone cut in a hip replacement, knee replacement, shoulder replacement, or ankle replacement.

“In some embodiments, the predetermined position of the virtual surgical guide determines the tibial slope for implantation of one or more tibial implant components in a knee replacement. In some embodiments, the predetermined position determines the angle of varus or valgus correction.

“In certain embodiments, the virtual surgical guide corresponds to a physical distal femoral guide or cut block, and the predetermined position of the virtual guide determines a femoral component flexion.”

“In certain embodiments, the virtual surgical guide corresponds to a physical anterior or posterior femoral surgical guide or cut block, and the predetermined position and orientation of the virtual guide determine a rotation of a femoral component.”

“In certain embodiments, the virtual surgical guide corresponds to a physical chamfer femoral guide or cut block.”

“In certain embodiments, the virtual surgical guide corresponds to a physical multi-cut femoral guide or cut block, and the predetermined position determines one or more of an anterior cut, a posterior cut, a chamfer cut, and a rotation of a femoral component.”

“In some embodiments, the virtual surgical guide is used in a hip replacement, and the predetermined position determines the leg length after implantation. The virtual surgical guide can be used to align the saw blade to guide the bone cut.

“In some embodiments, the one or more intra-operative measurements include detecting one or more optical markers attached to the patient’s joint, the operating room table, fixed structures in the operating room, or combinations thereof. One or more cameras or image or video capture systems integrated into the optical head mounted display can detect the coordinates (x, y, z) of the one or more optical markers and at least one of their position, orientation, alignment, and direction of movement.

“In certain embodiments, spatial mapping techniques can be used to register one or more of the optical head mounted displays, the surgical site, e.g. a joint or spine, surgical instruments, and/or implant components.”

“In some embodiments, depth sensors can be used to register one or more of the optical head mounted displays, the surgical site, e.g. a joint or spine, surgical instruments, and/or implant components.”

“In some embodiments, the virtual surgical guide is used to guide a bone cut of a distal tibia or a talus in an ankle joint replacement, and the one or more optical head mounted displays are configured to align the virtual surgical guide based on a predetermined tibial and/or talar implant component alignment, which can include a coronal plane alignment, a sagittal plane alignment, an axial plane alignment, an implant component rotation, or combinations thereof.

“In some embodiments, the virtual surgical guide is used to guide a bone cut of a proximal humerus in a shoulder replacement, and the one or more optical head mounted displays are configured to align the virtual surgical guide based on a predetermined alignment of a humeral implant component, which can include a coronal plane alignment, a sagittal plane alignment, an axial plane alignment, or combinations thereof. Aspects of the invention also relate to a system comprising at least one optical head mounted display and a virtual database of implants; in this embodiment, a virtual implant component can be aligned with the tissue intended for placement of the implant component.

“Various exemplary embodiments of the invention will be described in detail hereinafter with reference to the accompanying drawings, in which some examples of the embodiments are shown. The present inventive concept can, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein. Rather, these example embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the inventive concept. In the drawings, the relative sizes of layers and regions may be exaggerated for clarity. Like numerals refer to like elements throughout.

“The term live data of the patient refers to the surgical site, the anatomy, and any tissues, anatomic structures, and/or pathologic structures or pathology as seen by the surgeon’s eyes without information from virtual data, stereoscopic views of virtual data, or imaging studies. Live data of the patient does not include hidden or subsurface tissues or structures that can only be seen with the aid of a computer monitor or imaging.

“The terms real surgical instrument, actual surgical instrument, physical surgical instrument, and surgical instrument are used interchangeably throughout the application; they do not include virtual surgical instruments. Physical surgical instruments can be provided by manufacturers or vendors for, e.g., spinal surgery, anterior spinal fusion, pedicle screw instrumentation, knee replacement, hip replacement, and/or shoulder replacement; they can include cut blocks, pin guides, and other instruments such as reamers, impactors, and broaches. Physical surgical instruments can be re-usable or disposable, or combinations thereof. Physical surgical instruments can also be patient-specific. The term virtual surgical instrument does not include real, actual, or physical surgical instruments.

“The terms real surgical tool, actual surgical tool, physical surgical tool, and surgical tool are used interchangeably throughout the application; they do not include virtual surgical tools. Physical surgical tools can be provided by manufacturers or vendors and can include, e.g., pins, drills, saw blades, and frames for tissue distraction. The term virtual surgical tool does not include real, actual, or physical surgical tools.

“The terms real implant or implant component, actual implant or implant component, and physical implant or implant component are used interchangeably throughout the application; they do not include virtual implants or implant components. Physical implants and implant components can be provided by manufacturers or vendors; they can include, e.g., a pedicle screw, a spinal rod, a spinal cage, a femoral or tibial component in a knee replacement, an acetabular cup in a hip replacement, or a femoral stem and head in a hip replacement. The term virtual implant or implant component does not include real, actual, or physical implants or implant components.

“With surgical navigation, a first virtual instrument, which is a representation of a physical instrument tracked with navigation markers, e.g. infrared or RF markers, can be displayed on a monitor, and its position and/or orientation can be compared with the position and/or orientation of a corresponding second virtual instrument.

“Aspects relate to systems, devices, and methods that position a virtual path, virtual plane, virtual tool, virtual surgical instrument, or virtual implant component in a mixed reality environment using a head mounted display device, optionally coupled to one or more processing units.

“With mixed reality guidance, a virtual surgical guide, tool, or instrument can be superimposed onto the surgical site, e.g. a joint or spine, and the OHMD can also project a virtual surgical guide, tool, or instrument onto the implant. Guidance in a mixed reality environment does not require the use of multiple virtual representations, nor does it require comparing the positions and/or orientations of multiple virtual representations.

“The OHMD can display one or more virtual surgical tools or instruments, e.g. a virtual surgical knife or a virtual cut block, as well as a predetermined start point, rotation axis, flexion axis, extension axis, or other predetermined axis for the virtual surgical tool, the virtual surgical instrument, or both.

“Anything related to a position, orientation, alignment, or direction can be predetermined using pre-operative images, pre-operative data, pre-operative measurements, intra-operative data, and/or intra-operative measurements.

“Any of the following can be predetermined: position, orientation, and alignment, e.g. sagittal, coronal, and axial plane alignment; implant component angle, rotation, slope, offset, anteversion, and retroversion; and position, location, and orientation relative to anatomic landmarks or anatomic axes. These can be determined using pre-operative imaging studies, pre-operative data, intra-operative data, and/or intra-operative measurements, e.g. measurements taken for registration of a joint, a spine, a surgical site, or a bone.

“In certain embodiments, multiple coordinate systems can be used instead of a shared or common coordinate system. In this case, coordinate transfers can be applied to transfer coordinates from one coordinate system to another, e.g. for registering the OHMD, live data of the patient including the surgical site, virtual tools and/or instruments, and physical implants.
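The coordinate-transfer idea can be sketched as function composition. This is an illustrative 2D example with made-up frame names and numbers, not the patent’s method: each transfer maps points from one coordinate system into another, and composing transfers maps, say, imaging coordinates all the way into OHMD coordinates without a single shared frame.

```python
import math

# Hedged illustration: chaining coordinate transfers between systems.
# A transfer maps coordinates in one system to coordinates in another.

def make_transfer(rotation_deg=0.0, translation=(0.0, 0.0)):
    """Build a 2D rigid transfer: rotate by rotation_deg, then translate."""
    c = math.cos(math.radians(rotation_deg))
    s = math.sin(math.radians(rotation_deg))
    tx, ty = translation
    def transfer(p):
        x, y = p
        return (c * x - s * y + tx, s * x + c * y + ty)
    return transfer

def compose(outer, inner):
    """Apply the inner transfer first, then the outer one."""
    return lambda p: outer(inner(p))

# Imaging -> OR table -> OHMD, as two successive registrations.
or_from_imaging = make_transfer(rotation_deg=90.0, translation=(1.0, 0.0))
ohmd_from_or = make_transfer(translation=(0.0, 2.0))
ohmd_from_imaging = compose(ohmd_from_or, or_from_imaging)

x, y = ohmd_from_imaging((1.0, 0.0))
```

In practice these transfers would be 3D rigid transforms estimated during registration; the composition rule, however, is exactly the chaining the paragraph describes.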

“Optical Head Mounted Displays.”

“In some embodiments, one or more pairs of glasses are used. The glasses can include an optical head-mounted display. An optical head-mounted display (OHMD) is a wearable display that can reflect projected images and allows the user to see through it. Various types of OHMDs can be used to practice the invention, including curved mirror or curved combiner OHMDs as well as light-guide and wave-guide OHMDs. The OHMDs can optionally use holographic optics or diffraction optics.

“Traditional input devices, including buttons, touchpads, smartphone controllers, speech recognition, and gesture recognition, can be used with OHMDs. More advanced interfaces, such as a brain-computer interface, are also possible.

“Optionally, a server, computer, or workstation can transmit data to the OHMD. The data transmission can occur via Bluetooth, WiFi, or optical signals. Virtual data can be displayed by the OHMD in either compressed or uncompressed form. Optionally, virtual data of the patient can be reduced in resolution before being transmitted to the OHMD.

“Virtual data can be compressed for transmission to the OHMD. The OHMD can then decompress them, so that the OHMD displays uncompressed virtual data.

“Alternatively, virtual data can be transmitted to the OHMD at a reduced resolution, e.g. by increasing the slice thickness of the image data before transmission. The OHMD can then restore the resolution, e.g. by re-interpolating to a reduced slice thickness, or it can display the virtual data at a resolution lower than that of the original data.
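The reduce-then-restore idea above can be shown on a toy signal. This is an illustrative sketch only: real image data would be 2D or 3D, but a 1D row of pixel intensities keeps the example short, and the helper names are invented.

```python
# Illustrative sketch: transmit every other sample ("reduced resolution"),
# then restore intermediate values on the OHMD side by linear re-interpolation.

def downsample(row):
    """Keep every other sample to reduce the transmitted data volume."""
    return row[::2]

def reinterpolate(row, target_len):
    """Linearly interpolate a row back up to target_len samples."""
    if len(row) == 1:
        return row * target_len
    out = []
    for i in range(target_len):
        pos = i * (len(row) - 1) / (target_len - 1)   # fractional source index
        lo = int(pos)
        hi = min(lo + 1, len(row) - 1)
        frac = pos - lo
        out.append(row[lo] * (1 - frac) + row[hi] * frac)
    return out

original = [0.0, 10.0, 20.0, 30.0, 40.0]
transmitted = downsample(original)                     # 3 of 5 samples sent
restored = reinterpolate(transmitted, len(original))   # back to 5 samples
```

For this linear ramp the interpolation recovers the original exactly; for real anatomy the restored data would only approximate the original, which is the trade-off the paragraph describes.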

“In certain embodiments, the OHMD can transmit data back to a server, a workstation, or a computer. Such data can include, but are not limited to:

“Radiofrequency tags used in the embodiments can be of active or passive design, with or without a battery.”

“Exemplary optical head mounted displays include the ODG R-7 and R-8 smart glasses (Osterhout Design Group, San Francisco, Calif.), the NVIDIA 942 3-D Vision wireless glasses (NVIDIA, Santa Clara, Calif.), and the Microsoft HoloLens (Microsoft, Redmond, Wash.).

“The Microsoft HoloLens, manufactured by Microsoft, is a pair of augmented reality smart glasses running the Windows 10 operating system. The front section of the HoloLens includes the sensors, related hardware, and several cameras. The projected images are displayed in the visor through a pair of transparent combiner lenses. The HoloLens can be adjusted for the interpupillary distance (IPD) using an integrated program that recognizes gestures. A pair of speakers is also integrated; the speakers can reproduce virtual sounds without blocking external sounds. An integrated USB 2.0 micro-B socket and a 3.5 mm audio jack are also included.”

“The HoloLens contains an Intel Cherry Trail SoC, which houses the CPU and GPU, as well as a custom-made Microsoft Holographic Processing Unit (HPU). The SoC and the HPU each have 1 GB of LPDDR3 RAM and share 8 MB of SRAM. The SoC controls 64 GB of eMMC storage and runs the Windows 10 operating system. The HPU processes data from the sensors and handles tasks such as spatial mapping, gesture recognition, and voice and speech recognition. The HoloLens features Bluetooth 4.1 Low Energy (LE) and IEEE 802.11ac WiFi wireless connectivity. Via Bluetooth LE, the headset can connect to a Clicker, a finger-operated input device that can be used to select menus and functions.

“Many applications are available for the Microsoft HoloLens, including a catalog of holograms; HoloStudio, a 3D modeling application by Microsoft with 3D printing capability; the Autodesk Maya 3D creation application; and FreeForm, which integrates the HoloLens with the Autodesk Fusion 360 cloud-based 3D development application.

“Using the HPU, the HoloLens can employ sensual and natural interface commands: gaze, gesture, and voice. Gaze commands, e.g. via head tracking, allow the user to bring application focus to whatever they are perceiving. Any virtual application or button can be selected with an air tap, similar to clicking a virtual mouse; the tap can be held for a drag simulation to move a display. Voice commands can also be used.

“The HoloLens shell uses many components and concepts from the Windows desktop environment. The main menu is opened with a bloom gesture, performed by opening the hand with the palm facing up and the fingers spread. Windows can be moved to a particular position, resized, or locked in place. Virtual windows and menus can be fixed to physical objects or locations, or they can move along with the user and follow the user’s movements.

“Developers can use the Microsoft HoloLens app for Windows 10 PCs and Windows 10 Mobile to create apps, view a live stream from the HoloLens user, and capture augmented reality photos.

“The HoloLens can run almost all Universal Windows Platform apps, which can also be projected in 2D. HoloLens currently supports a select number of the Windows 10 APIs. HoloLens apps can also be developed on Windows 10 PCs. Holographic apps can be developed using the Windows Holographic APIs; Unity and Vuforia can both be used, and applications can also be developed using DirectX and the Windows APIs.”

“Computer Graphics Viewing Pipeline”

“In some embodiments, the optical head mounted display uses a computer graphics viewing pipeline, e.g. as shown in FIG. 16B, to display three-dimensional or two-dimensional objects in 3D space:”

“Registration:”

“The OHMD computer graphics system will display different objects, e.g. virtual anatomical models, models of virtual instruments, and virtual guides. Each object is initially defined in its own model coordinate system. During registration, the spatial relationships between the objects are established, and each object is transformed from its model coordinate system into a common global coordinate system. Different registration techniques, as described below, can be used for this purpose.

“For augmented reality OHMDs, which superimpose computer-generated objects onto a live view of the physical environment, the global coordinate system is defined by the environment. A process called spatial mapping creates a computer representation of the environment, which allows the environment to be registered and merged with the computer-generated objects, thereby defining the spatial relationship between the environment and the computer-generated objects.

“View Projection:”

Once all objects are registered in the common global coordinate system, they can be shown on a display. This is done by transforming their coordinates from the global coordinate system into the view coordinate system and then projecting them onto the display plane. The view projection step uses the viewpoint and view direction to define these transformations. Stereoscopic displays such as an OHMD use two view projections, one for each eye. To correctly superimpose computer-generated objects on the environment, the location of the viewpoint relative to the environment must be known. As the viewpoint or view direction changes, e.g. with head movement, the view projections are updated to reflect the new view.
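A minimal sketch of this view projection step for a stereoscopic display: each eye gets its own view matrix, offset by an assumed interpupillary distance, followed by a shared perspective projection onto the display plane. The function names and parameter values are illustrative, not taken from the patent.

```python
import numpy as np

def look_at(eye, target, up):
    """View matrix: transforms world coordinates into the eye's view coordinates."""
    f = target - eye; f = f / np.linalg.norm(f)          # forward
    r = np.cross(f, up); r = r / np.linalg.norm(r)       # right
    u = np.cross(r, f)                                   # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def perspective(fov_y_deg, aspect, near, far):
    """Standard perspective projection onto the display plane."""
    t = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    P = np.zeros((4, 4))
    P[0, 0], P[1, 1] = t / aspect, t
    P[2, 2] = (far + near) / (near - far)
    P[2, 3] = 2 * far * near / (near - far)
    P[3, 2] = -1.0
    return P

# Two view projections, one per eye, separated by an assumed
# interpupillary distance (ipd); both converge on the same fixation point.
ipd = 0.064
target = np.array([0.0, 0.0, -1.0])
up = np.array([0.0, 1.0, 0.0])
proj = perspective(60.0, 16 / 9, 0.1, 100.0)
left = proj @ look_at(np.array([-ipd / 2, 0.0, 0.0]), target, up)
right = proj @ look_at(np.array([+ipd / 2, 0.0, 0.0]), target, up)
```

When the viewpoint or view direction changes with head movement, only the two `look_at` matrices need to be recomputed; the projection stays fixed.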

“Eye Tracking Systems.”

“The invention provides methods for using the human eye, including lid movements, eye movements, and movements induced by the peri-orbital muscles, to execute computer commands. The invention also provides methods for executing computer commands via facial movements and head movements.

“Command execution can be induced by eye movements and lid movements, as well as by movements of the peri-orbital muscles, facial movements, and head movements. This is useful in situations where operators do not have their hands free to type or execute commands on a touchpad, for example in industrial settings, including aircraft and automotive manufacturing, as well as in medical and surgical procedures.

“In certain embodiments, an optical head mounted display can include an eye tracking system. There are many types of eye tracking systems that can be used; the examples given are not intended to limit the invention. Any eye tracking system known in the art can be used.”

“Eye movement can be broken down into fixations and saccades. When the eye gaze pauses in one position, it is called a fixation; the rapid movement to another position is called a saccade. A scan path is the resulting series of fixations and saccades. The central one to two degrees of the visual angle provide most of the visual information; information from the periphery is less useful. The locations of fixations along a scan path show which information was processed during an eye tracking session.
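The fixation/saccade distinction can be illustrated with a simple velocity-threshold classifier (often called I-VT in the eye-tracking literature). The 30 deg/s threshold is an assumed, commonly cited value, not a figure from the patent.

```python
import numpy as np

def classify_gaze(x_deg, y_deg, fs_hz, saccade_thresh_deg_s=30.0):
    """Label each gaze sample 'fixation' or 'saccade' by angular velocity (I-VT)."""
    dx = np.gradient(np.asarray(x_deg, dtype=float))
    dy = np.gradient(np.asarray(y_deg, dtype=float))
    velocity = np.hypot(dx, dy) * fs_hz          # deg/s
    return ["saccade" if v > saccade_thresh_deg_s else "fixation" for v in velocity]

# 60 Hz trace: gaze holds, jumps ~5 degrees within one sample, then holds again.
x = [0, 0, 0, 5, 5, 5]
labels = classify_gaze(x, [0] * 6, fs_hz=60)
```

Runs of consecutive fixation samples, grouped together, would form the fixations of a scan path.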

Eye trackers can measure the rotation and movement of the eye in a variety of ways. These include optical tracking without direct contact to the eye, measurement of electric potentials using electrodes placed around the eyes, or measurement of the movement of an object (such as a contact lens) attached to the eye.

An attachment to the eye can be, for example, a special contact lens with an embedded mirror or magnetic field sensor. The movement of the attachment can be measured under the assumption that it does not slip as the eye rotates. Accurate measurement of eye movement can be achieved with tight-fitting contact lenses. Magnetic search coils allow measurement of eye movement in the horizontal, vertical, and torsional directions.

Non-contact optical methods can also be used to measure eye movement. With this technique, light (optionally infrared) is reflected from the eye and detected by an optical sensor or camera. Eye movement and/or rotation can then be measured from changes in the reflections. Optical sensor or video-based trackers can track the corneal reflection (the first Purkinje image) and the center of the pupil over time. The dual-Purkinje eye tracker is a more sensitive variant; it uses the reflections from the front of the cornea (first Purkinje image) and the back of the lens (fourth Purkinje image) as features to track. An even more sensitive method is to image features from inside the eye, such as the retinal blood vessels, and follow these features as the eye moves or rotates. Gaze tracking can thus be performed with optical methods using optical sensors or video recording.

“Optical or video-based eye trackers can be used in some embodiments. A camera can focus on one or both eyes and track their movement as the viewer performs an activity, such as a surgical procedure. The eye tracker can use the center portion of the pupil for tracking. Infrared or near-infrared non-collimated light can be used to create corneal reflections. The vector between the pupil center and the corneal reflection is used to compute the point of regard on a surface or the gaze direction. A calibration procedure may be performed at the beginning or end of the eye tracking session.

Both bright-pupil and dark-pupil eye tracking can be used. The difference is determined by the location of the illumination source relative to the optics. If the illumination is coaxial with the optical path, the eye acts as a retroreflector: the light reflects off the retina, creating a bright pupil effect similar to red eye. If the illumination source is offset from the optical path, the pupil appears dark, because the retroreflection is directed away from the camera or optical sensor.

Bright-pupil tracking has the advantage of greater iris/pupil contrast, which allows more robust eye tracking for all types of iris pigmentation. It also reduces interference from eyelashes and can be used for tracking in both dark and bright lighting conditions.

“Optical tracking can include tracking the movement of the eye, including the pupil, as described above. The optical tracking method can also include tracking the movements of the eyelids, facial muscles, and peri-orbital muscles.

“In some embodiments, the eye tracking system can be integrated into the optical head mounted display. In some embodiments, head motion can be tracked simultaneously, for example using an array of accelerometers and/or gyroscopes forming an inertial measurement unit (see below).

“In certain embodiments, electric potentials can be measured with electrodes placed around the eyes. The eyes are the origin of a steady electric potential field, which is present even when the eyes are closed. It can be modeled as a dipole, with the positive pole at the cornea and the negative pole at the retina. It can be measured by placing two electrodes around the eye. The measurement of these electric potentials is called an electro-oculogram (EOG).

“If the eye moves from the center position toward the periphery, the retina approaches one electrode while the cornea approaches the opposite one. This change in the orientation of the dipole, and thus in the electric potential field, causes a change in the measured EOG signal. Analyzing these changes allows eye movement to be assessed. Two separate movement components can be identified: horizontal and vertical. A posterior skull electrode can measure an EOG component in the radial direction, usually the average of the EOG channels referenced to the posterior electrode. The radial EOG channel can detect the saccadic spike potentials that originate from the extra-ocular muscles at the onset of saccades.

EOG can be used to detect slow eye movement and gaze direction, as well as blinks and the rapid, saccadic eye movements associated with gaze shifts. Unlike optical and video-based eye trackers, EOG allows the recording of eye movements even with the eyes closed. A significant disadvantage of EOG is that it is less accurate in determining gaze direction than an optical or video tracker. In certain embodiments, EOG can be combined with optical or video tracking methods.
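As an illustrative sketch (not from the patent), blink-sized deflections in a vertical EOG channel can be detected with a simple amplitude threshold on the signal; the threshold value here is an assumed example, and real systems add filtering and artifact rejection.

```python
import numpy as np

def detect_blinks(veog_uv, amp_thresh_uv=100.0):
    """Return sample indices where the vertical EOG first crosses above a
    blink-sized amplitude threshold (rising edges only)."""
    above = np.asarray(veog_uv) > amp_thresh_uv
    rising = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return rising

# Synthetic vertical EOG (microvolts): flat baseline with one blink-like
# deflection spanning samples 40-49.
sig = np.zeros(100)
sig[40:50] = 300.0
blinks = detect_blinks(sig)
```

Detected blinks, or pairs of blinks within a time window, could then be mapped to computer commands as the patent describes for lid movements.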

A sampling rate of 15, 20, 25, 30, 50, 60, 100 Hz or more can be used; any sampling frequency is possible. In many embodiments, sampling rates greater than 30 Hz will be preferred.

“Measuring Location, Orientation, Acceleration”

“The position, orientation, and acceleration of the human skull and other parts of the body, e.g. the hands, arms, legs, and feet of the operator and/or the patient, can be measured. For example, the patient’s head and extremities (hip, knee, ankle, and foot, as well as shoulder, elbow, wrist, and hand) can be measured using a combination of accelerometers and gyroscopes. Magnetometers can also be used in certain applications. Measurement systems that use any of these components are called inertial measurement units (IMUs).

The term IMU, as used herein, refers to an electronic device that measures and transmits information about a body’s specific force and angular rate, using a combination of accelerometers and gyroscopes, optionally supplemented by magnetometers. An IMU, or its components, can be registered with or coupled to a navigation system; for example, an IMU can register a body or parts of a body in a shared coordinate system. An IMU can also be wireless, e.g. via WiFi or Bluetooth.
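A common way to fuse the accelerometer and gyroscope outputs of an IMU into an orientation estimate is a complementary filter: the gyroscope is accurate over short intervals but drifts, while the accelerometer’s gravity-based tilt estimate is noisy but stable long-term. This is a textbook sketch, not the patent’s method, and the filter constant is an assumed value.

```python
def complementary_filter(pitch_deg, gyro_rate_deg_s, accel_pitch_deg, dt, alpha=0.98):
    """Blend the gyro-integrated angle (weight alpha) with the accelerometer's
    gravity-derived tilt angle (weight 1 - alpha) into one pitch estimate."""
    return alpha * (pitch_deg + gyro_rate_deg_s * dt) + (1 - alpha) * accel_pitch_deg

# Sensor at rest but tilted 10 degrees: the gyro reads ~0 deg/s while the
# accelerometer keeps indicating 10 degrees; the estimate converges to 10.
pitch = 0.0
for _ in range(500):
    pitch = complementary_filter(pitch, gyro_rate_deg_s=0.0,
                                 accel_pitch_deg=10.0, dt=0.01)
```

Adding a magnetometer extends the same idea to heading, which gravity alone cannot observe.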

“Accelerometers can be used in pairs to detect differences in acceleration between the frames of reference associated with two points. Single- and multi-axis accelerometers can detect the magnitude and direction of acceleration. They can also be used to sense orientation (from the direction of weight), coordinate acceleration, vibration, and shock. In some embodiments, micromachined accelerometers can be used to determine the position of the device and of the operator’s head.

“Piezoelectric and piezoresistive devices are available to convert mechanical motion into electrical signals. Piezoelectric accelerometers are based on single crystals or piezoceramics; piezoresistive accelerometers can also be used. Capacitive accelerometers use a silicon micro-machined sensing element.

“Accelerometers in some embodiments can be small micro-electro-mechanical systems (MEMS), consisting, for instance, of a cantilever beam with a proof mass.

“Optionally, the accelerometer can be integrated into the optical head mounted display. The outputs of both the eye tracking system and the accelerometer(s) can be used for command execution.”

“With an IMU, the following information about the operator and/or the patient can be captured: speed, velocity, and acceleration; position in space and positional change; alignment and orientation; and direction of movement, e.g. using sequential measurements.
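“Using sequential measurements” refers to integrating the IMU readings over time: acceleration integrates once to velocity and twice to positional change. The sketch below uses simple trapezoidal integration; in practice such dead reckoning drifts quickly and is corrected with other sensors or navigation markers.

```python
import numpy as np

def integrate_motion(accel_m_s2, dt):
    """Integrate sequential acceleration samples once for velocity and
    twice for positional change (trapezoidal rule, starting from rest)."""
    a = np.asarray(accel_m_s2, dtype=float)
    v = np.concatenate([[0.0], np.cumsum((a[1:] + a[:-1]) / 2.0) * dt])
    p = np.concatenate([[0.0], np.cumsum((v[1:] + v[:-1]) / 2.0) * dt])
    return v, p

# Constant 1 m/s^2 for 1 s sampled at 100 Hz:
# velocity reaches 1 m/s and positional change reaches 0.5 m.
v, p = integrate_motion(np.ones(101), dt=0.01)
```

Any bias in the accelerometer grows quadratically in the position estimate, which is why IMU data are usually fused with optical or navigation-marker tracking.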

The IMU can transmit information about the operator’s and/or patient’s body parts, including but not limited to: head, chest, trunk, shoulder, elbow, wrist, hand, fingers, arm, hip, knee, ankle, foot, toes, leg, and inner organs, e.g. brain, heart, lungs, liver, spleen, bowel, bladder, etc.”

Multiple IMUs can be placed on the OHMD, the operator, and/or the patient, and these IMUs can be cross-referenced within one or more coordinate systems. An IMU is not required when a navigation system is used with the OHMD. Navigation markers, including retroreflective markers, infrared markers, and RF markers, can be attached to the OHMD, as well as to portions or segments of the patient’s anatomy. This allows the OHMD to be registered in relation to the patient’s anatomy. The OHMD, or the operator wearing it, can be registered in the one or more coordinate systems used by the navigation system, and movements of the OHMD or operator can be registered in relation to the patient in these coordinate systems. The OHMD and the patient’s virtual data can be registered in the same coordinate system. IMUs, optical markers, and navigation markers such as infrared, retroreflective, and RF markers can all be used to register the virtual data with the live data. Any change in the position of the OHMD relative to the patient can be used to move the virtual data correspondingly, so that the visual image of the virtual data and the live data of the patient seen through the OHMD remain aligned regardless of movement of the operator’s head or the OHMD. If multiple OHMDs are used, e.g. one for the primary surgeon and additional ones for assistants, the IMUs, RF markers, infrared markers, navigation markers, and/or spatial anchors placed on the operators or the patient can all be registered in the same coordinate systems as the primary OHMD. Each additional OHMD can then individually monitor its position, orientation, alignment, and any changes thereof in relation to the patient and/or the surgical site.
This allows alignment and/or superimposition of corresponding structures in the live data and the virtual data of the patient to be maintained for each additional OHMD, regardless of its position, orientation, or alignment relative to the patient and/or the surgical site.

Referring to FIG. 1, a system 10 can use multiple OHMDs 11, 13, 14 for multiple viewers, e.g. a primary surgeon, a second surgeon, surgical assistant(s), and/or nurse(s). The multiple OHMDs can be registered in a common coordinate system 15 using anatomic structures, anatomic landmarks, optical markers, navigation markers, spatial anchors, and/or reference phantoms. Pre-operative data 16 of the patient can also be registered in the common coordinate system 15. Live data 18 of the patient, e.g. from the surgical site, can be registered in the common coordinate system 15. Intra-operative imaging studies 20 can likewise be registered in the common coordinate system, as can OR references, e.g. an OR table. A virtual surgical plan 24 can be generated, modified, or updated using the pre-operative data 16, the live data 18, or any combination thereof, and can be registered in the common coordinate system. The OHMDs 11, 13, 14 can project digital holograms of the virtual data into the left eye view using the view orientation and position of the left eye 26, and into the right eye view using the view orientation and position of the right eye 28, creating a shared digital holographic experience 30. Using a virtual interface 32, the surgeon can execute commands, e.g. to display the next predetermined bone cut. Triggered by the virtual surgical plan, an imaging study, or intra-operative measurements, the OHMDs 11, 13, 14 can project digital holograms for the next surgical step 34, superimposed on and aligned with the surgical site in a predetermined position and orientation.

By registering the live data of the patient, e.g. the surgical field, and the virtual data in a common coordinate system, each individual OHMD can superimpose the virtual data onto the live data of the patient for its viewer’s respective view angle or perspective. The virtual data can be superimposed on and/or aligned with the live data of the patient regardless of view angle or perspective, and the alignment and/or superimposition can be maintained while the viewer moves his or her head or body.

“Novel User Interfaces.”

“One object of this invention is to provide a novel user interface in which movements of the human eye, including lid movements, movements induced by the peri-orbital muscles, and select skull muscle movements, are detected by the eye tracking system and processed to execute predefined computer commands.”

“Table 1” contains an exemplary list of eye movements and lid movements that the system can detect.
