Invented by Martin Emile Lachaine and Silvain Beriault; assigned to Elekta Inc
The Elekta Inc. invention works as follows. Systems and techniques may be used to estimate a patient's state during radiotherapy treatment. A method can include, for example, creating a dictionary of expanded patient measurements and their corresponding patient states by using a preliminary motion model. The method can include using machine learning to train a correspondence model that relates an input patient measurement to an output patient state. The method can include estimating, using a processor, the patient state with the correspondence model.
Background for Machine Learning Approach to Real-Time Patient Motion Monitoring
In radiotherapy and radiosurgery, treatment planning is usually performed using medical images. This requires identification and delineation of critical organs and target volumes in the images. When the patient moves (e.g., from breathing), it can be difficult to track the different objects accurately.
Current methods are not able to measure a patient's changing state in real time. Some techniques, for example, use 2D images, such as 2D kV projections or 2D MRI slices, which cannot track all objects.
Other techniques may use surface information to detect patient motion, either directly by tracking markers on the skin or through a box or vest attached to the patient. These techniques assume that the surface information correlates with the patient's internal state, which can be inaccurate.
Still other techniques rely on radio-opaque markers tracked by x-ray, or on magnetically tracked markers. These techniques are invasive and cover only a limited area of the patient.
In the following detailed description, reference is made to the accompanying drawings, which form a part hereof and which show, by way of illustration, specific embodiments in which the present invention may be practiced. These embodiments are also called "examples" in the following description. The embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the invention. It is understood that the embodiments may be combined, or that other embodiments may be used, and that structural, logical, and electrical changes may be made without departing from the scope. The detailed description should not be taken in a limiting sense; the scope is defined by the claims and their equivalents.
Image guided radiation therapy (IGRT) is a technique that uses imaging to determine the position of a patient in the treatment room immediately before irradiation. This allows more precise targeting of anatomy, such as organs, tumours, or organs at risk. When the patient is expected to move during treatment, for example when breathing causes a lung tumour to move in a quasi-periodic pattern, or when bladder filling causes the prostate to drift, extra margins can be added around the target to accommodate the movement. These larger margins come at the cost of higher dose to the normal tissue surrounding the target, which can lead to more side effects.
IGRT can use computed tomography (CT) imaging, cone beam CT (CBCT) imaging, magnetic resonance (MR) imaging, or positron emission tomography (PET) imaging to obtain a 3D or 4D image of a subject prior to irradiation. A CBCT-enabled linac (linear accelerator), for example, may consist of a kV imaging source/detector affixed at a 90-degree angle to the radiation beam; an MR-linac device may be a linac integrated directly with an MR scanner.
Localizing movement during actual irradiation delivery (intrafraction motion) may allow reduction of the additional treatment margins that would otherwise be used to encompass motion. This could allow higher doses to be delivered, reduce side effects, or both. Most IGRT imaging techniques, however, are not fast enough to capture intrafractional motion. For example, CBCT needs multiple kV images from different angles to reconstruct a 3D patient image, and 3D MR requires multiple 2D slices or filling of the entire 3D k-space, either of which can take several minutes to produce a 3D image.
In some cases, real-time or quasi-real-time data of the kind that would normally be acquired before generating a 3D IGRT image can be used to estimate an instantaneous 3D image at a faster refresh rate from the incomplete but fast stream of incoming information. For example, 2D kV images or 2D MR slices can be used to estimate a 3D CBCT or 3D MR image that changes with patient movement during treatment. These 2D images are fast to acquire but provide only a partial view of the patient in 3D.
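The overall idea can be sketched end to end with toy data: a preliminary motion model constructs a dictionary of (measurement, state) pairs, a simple correspondence model is fitted, and a new partial measurement is mapped to an estimated state. Everything below (the sinusoidal motion model, the projection-style measurement, and a plain least-squares fit standing in for the machine-learned model) is an illustrative assumption, not the patented implementation.

```python
import numpy as np

def preliminary_motion_model(phase):
    """Toy patient state: three organ positions driven by respiratory phase."""
    return np.sin(2 * np.pi * phase) * np.array([1.0, 0.5, 0.25])

def measure(state):
    """Toy partial measurement: a 1-D projection (sum) of the full state."""
    return np.array([state.sum()])

# 1) Build a dictionary of constructed measurements and corresponding states.
phases = np.linspace(0.0, 1.0, 50, endpoint=False)
states = np.stack([preliminary_motion_model(p) for p in phases])
measurements = np.stack([measure(s) for s in states])

# 2) Fit a correspondence model: least squares from measurement (+bias) to state.
X = np.hstack([measurements, np.ones((len(measurements), 1))])
W, *_ = np.linalg.lstsq(X, states, rcond=None)

def estimate_state(measurement):
    """3) Estimate the patient state from a new partial measurement."""
    return np.hstack([measurement, 1.0]) @ W

est = estimate_state(measure(preliminary_motion_model(0.1)))
```

Because the toy state is linear in the toy measurement, the least-squares model recovers it exactly; a real system would replace both ends with images and a far richer learned model.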
A patient-state generator can receive partial measurements as input (e.g., a 2D image) and produce (e.g., estimate) a patient state as output (e.g., a 3D image). The generator can use a single current partial measurement, a predicted future partial measurement, a past partial measurement, or a collection of partial measurements (e.g., the last 10). These partial measurements can come from a single modality, such as an x-ray projection or an MR slice, or from multiple modalities, for example the positions of reflective surface markers on the patient's skin synchronized to x-ray projections. A patient state can be a single 3D image or can be multi-modality: the state could include multiple 3D images, each of which provides different information about the patient's condition. For example, the state could combine two or more 3D images that provide different information, such as one offering enhanced tissue contrast, a "CT-like" image with high geometric accuracy whose voxels relate to density and are therefore helpful for dose calculation, and one providing information on patient function. The patient state can also include information that is not imaging-related; for example, it can include any number of points of interest, such as a target position, contours, surfaces, or deformation vector fields.
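As a rough sketch of such a generator, the class below buffers the last N partial measurements and delegates to a pluggable estimator that turns them into a 3D state. The class name, the averaging estimator, and the window size are all assumptions for illustration, not the patented design.

```python
import numpy as np
from collections import deque

class PatientStateGenerator:
    def __init__(self, estimator, window=10):
        self.estimator = estimator          # maps a measurement list -> 3D state
        self.buffer = deque(maxlen=window)  # e.g. the last 10 partial images

    def push(self, partial_measurement):
        """Receive one partial measurement from the real-time stream."""
        self.buffer.append(np.asarray(partial_measurement, dtype=float))

    def current_state(self):
        """Estimate the patient state from the buffered measurements."""
        if not self.buffer:
            raise RuntimeError("no measurements received yet")
        return self.estimator(list(self.buffer))

# Toy estimator: average the buffered 2D frames and stack copies into a volume.
gen = PatientStateGenerator(lambda ms: np.stack([np.mean(ms, axis=0)] * 4))
for i in range(12):                         # only the last 10 frames are kept
    gen.push(np.full((2, 2), float(i)))
state3d = gen.current_state()
```

The deque with `maxlen` naturally implements "the last 10 partial measurements": older frames fall out as new ones arrive.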
Partial measurements as described above can be received, for example, as a stream of real-time images (e.g., 2D images) taken by a kV or MR imager. A kV imaging system can produce 2D stereoscopic images (e.g., two orthogonal x-rays acquired almost simultaneously) for the real-time stream. The kV imaging device can be attached to an apparatus (e.g., a gantry) or fixed to the wall of a room. An MR imaging device can produce 2D MR slices that are orthogonal or parallel to each other. A patient state can be generated from a single received image or pair of images; for example, at any moment in time, the patient state corresponding to the last image received from the real-time stream may be generated.
In one example, the patient model can be based on data collected during a fraction (after the patient has been set up, but before the beam is switched on), or from another fraction. It may also be based on data from other patients, on generalized anatomy of the patient, on mechanical models, or on any other information that may help define a patient state from partial measurements. The patient model may be a pre-treatment 4D dataset that represents changes in the patient's state over a short period of time, such as one representative respiratory cycle. The patient model can be trained (e.g., using machine learning techniques) to relate an input measurement (e.g., an image or pair of images from the real-time stream) to an output state, for example using a dictionary that pairs constructed measurements with corresponding states. The patient model can be warped with a deformation vector field (DVF) parameterized by one or more parameters, resulting in a patient state.
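One minimal way to realize the dictionary idea, assuming a nearest-neighbour match rather than a trained model, is to pair each constructed measurement with its patient state and answer a live measurement with the closest entry. The feature and state values below are illustrative stand-ins for image data.

```python
import numpy as np

# Dictionary of constructed measurements and their corresponding states
# (illustrative scalars; in practice these would be images or features).
dict_measurements = np.array([[0.0], [1.0], [2.0], [3.0]])
dict_states = np.array([[10.0], [11.0], [12.0], [13.0]])

def lookup_state(measurement):
    """Return the state paired with the nearest dictionary measurement."""
    dist = np.linalg.norm(dict_measurements - measurement, axis=1)
    return dict_states[np.argmin(dist)]

nearest = lookup_state(np.array([1.9]))
```

A learned correspondence model can be viewed as a smooth generalization of this lookup, interpolating between dictionary entries instead of snapping to one.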
A 4D-dataset patient model can include a patient state that changes with a single parameter, such as the phase of a breathing cycle. Such a model can be used to create a patient state that changes over time based on one representative respiratory cycle, with each breath treated as similar. This simplifies modeling, because chunks of partial image data from different breathing cycles can be assigned to the one representative cycle, and a 3D image can then be reconstructed for each phase "bin".
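The binning step can be sketched as folding acquisition times from many breathing cycles onto one representative cycle split into a fixed number of phase bins. A constant breathing period is assumed here purely for simplicity; real systems derive phase from a measured respiratory signal.

```python
import numpy as np

def phase_bin(timestamps, period, n_bins=8):
    """Map acquisition times to phase-bin indices of a representative cycle."""
    phase = (np.asarray(timestamps, dtype=float) % period) / period  # in [0, 1)
    return (phase * n_bins).astype(int)                              # 0..n_bins-1

t = np.array([0.0, 1.0, 4.0, 5.0, 9.0])   # seconds, spanning three cycles
bins = phase_bin(t, period=4.0)
```

Data landing in the same bin (here, the samples at 1 s, 5 s, and 9 s) would be pooled to reconstruct that phase's 3D image.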
The patient's state can be represented, for instance, as a 3D image, or as a 3D DVF plus a 3D reference image. These representations may be equivalent, since the 3D DVF and the 3D reference image can together be used to produce the 3D image (e.g., by deforming the 3D reference image using the 3D DVF).
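The "reference image plus DVF" equivalence can be illustrated by resampling the reference at each output voxel's position displaced by the DVF; `scipy.ndimage.map_coordinates` is one common tool for this. The uniform one-voxel shift below is a toy DVF chosen so the result is easy to verify, not a realistic deformation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

ref = np.zeros((4, 4, 4))
ref[1, 1, 1] = 1.0                    # a single bright voxel in the reference

dvf = np.zeros((3, 4, 4, 4))          # per-axis displacement, in voxels
dvf[0] = 1.0                          # output voxel x samples reference at x + 1

grid = np.indices(ref.shape).astype(float)
warped = map_coordinates(ref, grid + dvf, order=1)  # trilinear resampling
```

After warping, the bright voxel appears shifted by one voxel along the first axis, which is exactly "deform the 3D reference image using the 3D DVF".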
FIG. 1 illustrates a radiotherapy system adapted to perform patient state estimation. The patient state estimation is performed to allow the radiotherapy system to provide radiation therapy based on aspects of captured medical imaging data. The radiotherapy system comprises an image processing computing system 110 that hosts patient state processing logic 120. The image processing computing system 110 can be connected to a network (not shown), and this network can be connected to the Internet. The network can, for example, connect the image processing computing system 110 to one or more medical information sources (such as a radiology information system, an electronic health record system, or an oncology information system), one or more image data sources, an image acquisition system 170, or a treatment device 180 (e.g., a radiation therapy device). As an example, the image processing computing system 110 can be configured to execute instructions or data from the patient state processing logic 120 to perform image operations, as part of operations that generate and customize treatment plans used by the treatment device 180.
The memory 114 can comprise read-only memory (ROM), phase-change random access memory (PRAM), static random access memory (SRAM), flash memory, random access memory (RAM), dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), electrically erasable programmable read-only memory (EEPROM), static memory, or any other type of non-transitory medium that can store information, including images, data, or computer-executable instructions (e.g., stored in any format). The processing circuitry 112 can, for example, access computer program instructions, read them from the ROM or another suitable memory location, and load them into the RAM for execution by the processing circuitry 112.
The storage device 116 can be a drive unit with a machine-readable medium that contains one or more sets of instructions or data structures (e.g., software) embodying or utilized by one or more of the methods or functions described in this document (including, for example, the patient state processing logic 120 and the user interface 140). The instructions can also reside, completely or partially, in the memory 114 and/or the processing circuitry 112 during their execution by the image processing computing system 110. Both the memory 114 and the processing circuitry 112 thus constitute machine-readable media.
The memory device 114 or the storage device 116 can constitute a non-transitory computer-readable medium. The computer-readable medium can, for example, be used to store or load software application instructions on the memory device 114 or the storage device 116. Software applications stored or loaded on the memory device 114 or the storage device 116 can include an operating system, such as one for common computer systems or for software-controlled devices. The image processing computing system 110 can also run a number of software programs that include software code implementing the patient state processing logic 120 and the user interface 140. The memory device 114 or the storage device 116 can also store or load a complete software application, part of one, or code and data associated with a software application, which is then executable by the processing circuitry 112. The memory device 114 and the storage device 116 can further be used to store, load, or manipulate data such as imaging data, patient data, dictionaries, artificial intelligence models, labels, and mapping data. Software programs can also be stored on removable computer media, including hard disks, computer disks, DVDs, HD or Blu-ray discs, USB flash drives, SD cards, memory sticks, and other suitable devices.
Although not shown, the image processing computing system 110 can include a communication interface, network interface cards, and communications circuitry. Example communication interfaces include network adapters, cable connectors, USB connectors, parallel connectors, high-speed data transfer adapters (e.g., fiber optic, USB 3.0, Thunderbolt), wireless network adapters (e.g., IEEE 802.11/Wi-Fi adapters), and telecommunications adapters (e.g., for communicating with 3G, LTE, and 5G networks). A communication interface can include digital and/or analog communication devices that allow a machine or device to communicate with another machine or device, such as remote components, over a network. The network can provide functionality such as a local area network (LAN), a wireless network, a cloud computing environment (e.g., software as a service, platform as a service, infrastructure as a service), a client-server network, a wide area network (WAN), or similar, and may also include other systems, such as additional image processing computing systems or image-based components associated with radiotherapy or medical imaging operations.