Lobectomy is a common and effective procedure for treating early-stage lung cancers. However, for patients with compromised pulmonary function (e.g., COPD), lobectomy can lead to major postoperative pulmonary complications. A technique for quantitatively predicting postoperative pulmonary function is needed to assist surgeons in assessing a candidate's suitability for lobectomy. We present a framework for quantitatively predicting postoperative lung physiology and function using a combination of lung biomechanical modeling and machine learning strategies. A set of 10 patients undergoing lobectomy was used for this purpose. The image input consists of pre- and post-operative breath-hold CTs. An automated lobe segmentation algorithm and lobectomy simulation framework was developed using a constrained generative adversarial network approach. Using the segmented lobes, a patient-specific GPU-based linear elastic biomechanical and airflow model and surgery simulation was then assembled that quantitatively predicted the lung deformation during the forced expiration maneuver. The target lobe was then removed by simulating a volume reduction and computing the elastic stress on the surrounding residual lobes and the chest wall. Using the deformed lung anatomy that represents the post-operative lung geometry, the forced expiratory volume in 1 second (FEV1, the amount of air exhaled by a patient in 1 second starting from maximum inhalation) and the forced vital capacity (FVC, the total amount of air forcibly exhaled from maximum inhalation) were then modeled. Our results demonstrated that the proposed approach quantitatively predicted the postoperative lobe-wise lung function in terms of FEV1 and FEV1/FVC.
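The two spirometric endpoints defined above can be read directly off a forced-expiration volume-time curve. A minimal NumPy sketch (not the authors' code; the single-exponential emptying curve and its constants are purely illustrative assumptions):

```python
import numpy as np

def spirometry_endpoints(time_s, exhaled_volume_l):
    """FEV1, FVC, and FEV1/FVC from a forced-expiration volume-time curve.

    time_s: sample times in seconds, with t = 0 at maximum inhalation.
    exhaled_volume_l: cumulative exhaled volume (L) at each sample.
    """
    fvc = float(exhaled_volume_l[-1])                       # total exhaled volume
    fev1 = float(np.interp(1.0, time_s, exhaled_volume_l))  # volume at t = 1 s
    return fev1, fvc, fev1 / fvc

# Purely illustrative single-exponential emptying curve (not patient data)
t = np.linspace(0.0, 6.0, 601)
v = 4.5 * (1.0 - np.exp(-t / 0.6))   # assumed FVC ~4.5 L, time constant 0.6 s
fev1, fvc, ratio = spirometry_endpoints(t, v)
```

In the predictive framework described above, the same quantities would be evaluated on the simulated post-operative expiration rather than on a measured curve.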
Adaptive radiotherapy is an effective procedure for the treatment of cancer, in which the daily anatomical changes in the patient are quantified and the dose delivered to the tumor is adapted accordingly. Deformable image registration (DIR) inaccuracies, together with delays in retrieving on-board cone beam CT (CBCT) image datasets from the treatment system and registering them with the planning kilovoltage CT (kVCT), have restricted the adaptive workflow to a small number of patients. In this paper, we present an approach for improving DIR accuracy using a machine learning approach coupled with biomechanically guided validation. For a given set of 11 planning prostate kVCT datasets and their segmented contours, we first assembled a biomechanical model to generate synthetic abdominal motions, bladder volume changes, and physiological regression. For each of the synthetic CT datasets, we then injected noise and artifacts into the images using a novel procedure in order to closely mimic CBCT datasets. We then used the simulated CBCT images to train neural networks that predicted the noise- and artifact-removed CT images. For this purpose, we employed a constrained generative adversarial network (cGAN), which consisted of two deep neural networks, a generator and a discriminator. The generator produced the artifact-removed CT images while the discriminator computed the accuracy. The DIR results were finally validated using the model-generated landmarks. Results showed that the artifact-removed CT matched the planning CT closely. Comparisons were performed using image similarity metrics, and a normalized cross correlation of >0.95 was obtained with the cGAN-based image enhancement. In addition, when DIR was performed, the landmarks matched within 1.1 +/- 0.5 mm. This demonstrates that adversarial DNN-based CBCT enhancement improves DIR accuracy and thereby strengthens the adaptive radiotherapy workflow.
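The normalized cross correlation quoted above as an image similarity metric is straightforward to compute; a minimal NumPy sketch (not the authors' evaluation code, with synthetic arrays standing in for the CT volumes):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two images of equal shape."""
    a = a.ravel().astype(float) - a.mean()
    b = b.ravel().astype(float) - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Identical images score 1.0; mild simulated artifacts lower the score.
rng = np.random.default_rng(0)
clean = rng.normal(size=(64, 64))
noisy = clean + 0.1 * rng.normal(size=(64, 64))
```

A score above the paper's 0.95 threshold indicates that the enhanced CBCT is nearly interchangeable with the planning CT in intensity structure.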
Our goal in this paper is to data mine the wealth of information contained in the dose-volume objects used in external
beam radiotherapy treatment planning. In addition, by performing computational pattern recognition on these mined
objects, the results may help identify predictors for unsafe dose delivery. This will ultimately enhance current clinical
registries through the inclusion of detailed dose-volume data employed in treatments. The most efficient way of including
dose-volume information in a registry is through DICOM RT objects. With this in mind, we have built a DICOM RT
specific infrastructure, capable of integrating with larger, more general clinical registries, and we will present the results
of data mining these sets.
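As an illustration of the kind of dose-volume object that would be mined, the sketch below computes a cumulative dose-volume histogram (DVH) from a dose grid and a binary structure mask. This is a hypothetical NumPy example; in practice the grids would be read from DICOM RT Dose and RT Structure Set objects, e.g. with a library such as pydicom:

```python
import numpy as np

def cumulative_dvh(dose_gy, mask, bin_gy=0.5):
    """Cumulative DVH: fraction of a structure's volume receiving at
    least each dose level, from a dose grid and a boolean mask."""
    d = dose_gy[mask]
    levels = np.arange(0.0, d.max() + bin_gy, bin_gy)
    volume_frac = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_frac

# Toy example: a linear dose gradient across a fully included structure
dose = np.linspace(0.0, 10.0, 100).reshape(10, 10)
mask = np.ones_like(dose, dtype=bool)
levels, frac = cumulative_dvh(dose, mask)
```

Pattern recognition over many such curves (e.g. clustering or outlier detection on DVH features) is one way the mined objects could flag unsafe dose delivery.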
Three-dimensional volumetric imaging correlated with respiration (4DCT) typically utilizes external breathing
surrogates and phase-based models to determine lung tissue motion. However, 4DCT requires time-consuming post-processing
and the relationship between external breathing surrogates and lung tissue motion is not clearly defined. This
study compares algorithms using external respiratory motion surrogates as predictors of internal lung motion tracked in
real-time by electromagnetic transponders (Calypso® Medical Technologies) implanted in a canine model.
Simultaneous spirometry, bellows, and transponder positions measurements were acquired during free breathing and
variable ventilation respiratory patterns. Functions of phase, amplitude, tidal volume, and airflow were examined by
least-squares regression analysis to determine which algorithm provided the best estimate of internal motion. The cosine
phase model performed the worst of all models analyzed (R2 = 31.6%, free breathing, and R2 = 14.9%, variable
ventilation). All algorithms performed better during free breathing than during variable ventilation measurements. The
5D model of tidal volume and airflow predicted transponder location better than amplitude or either of the two phase-based
models analyzed, with correlation coefficients of 66.1% and 64.4% for free breathing and variable ventilation,
respectively. Real-time measurements from implanted transponders provide a direct method for determining lung tissue
location. Current phase-based or amplitude-based respiratory motion algorithms cannot as accurately predict lung tissue
motion in an irregularly breathing subject as a model including tidal volume and airflow. Further work is necessary to
quantify the long term stability of prediction capabilities using amplitude and phase based algorithms for multiple lung
tumor positions over time.
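The 5D-style model above, in which a position coordinate is expressed as a linear function of tidal volume and airflow, can be fit by ordinary least squares. A minimal sketch with a synthetic breathing trace (illustrative values only, not the canine measurements):

```python
import numpy as np

def fit_5d_model(tidal_volume, airflow, position):
    """Least-squares fit of position = x0 + a*v + b*f for one motion
    axis, with v = tidal volume and f = airflow (the tidal-volume/airflow
    model referred to above)."""
    A = np.column_stack([np.ones_like(tidal_volume), tidal_volume, airflow])
    coef, *_ = np.linalg.lstsq(A, position, rcond=None)
    return coef

# Synthetic breathing trace (illustrative, not the canine data)
t = np.linspace(0.0, 20.0, 2001)
v = 0.5 * (1.0 - np.cos(2.0 * np.pi * t / 4.0))   # tidal volume (L)
f = np.gradient(v, t)                              # airflow (L/s)
pos = 2.0 + 10.0 * v + 1.5 * f                     # one transponder axis (mm)
x0, a, b = fit_5d_model(v, f, pos)
```

The airflow term is what lets the model capture hysteresis: inhalation and exhalation passing through the same tidal volume have opposite-signed airflow and therefore different predicted positions.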
In many patients, respiratory motion causes motion artifacts in CT images, thereby inhibiting precise treatment planning and lowering the ability to target radiation to tumors. The 4D Phantom, which includes a 3D stage and a 1D stage, each capable of arbitrary motion and timing, was developed to serve as an end-to-end radiation therapy QA device that can be used throughout CT imaging, radiation therapy treatment planning, and radiation therapy delivery. The dynamic accuracy of the system was measured with a camera system. The positional error was found to be equally likely to occur in the positive and negative directions for each axis, and the stage was within 0.1 mm of the desired position 85% of the time. In an experiment designed to use the 4D Phantom's encoders to measure trial-to-trial precision of the system, the 4D Phantom reproduced the motion during variable bag ventilation of a transponder that had been bronchoscopically implanted in a canine lung. In this case, the encoder readout indicated that the stage was within 10 microns of the sent position 94% of the time and that the RMS error was 7 microns. Motion artifacts were clearly visible in 3D and respiratory-correlated (4D) CT scans of phantoms reproducing tissue motion. In 4D CT scans, apparent volume was found to be directly correlated with instantaneous velocity. The system is capable of reproducing individual patient-specific tissue trajectories with a high degree of accuracy and precision and will be useful for end-to-end radiation therapy QA.
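The two accuracy figures reported above, the RMS error and the fraction of samples within a tolerance, are simple to compute from logged desired and measured stage positions; a minimal sketch with made-up numbers:

```python
import numpy as np

def positioning_stats(desired_mm, measured_mm, tol_mm=0.1):
    """RMS positional error and fraction of samples within tolerance."""
    err = np.asarray(measured_mm) - np.asarray(desired_mm)
    rms = float(np.sqrt(np.mean(err ** 2)))
    within = float(np.mean(np.abs(err) <= tol_mm))
    return rms, within

# Made-up log of desired vs. measured stage positions (mm)
rms, within = positioning_stats([0, 0, 0, 0, 0], [0.05, -0.05, 0.0, 0.1, 0.2])
```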
The mobility of lung tumors during the respiratory cycle is a source of error in radiotherapy treatment planning.
Spatiotemporal CT data sets can be used for studying the motion of lung tumors and inner organs during the
breathing cycle.
We present methods for the analysis of respiratory motion using 4D CT data in high temporal resolution. An
optical flow based reconstruction method was used to generate artifact-reduced 4D CT data sets of lung cancer
patients. The reconstructed 4D CT data sets were segmented and the respiratory motion of tumors and inner
organs was analyzed.
A non-linear registration algorithm is used to calculate the velocity field between consecutive time frames of
the 4D data. The resulting velocity field is used to analyze trajectories of landmarks and surface points. By
this technique, the maximum displacement of any surface point is calculated, and regions with large respiratory
motion are marked. To describe tumor mobility, the motion of the lung tumor center in three orthogonal
directions is displayed. Estimated 3D appearance probabilities visualize the movement of the tumor during the
respiratory cycle in one static image. Furthermore, correlations between trajectories of the skin surface and the
trajectory of the tumor center are determined and skin regions are identified which are suitable for prediction of
the internal tumor motion.
The results of the motion analysis indicate that the described methods are suitable for gaining insight into the
spatiotemporal behavior of anatomical and pathological structures during the respiratory cycle.
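The trajectory and maximum-displacement analysis described above can be sketched as follows. This is a simplified NumPy illustration, not the authors' implementation: the frame-to-frame displacement fields are supplied as callables, and the constant-shift example is purely synthetic:

```python
import numpy as np

def track_points(points, displacement_fields):
    """Propagate landmark/surface points through a sequence of
    frame-to-frame displacement fields and report each point's maximum
    displacement from the reference frame."""
    traj = [points.copy()]
    p = points.copy()
    for d in displacement_fields:     # d maps (N, 3) positions -> (N, 3) shifts
        p = p + d(p)
        traj.append(p.copy())
    traj = np.stack(traj)                           # (T+1, N, 3)
    disp = np.linalg.norm(traj - traj[0], axis=2)   # (T+1, N)
    return traj, disp.max(axis=0)                   # per-point peak motion

# Toy motion: a uniform 2 mm cranio-caudal shift per frame, three frames
pts = np.zeros((4, 3))
fields = [lambda p: np.tile([0.0, 0.0, 2.0], (len(p), 1))] * 3
traj, peak = track_points(pts, fields)
```

Thresholding the per-point peak motion is one way the regions of large respiratory motion mentioned above could be marked.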
Medical imaging applications of rigid and non-rigid elastic deformable image registration are undergoing wide-scale
development. Our approach determines image deformation maps through a hierarchical process, from global to local
scales. Vemuri (2000) reported a registration method, based on level set evolution theory, that morphs an image along the
motion gradient until it deforms to the reference image. We have applied this level set motion method as a basis to
iteratively compute the incremental motion fields and then we approximated the field using a higher-level affine and
non-rigid motion model. In such a way, we combine sequentially the global affine motion, local affine motion and local
non-rigid motion. Our method is fully automated, computationally efficient, and is able to detect large deformations if
used together with multi-grid approaches, potentially yielding greater registration accuracy.
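For concreteness, one iteration of a level-set-motion style update, in which the moving image evolves along its own gradient with a speed proportional to the intensity difference to the reference, might look like the following. This is a hedged sketch of the Vemuri-style evolution, not the paper's implementation; the time step `dt` and the Gaussian test images are assumptions:

```python
import numpy as np

def level_set_motion_step(moving, reference, dt=0.25, eps=1e-6):
    """One evolution step of dI/dt = (I_ref - I) * |grad I|: the moving
    image is morphed along its gradient toward the reference."""
    gy, gx = np.gradient(moving)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    return moving + dt * (reference - moving) * grad_mag

# Toy check: evolving a shifted Gaussian blob toward the reference
# reduces the squared intensity mismatch.
yy, xx = np.mgrid[0:32, 0:32]
blob = lambda cy, cx: np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)
moving, reference = blob(14, 14), blob(17, 17)
evolved = level_set_motion_step(moving, reference)
```

In the hierarchical scheme described above, the incremental fields produced by such iterations would then be approximated by the global affine, local affine, and local non-rigid motion models in sequence.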
Respiratory motion is a significant source of error in conformal radiation therapy for the thorax and upper abdomen. Four-dimensional computed tomography (4D CT) has been proposed to reduce the uncertainty caused by internal respiratory organ motion. A 4D CT dataset is retrospectively reconstructed at various stages of a respiratory cycle. An important tool for 4D treatment planning is deformable image registration. An inverse consistent image registration is used to model lung motion from one respiratory stage to another during a breathing cycle. This diffeomorphic registration jointly estimates the forward and reverse transformations, providing more accurate correspondence between the two images. Registration results and modeled motions in the lung are shown for three example respiratory stages. The results demonstrate that the consistent image registration satisfactorily models the large motions in the lung, providing a useful tool for 4D planning and delivery.
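The joint forward/reverse estimation can be checked with an inverse-consistency error: composing the forward and reverse transforms should return every point to itself. A 1-D NumPy sketch of that check (an illustrative metric only, not the registration algorithm itself):

```python
import numpy as np

def inverse_consistency_error(x, fwd_disp, rev_disp):
    """Mean |phi(psi(x)) - x| for 1-D transforms given as displacement
    samples on the grid x; the composition uses linear interpolation."""
    psi = x + rev_disp                        # reverse-mapped positions
    fwd_at_psi = np.interp(psi, x, fwd_disp)  # forward displacement there
    return float(np.mean(np.abs(psi + fwd_at_psi - x)))

x = np.linspace(0.0, 1.0, 101)
consistent = inverse_consistency_error(x, np.full(101, 0.1), np.full(101, -0.1))
inconsistent = inverse_consistency_error(x, np.full(101, 0.1), np.full(101, -0.05))
```

A consistent registration drives this error toward zero, which is what "jointly estimates the forward and reverse transformations" buys over estimating each direction independently.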
Respiratory motion is a significant source of error in radiotherapy treatment planning. 4D-CT data sets can be useful for measuring the impact of organ motion caused by breathing. However, modern CT scanners can only scan a limited region of the body simultaneously, so patients have to be scanned in segments consisting of multiple slices. For studying free breathing motion, multislice CT scans can be collected simultaneously with digital spirometry over several breathing cycles. The 4D data set is assembled by sorting the free breathing multislice CT scans according to the couch position and the tidal volume. However, artifacts can occur because data segments are not available at exactly the same tidal volume for all couch positions. We present an optical flow based method for the reconstruction of 4D-CT data sets from multislice CT scans collected simultaneously with digital spirometry. The optical flow between the scans is estimated by a non-linear registration method. The calculated velocity field is used to reconstruct a 4D-CT data set by interpolating data at user-defined tidal volumes. By this technique, artifacts can be reduced significantly. The reconstructed 4D-CT data sets are used for studying inner organ motion during the respiratory cycle. The procedures described were applied to reconstruct 4D-CT data sets for four tumour patients who were scanned during free breathing. The reconstructed 4D data sets were used to quantify organ displacements and to visualize the abdominothoracic organ motion.
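The interpolation step can be illustrated in one dimension: scale the displacement field by the fractional position of the target tidal volume between two acquired volumes, then warp the first image accordingly. This toy sketch assumes the field is given in backward-warp convention and is not the authors' code:

```python
import numpy as np

def image_at_tidal_volume(img_v0, disp_v0_to_v1, v0, v1, v_target):
    """Reconstruct a 1-D image at an intermediate tidal volume by
    linearly scaling the displacement field and backward-warping img_v0,
    i.e. img_vt(x) = img_v0(x - frac * disp(x))."""
    frac = (v_target - v0) / (v1 - v0)
    x = np.arange(img_v0.size, dtype=float)
    return np.interp(x - frac * disp_v0_to_v1, x, img_v0)

# Toy example: a Gaussian "organ" at voxel 20 that moves 10 voxels
# between tidal volumes 0.0 L and 1.0 L.
x = np.arange(64, dtype=float)
img_v0 = np.exp(-(x - 20.0) ** 2 / 8.0)
disp = np.full(64, 10.0)
img_mid = image_at_tidal_volume(img_v0, disp, 0.0, 1.0, 0.5)
```

Synthesizing slices at a common user-defined tidal volume in this way is what removes the sorting artifacts caused by missing data segments.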
Issam El Naqa, Daniel Low, Gary Christensen, Parag Parikh, Joo Hyun Song, Michelle Nystrom, Wei Lu, Joseph Deasy, James Hubenschmidt, Sasha Wahab, Sasa Mutic, Anurag Singh, Jeffrey Bradley
We are developing 4D-CT to provide breathing motion information (trajectories) for radiation therapy treatment planning of lung cancer. Potential applications include optimization of intensity-modulated beams in the presence of breathing motion and intra-fraction target volume margin determination for conformal therapy. The images are acquired using a multi-slice CT scanner while the patient undergoes simultaneous quantitative spirometry. At each couch position, the CT scanner is operated in ciné mode and acquires up to 15 scans of 12 slices each. Each CT scan is associated with the measured tidal volume for retrospective reconstruction of 3D CT scans at arbitrary tidal volumes. The specific tasks of this project involve the development of automated registration of internal organ motion (trajectories) during breathing. A modified least-squares based optical flow algorithm tracks specific features of interest by modifying the eigenvalues of the gradient matrix (gradient structure tensor). Good correlations between the measured motion and spirometry-based tidal volume are observed, and evidence of internal hysteresis is also detected.
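The least-squares tracking criterion alluded to above relies on the gradient structure tensor being well conditioned: only windows whose smaller eigenvalue is large can be tracked reliably in both directions. A minimal sketch of that check (an illustrative Shi-Tomasi-style computation, not the authors' modified algorithm):

```python
import numpy as np

def min_structure_eigenvalue(patch):
    """Smaller eigenvalue of the gradient structure tensor of a patch.
    A least-squares optical flow tracker can follow a window reliably
    only when this value is large (both eigenvalues well above zero)."""
    gy, gx = np.gradient(patch.astype(float))
    J = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    return float(np.linalg.eigvalsh(J)[0])

flat = np.ones((9, 9))                              # no texture: untrackable
edge = np.tile(np.linspace(0.0, 1.0, 9), (9, 1))    # 1-D ramp: aperture problem
corner = np.zeros((9, 9)); corner[5:, 5:] = 1.0     # 2-D structure: trackable
```

A flat patch and a 1-D edge both have a (near-)zero smaller eigenvalue, while a patch with gradient energy in two directions does not, which is why such features are selected for tracking.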