Dynamic chest radiography (DCR) enables the evaluation of lung function based on changes in lung density, lung area, and diaphragm level due to respiration. The need for lung segmentation techniques for sequential chest images is growing. Thus, this study aimed to develop a deep learning–based lung segmentation technique for DCR across all age groups using virtual patient images. DCR images of 53 patients (M:F = 34:19, age: 1–88 years, median age: 63 years) were used. Owing to the difficulty of collecting a large dataset of pediatric DCR images, the 4D extended cardiac-torso (XCAT) phantom was used to augment the pediatric data. A total of ten pediatric XCAT phantoms (five male and five female virtual patients, age: 0–15 years) were generated and projected. Two deep-learning models, U-net and DeepLabv3 with MobileNetv3 as the backbone, were implemented. They were trained to estimate lung segmentation masks using DCR image datasets consisting of either real patients only or a mixture of real and virtual patients. The Dice similarity coefficient (DSC) and intersection over union (IoU) were used as evaluation metrics. When trained only on real patients, DeepLabv3 (DSC/IoU: 0.902/0.822) exhibited higher values than U-net (DSC/IoU: 0.791/0.673) for both metrics. When trained on a dataset mixing real and virtual patients, both metrics improved in both models (DSC/IoU: 0.906/0.828 and 0.795/0.677 for DeepLabv3 and U-net, respectively). These results indicate that the developed model, that is, the combination of DeepLabv3 and XCAT-based augmentation, is effective for the lung segmentation of DCR images across respiratory phases and all age groups.
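The abstract reports its results in terms of DSC and IoU. A minimal sketch of these two metrics for binary lung masks is given below, assuming masks stored as NumPy arrays; the function names, array shapes, and smoothing term are illustrative and not taken from the study.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

def iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |A ∩ B| / |A ∪ B| for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (intersection + eps) / (union + eps)

if __name__ == "__main__":
    # Toy masks purely for demonstration; real evaluation would compare
    # model output against the annotated ground-truth lung mask per frame.
    pred_mask = np.zeros((256, 256), dtype=np.uint8)
    gt_mask = np.zeros((256, 256), dtype=np.uint8)
    pred_mask[50:200, 40:120] = 1
    gt_mask[60:210, 40:120] = 1
    print(f"DSC: {dice_coefficient(pred_mask, gt_mask):.3f}")
    print(f"IoU: {iou(pred_mask, gt_mask):.3f}")
```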
We aimed to investigate the feasibility of predicting pleural invasion or adhesion of lung cancers with dynamic chest radiography (DCR), using a four-dimensional (4D) extended cardiac-torso (XCAT) computational phantom. An XCAT phantom of an adult man (50th percentile in height and weight) with forced breathing and a normal heart rate was generated. To simulate lung cancers with and without pleural invasion, 30-mm-diameter tumor spheres were inserted into the right lower lung lobe of the virtual patient. The virtual patient was then imaged with an X-ray simulator in the posteroanterior and oblique directions, and bone suppression (BS) images were created. The measurement points (tumor, rib, and diaphragm) were automatically tracked on the projection images by template matching. We calculated five quantitative parameters related to the movement distance and direction of the targeted tumor and evaluated the ability of these DCR parameters to distinguish between patients with and without pleural invasion. Precise tracking of the targeted tumor was achieved on the BS images without interruption by the rib shadows. The movement distance was an effective parameter for evaluating tumor invasion; however, the other parameters yielded similar results for lung cancers with and without pleural invasion because of the lack of three-dimensional information in the projection images. The oblique views were useful for evaluating the space between the chest wall and the moving tumor. DCR could help distinguish between patients with and without pleural invasion based on the two-dimensional movement distance in both oblique and posteroanterior projection views.
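The tracking step described above relies on template matching across the projected frames. A minimal sketch of such a tracker is shown below using OpenCV's normalized cross-correlation; the frame format (grayscale uint8 arrays), the choice of matching method, and the movement-distance summary are assumptions for illustration, not the authors' implementation.

```python
import cv2
import numpy as np

def track_point(frames: list[np.ndarray], template: np.ndarray) -> list[tuple[int, int]]:
    """Return the top-left (x, y) of the best template match in every frame.
    frames: list of grayscale uint8 images; template: a small patch around
    the measurement point (e.g., tumor, rib, or diaphragm) cut from frame 0."""
    positions = []
    for frame in frames:
        result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(result)  # location of the highest correlation
        positions.append(max_loc)
    return positions

def movement_distance(positions: list[tuple[int, int]]) -> float:
    """Total 2D path length of the tracked point over the sequence, in pixels."""
    pts = np.asarray(positions, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))
```

In practice the pixel path length would be converted to millimeters using the detector pixel spacing before comparing tumors with and without pleural invasion.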
The purpose of this study was to develop a deep learning–based lung segmentation method for dynamic chest radiography and to assess its clinical utility for pulmonary function assessment. Maximum inhale and exhale images were selected from the dynamic chest radiographs of 214 cases, each comprising 150 images acquired during respiration. In total, 534 annotated images (2 to 4 images per case) were prepared for this study. Three hundred images were fed into a fully convolutional neural network (FCNN) architecture to train a deep learning model for lung segmentation, and 234 images were used for testing. To reduce misrecognition of the lung, post-processing methods based on time-series information were applied to the resulting images. The change rate of the lung area was calculated across all frames, and its clinical utility was assessed in patients with pulmonary diseases. The Sørensen-Dice coefficients between the segmentation results and the gold standard were 0.94 in the inhale phase and 0.95 in the exhale phase, respectively. There were some false recognitions (214/234), but 163 of them were eliminated by our post-processing. Measurement of the lung area and its respiratory change was useful for the evaluation of lung conditions; prolonged expiration in obstructive pulmonary diseases could be detected as a reduced change rate of the lung area in the exhale phase. The semantic segmentation deep learning approach allows for sequential lung segmentation of dynamic chest radiographs with high accuracy (94%) and is useful for the evaluation of pulmonary function.
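The clinical measure described above is the change rate of the segmented lung area over the image sequence. A minimal sketch of that calculation is given below, assuming one binary lung mask per frame and a fixed frame rate; the 15-fps default and the pixel-area scaling are placeholders rather than values reported in the study.

```python
import numpy as np

def lung_area_change_rate(masks: np.ndarray, fps: float = 15.0,
                          pixel_area_mm2: float = 1.0) -> np.ndarray:
    """masks: (T, H, W) binary array of per-frame lung segmentations.
    Returns the frame-to-frame change rate of the lung area in mm^2 per second:
    positive during inhalation, negative during exhalation."""
    areas = masks.reshape(masks.shape[0], -1).sum(axis=1) * pixel_area_mm2
    return np.diff(areas) * fps
```

Under this formulation, prolonged expiration would appear as a smaller-magnitude (less negative) change rate sustained over the exhale phase.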