KEYWORDS: Transformers, Heart, Education and training, Computed tomography, Atherosclerosis, Angiography, Network architectures, Deep learning, Medicine, Medical research
Background: We aimed to compare the performance of two novel deep learning networks—a convolutional long short-term memory network and a transformer network—for artificial intelligence-based quantification of plaque volume and stenosis severity from coronary computed tomography angiography (CCTA). Methods: This was an international multicenter study of patients undergoing CCTA at 11 sites. The deep learning (DL) convolutional neural networks were trained to segment coronary plaque in 921 patients (5,045 lesions). The training dataset was further split temporally into training (80%) and internal validation (20%) datasets. The primary DL architecture was a hierarchical convolutional long short-term memory (ConvLSTM) network. This was compared against a TransUNet network, which combines the abilities of the Vision Transformer with U-Net, enabling the capture of in-depth localization information while modeling long-range dependencies. Following training and internal validation, both DL networks were applied to an external validation cohort of 162 patients (1,468 lesions) from the SCOT-HEART trial. Results: In the external validation cohort, agreement between DL and expert reader measurements was stronger with the ConvLSTM network than with TransUNet, for both per-lesion total plaque volume (intraclass correlation coefficient [ICC] 0.953 vs 0.830) and percent diameter stenosis (ICC 0.882 vs 0.735; both p<0.001). The ConvLSTM network also showed higher per-cross-section overlap with expert reader segmentations (as measured by the Dice coefficient) than TransUNet, for vessel wall (0.947 vs 0.946), lumen (0.93 vs 0.92), and calcified plaque (0.87 vs 0.86; p<0.0001 for all), with similar execution times. Conclusions: In a direct comparison with external validation, the ConvLSTM network yielded higher agreement with expert readers for quantification of total plaque volume and stenosis severity than TransUNet.
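The per-cross-section overlap metric reported above is the Dice coefficient. As a minimal illustrative sketch (not the study's implementation), assuming binary segmentation masks flattened to 1-D arrays of 0/1 voxel labels, it can be computed as:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks.

    pred, truth: equal-length iterables of 0/1 voxel labels.
    Returns 2*|A∩B| / (|A| + |B|); defined as 1.0 when both masks are empty.
    """
    pred = list(pred)
    truth = list(truth)
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0
    return 2.0 * intersection / total

# Example: 3 of the 4 predicted voxels overlap the 4 true voxels.
print(dice_coefficient([1, 1, 1, 1, 0], [1, 1, 1, 0, 1]))  # 0.75
```

In practice, a score of this form would be averaged over all cross-sections and compared between the two networks with a paired test, which is consistent with the per-class Dice values reported above.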
Purpose: Quantitative lung measures derived from computed tomography (CT) have been demonstrated to improve prognostication in coronavirus disease 2019 (COVID-19) patients but are not part of clinical routine because the required manual segmentation of lung lesions is prohibitively time consuming. We aim to automatically segment ground-glass opacities and high opacities (comprising consolidation and pleural effusion). Approach: We propose a new fully automated deep-learning framework for fast multi-class segmentation of lung lesions in COVID-19 pneumonia from both contrast and non-contrast CT images using convolutional long short-term memory (ConvLSTM) networks. Utilizing the expert annotations, model training was performed using five-fold cross-validation to segment COVID-19 lesions. The performance of the method was evaluated on CT datasets from 197 patients with a positive reverse transcription polymerase chain reaction test result for SARS-CoV-2, 68 unseen test cases, and 695 independent controls. Results: Strong agreement between expert manual and automatic segmentation was obtained for lung lesions, with a Dice score of 0.89 ± 0.07; excellent correlations of 0.93 and 0.98 were obtained for ground-glass opacity (GGO) and high opacity volumes, respectively. In the external testing set of 68 patients, we observed a Dice score of 0.89 ± 0.06 as well as excellent correlations of 0.99 and 0.98 for GGO and high opacity volumes, respectively. Computations for a CT scan comprising 120 slices were performed in under 3 s on a computer equipped with an NVIDIA TITAN RTX GPU. Diagnostically, the automated quantification of the percentage lung burden discriminated COVID-19 patients from controls with an area under the receiver operating characteristic curve of 0.96 (0.95–0.98). Conclusions: Our method allows rapid, fully automated quantitative measurement of pneumonia burden from CT, which can be used to assess the severity of COVID-19 pneumonia on chest CT.
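The GGO and high opacity volumes correlated above are derived from the voxel counts of each predicted class scaled by the scanner's voxel spacing. A minimal sketch of that conversion (illustrative only; the class-label encoding 0/1/2 is an assumption, not taken from the paper):

```python
from collections import Counter


def lesion_volumes_ml(labels, voxel_volume_mm3):
    """Per-class lesion volumes (in mL) from a flat multi-class label map.

    labels: iterable of integer class labels, one per voxel
            (assumed encoding: 0 = background, 1 = ground-glass opacity,
             2 = high opacity, i.e. consolidation/pleural effusion).
    voxel_volume_mm3: volume of a single voxel, e.g.
            spacing_x * spacing_y * slice_thickness.
    """
    counts = Counter(labels)
    to_ml = voxel_volume_mm3 / 1000.0  # 1 mL = 1000 mm^3
    return {
        "ggo_ml": counts[1] * to_ml,
        "high_opacity_ml": counts[2] * to_ml,
    }


# 3 GGO voxels and 2 high opacity voxels at 500 mm^3 per voxel:
vols = lesion_volumes_ml([0, 1, 1, 2, 1, 2], 500.0)
print(vols)  # {'ggo_ml': 1.5, 'high_opacity_ml': 1.0}
```

The percentage lung burden used for the ROC analysis would then be the summed lesion volume divided by the total segmented lung volume, times 100.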
Background: Coronary computed tomography angiography (CCTA) allows non-invasive assessment of luminal stenosis and coronary atherosclerotic plaque. We aimed to develop and externally validate an artificial intelligence-based deep learning (DL) network for CCTA-based measures of plaque volume and stenosis severity. Methods: This was an international multicenter study of 1,183 patients undergoing CCTA at 11 sites. A novel DL convolutional neural network was trained to segment coronary plaque in 921 patients (5,045 lesions). The DL architecture consisted of a novel hierarchical convolutional long short-term memory (ConvLSTM) network. The training set was further split temporally into training (80%) and internal validation (20%) datasets. Each coronary lesion was assessed in a 3D slab about the vessel centreline. Following training and internal validation, the model was applied to an independent test set of 262 patients (1,469 lesions), which included an external validation cohort of 162 patients. Results: In the test set, there was excellent agreement between DL and clinician expert reader measurements of total plaque volume (intraclass correlation coefficient [ICC] 0.964) and percent diameter stenosis (ICC 0.879; both p<0.001, see tables and figure). The average per-patient DL plaque analysis time was 5.7 seconds, versus 25-30 minutes taken by experts. There was significantly higher overlap, measured by the Dice coefficient (DC), for ConvLSTM compared to UNet (DC for vessel 0.94 vs 0.83, p<0.0001; DC for lumen and plaque 0.90 vs 0.83, p<0.0001) or DeepLabv3 (DC for vessel both 0.94; DC for lumen and plaque 0.89 vs 0.84, p<0.0001). Conclusions: A novel, externally validated artificial intelligence-based network provides rapid measurements of plaque volume and stenosis severity from CCTA which agree closely with clinician expert readers.
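Percent diameter stenosis, one of the two endpoints above, is conventionally defined as the luminal narrowing at the lesion relative to a reference (normal) diameter. A minimal sketch of that standard definition (illustrative; not the study's code):

```python
def percent_diameter_stenosis(minimal_diameter_mm, reference_diameter_mm):
    """Percent diameter stenosis from quantitative lumen measurements.

    minimal_diameter_mm: smallest lumen diameter within the lesion.
    reference_diameter_mm: diameter of the adjacent normal vessel segment.
    Returns the narrowing as a percentage of the reference diameter.
    """
    if reference_diameter_mm <= 0:
        raise ValueError("reference diameter must be positive")
    return 100.0 * (1.0 - minimal_diameter_mm / reference_diameter_mm)


# A lumen narrowed from a 3.0 mm reference to 1.5 mm is a 50% stenosis.
print(percent_diameter_stenosis(1.5, 3.0))  # 50.0
```

In the DL pipeline, both diameters would be derived from the segmented lumen cross-sections along the centreline rather than measured manually.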
We propose a fast and robust multi-class deep learning framework for segmenting COVID-19 lesions—ground-glass opacities and high opacities (including consolidations and pleural effusion)—from non-contrast CT scans, using a convolutional long short-term memory (ConvLSTM) network for self-attention. Our method allows rapid quantification of pneumonia burden from CT with performance equivalent to expert readers. The mean Dice score across 5 folds was 0.8776, with a standard deviation of 0.0095. The low standard deviation between results from each fold indicates the models were trained equally well regardless of the training fold. The cumulative per-patient mean Dice score (0.8775 ± 0.075) for N=167 patients, after concatenation, is consistent with the results from each of the 5 folds. We obtained excellent Pearson correlations (expert vs. automatic) of 0.9396 (p<0.0001) and 0.9843 (p<0.0001) for ground-glass opacity and high opacity volumes, respectively. Our model outperforms Unet2d (p<0.05) and Unet3d (p<0.05) in segmenting high opacities, has comparable performance with Unet2d in segmenting ground-glass opacities, and significantly outperforms Unet3d (p<0.0001) in segmenting ground-glass opacities. Our model runs faster on CPU and GPU than Unet2d and Unet3d. For the same number of input slices, our model consumed 0.83x and 0.26x the memory consumed by Unet2d and Unet3d, respectively.
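The expert-vs-automatic agreement above is the sample Pearson correlation between paired volume measurements. A minimal self-contained sketch of that statistic (illustrative only):

```python
import math


def pearson_r(x, y):
    """Sample Pearson correlation between paired measurements,
    e.g. expert vs. automatic lesion volumes per patient."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


# Perfectly linear paired volumes give r ≈ 1.0 (floating-point rounding aside).
print(round(pearson_r([10.0, 20.0, 30.0], [11.0, 21.0, 31.0]), 6))  # 1.0
```

Note that a high r captures linear association but not agreement in absolute values; a systematic bias between expert and automatic volumes would still need a paired comparison or Bland-Altman analysis.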
Background: Coronary computed tomography angiography (CTA) allows quantification of stenosis. However, such quantitative analysis is not part of clinical routine. We evaluated the feasibility of utilizing deep learning for quantifying coronary artery disease from CTA. Methods: A total of 716 diseased segments in 156 patients (66 ± 10 years) who underwent CTA were analyzed. Minimal luminal area (MLA), percent diameter stenosis (DS), and percent contrast density difference (CDD) were measured using semi-automated software (Autoplaque) by an expert reader. Using the expert annotations, deep learning was performed with convolutional neural networks using 10-fold cross-validation to segment CTA lumen and calcified plaque. MLA, DS, and CDD computed using the deep-learning-based approach were compared to expert reader measurements. Results: There was excellent correlation between the expert reader and deep learning for all quantitative measures (r=0.984 for MLA; r=0.957 for DS; and r=0.975 for CDD; p<0.001 for all). Expert reader and deep learning measurements did not differ significantly for MLA (median 4.3 mm2 for both, p=0.68) or CDD (11.6 vs 11.1%, p=0.30), and differed significantly for DS (26.0 vs 26.6%, p<0.05); however, the ranges of all the quantitative measures were within the inter-observer variability between 2 expert readers. Conclusions: Our deep learning-based method allows accurate quantitative measurement of coronary artery disease segments from CTA and may enhance clinical reporting.
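The 10-fold cross-validation above partitions the cohort so every case is evaluated by a model that never saw it in training. A minimal sketch of a contiguous (unshuffled) k-fold split of patient indices, purely illustrative of the scheme:

```python
def k_fold_indices(n_samples, k):
    """Split indices 0..n_samples-1 into k contiguous, near-equal folds.
    Each fold serves once as the held-out validation set; the remaining
    indices form the corresponding training set."""
    indices = list(range(n_samples))
    # Distribute the remainder so fold sizes differ by at most one.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    splits, start = [], 0
    for size in fold_sizes:
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        splits.append((train, val))
        start += size
    return splits


# 10-fold split of 156 patients: each fold holds out 15 or 16 patients.
splits = k_fold_indices(156, 10)
print(len(splits), len(splits[0][1]))  # 10 16
```

In practice the split would be done at the patient level (as assumed here) rather than the segment level, so that segments from one patient never appear in both training and validation folds.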