KEYWORDS: Breast cancer, Image classification, Binary data, Statistical analysis, Tumors, Magnetic resonance imaging, Visualization, Education and training, Image segmentation, Breast
Purpose: Current clinical assessment qualitatively describes background parenchymal enhancement (BPE) as minimal, mild, moderate, or marked based on the visually perceived volume and intensity of enhancement in normal fibroglandular breast tissue in dynamic contrast-enhanced (DCE)-MRI. Tumor enhancement may be included within the visual assessment of BPE, thus inflating the BPE estimate due to angiogenesis within the tumor. Using a dataset of 426 MRIs, we developed an automated method to segment breasts, electronically remove lesions, and calculate scores to estimate BPE levels.
Approach: A U-Net was trained for breast segmentation from DCE-MRI maximum intensity projection (MIP) images. Fuzzy c-means clustering was used to segment lesions; the lesion volume was removed prior to creating projections. U-Net outputs were applied to create projection images of both breasts, affected breasts, and unaffected breasts before and after lesion removal. BPE scores were calculated from various projection images, including MIPs and average intensity projections of first or second postcontrast subtraction MRIs, to evaluate the effect of varying image parameters on automatic BPE assessment. Receiver operating characteristic (ROC) analysis was performed to determine the predictive value of the computed scores in BPE level classification tasks relative to radiologist ratings.
Results: Statistically significant trends were found between radiologist BPE ratings and calculated BPE scores for all breast regions (Kendall correlation, p < 0.001). Scores from all breast regions performed significantly better than guessing (p < 0.025 from the z-test). Results failed to show a statistically significant difference in performance with and without lesion removal. BPE scores of the affected breast in the second postcontrast subtraction MIP after lesion removal performed significantly better than random guessing across the various viewing projections and DCE time points.
Conclusions: Results demonstrate the potential for automatic BPE scoring to serve as a quantitative value for objective BPE level classification from breast DCE-MRI without the influence of lesion enhancement.
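To make the projection-based scoring above concrete, the following is a minimal NumPy sketch: it suppresses the lesion volume before projecting, collapses a postcontrast subtraction volume into a MIP or average intensity projection, and computes a toy BPE score as the mean enhancement inside a breast mask. The array layout, the zero fill value for removed lesion voxels, and the exact score definition are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def remove_lesion(volume: np.ndarray, lesion_mask: np.ndarray) -> np.ndarray:
    """Suppress lesion voxels before projecting, so projections exclude tumor enhancement."""
    cleaned = volume.copy()
    cleaned[lesion_mask.astype(bool)] = 0  # neutral fill value; an assumption
    return cleaned

def max_intensity_projection(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Collapse a 3D postcontrast subtraction volume into a 2D MIP."""
    return volume.max(axis=axis)

def average_intensity_projection(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Average intensity projection of the same volume."""
    return volume.mean(axis=axis)

def bpe_score(projection: np.ndarray, breast_mask: np.ndarray) -> float:
    """Toy BPE score: mean enhancement within the projected breast mask."""
    return float(projection[breast_mask.astype(bool)].mean())

# Example: score the affected breast after lesion removal.
volume = np.random.rand(64, 256, 256)       # placeholder subtraction volume
lesion = np.zeros_like(volume, dtype=bool)  # placeholder 3D lesion mask
mip = max_intensity_projection(remove_lesion(volume, lesion))
breast = np.ones(mip.shape, dtype=bool)     # placeholder U-Net breast mask
print(f"BPE score: {bpe_score(mip, breast):.3f}")
```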
KEYWORDS: Image segmentation, Breast, 3D image processing, 3D imaging standards, Magnetic resonance imaging, Education and training, Cross validation, Artificial intelligence, 3D image enhancement, 3D modeling
Purpose: Given the dependence of radiomic-based computer-aided diagnosis artificial intelligence on accurate lesion segmentation, we assessed the performances of 2D and 3D U-Nets in breast lesion segmentation on dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) relative to fuzzy c-means (FCM) and radiologist segmentations.
Approach: Using 994 unique breast lesions imaged with DCE-MRI, three segmentation algorithms (FCM clustering and 2D and 3D U-Net convolutional neural networks) were investigated. Center slice segmentations produced by FCM, 2D U-Net, and 3D U-Net were evaluated using radiologist segmentations as truth, and volumetric segmentations produced by stacked 2D U-Net slices and by 3D U-Net were compared using FCM as a surrogate reference standard. Fivefold cross-validation by lesion was conducted on the U-Nets; Dice similarity coefficient (DSC) and Hausdorff distance (HD) served as performance metrics. Segmentation performances were compared across different input image and lesion types.
Results: 2D U-Net outperformed 3D U-Net for center slice (DSC and HD, p < 0.001) and volumetric segmentations (DSC and HD, p < 0.001). 2D U-Net outperformed FCM in center slice segmentation (DSC, p < 0.001). Second postcontrast subtraction images yielded greater performance than first postcontrast subtraction images with both the 2D and 3D U-Nets (DSC, p < 0.05). Additionally, mass segmentation outperformed nonmass segmentation from first and second postcontrast subtraction images using the 2D and 3D U-Nets (DSC and HD, p < 0.001).
Conclusions: Results suggest that the 2D U-Net is promising in segmenting mass and nonmass enhancing breast lesions from first and second postcontrast subtraction MRIs and thus could be an effective alternative to FCM or 3D U-Net.
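For reference, the two performance metrics reported above can be computed from binary masks as in the sketch below. Measuring the Hausdorff distance in voxel indices (rather than physical spacing) and using SciPy's directed Hausdorff routine are illustrative choices, not necessarily the study's exact evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def hausdorff_distance(pred: np.ndarray, truth: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the masks' voxel coordinates (in voxels)."""
    p, t = np.argwhere(pred.astype(bool)), np.argwhere(truth.astype(bool))
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])

# Tiny example on two overlapping 2D masks.
a = np.zeros((32, 32), bool); a[8:20, 8:20] = True
b = np.zeros((32, 32), bool); b[10:22, 10:22] = True
print(dice_coefficient(a, b), hausdorff_distance(a, b))
```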
Computer-aided diagnosis based on features extracted from medical images relies heavily on accurate lesion segmentation before feature extraction. Using 994 unique breast lesions imaged with dynamic contrast-enhanced (DCE) MRI, several segmentation algorithms were investigated. The first method is fuzzy c-means (FCM), a well-established unsupervised clustering algorithm used on breast MRIs. The second and third methods are based on the convolutional neural network U-Net, a widely used deep learning method for image segmentation, applied to two- and three-dimensional MRI data, respectively. The purpose of this study was twofold: (1) to assess the performances of 2D (slice-by-slice) and 3D U-Nets in breast lesion segmentation on DCE-MRI when trained with FCM segmentations, and (2) to compare their performance to that of FCM. Center slice segmentations produced by FCM, 2D U-Net, and 3D U-Net were evaluated using radiologist segmentations as truth, and volumetric segmentations produced by 2D U-Net (slice-by-slice) and 3D U-Net were compared using FCM as a surrogate truth. Fivefold cross-validation was conducted on the U-Nets, and the Dice similarity coefficient (DSC) and Hausdorff distance (HD) were used as performance metrics. Although 3D U-Net performed well, 2D U-Net outperformed 3D U-Net, both for center slice (DSC: p = 4.13 × 10⁻⁹; HD: p = 1.40 × 10⁻²) and volume segmentations (DSC: p = 2.72 × 10⁻⁸³; HD: p = 2.28 × 10⁻¹⁰). Additionally, 2D U-Net outperformed FCM in center slice segmentation in terms of DSC (p = 1.09 × 10⁻⁷). The results suggest that 2D U-Net is promising in segmenting breast lesions and could be an effective alternative to FCM.
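Since FCM serves as both a baseline and a surrogate truth above, a minimal NumPy sketch of fuzzy c-means on voxel intensities may help; the two-cluster setup, fuzzifier m = 2, and the rule assigning the brightest cluster to the lesion are illustrative assumptions rather than the published configuration.

```python
import numpy as np

def fuzzy_cmeans_1d(x: np.ndarray, c: int = 2, m: float = 2.0,
                    n_iter: int = 100, tol: float = 1e-5, seed: int = 0):
    """Cluster 1D intensities x (shape (N,)) into c fuzzy clusters.

    Returns (centers, memberships), where memberships has shape (c, N).
    """
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                        # memberships sum to 1 per voxel
    for _ in range(n_iter):
        um = u ** m
        centers = um @ x / um.sum(axis=1)     # fuzzy-weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u_new = d ** (-2.0 / (m - 1))         # standard FCM membership update
        u_new /= u_new.sum(axis=0)
        converged = np.abs(u_new - u).max() < tol
        u = u_new
        if converged:
            break
    return centers, u

def segment_lesion(roi: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binary mask of voxels whose membership in the brightest cluster exceeds threshold."""
    centers, u = fuzzy_cmeans_1d(roi.ravel().astype(float))
    lesion_cluster = int(np.argmax(centers))  # assume brightest cluster is the lesion
    return (u[lesion_cluster] > threshold).reshape(roi.shape)

roi = np.random.rand(16, 16, 16)  # placeholder postcontrast ROI volume
mask = segment_lesion(roi)
print(mask.shape, int(mask.sum()))
```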
During radiologists’ visual assessment of background parenchymal enhancement (BPE) on dynamic contrast-enhanced (DCE) MR images, the presence of a tumor may erroneously inflate the BPE estimate due to angiogenesis within the tumor. With a dataset of 426 MRIs, we present an automated method to segment breasts, electronically remove the influence of lesion presence, and calculate scores to estimate BPE levels. A U-Net was trained for breast segmentation from maximum intensity projection (MIP) images. Next, fuzzy c-means (FCM) clustering was used to segment the lesions from the breast DCE-MRIs, and the lesion volume was removed to create MIP images without the influence of the lesion. U-Net outputs were applied to create MIP images of both breasts, affected breasts, and unaffected breasts before and after lesion removal. On an independent test set, a statistically significant trend was found between the radiologist BPE ratings and the calculated BPE scores for all breast regions (Kendall correlation, p < 0.001). Receiver operating characteristic (ROC) analysis was performed to determine the predictive value of the computed scores from each breast region in the binary tasks of classifying minimal vs. marked and low vs. high BPE relative to a radiologist rating. Scores from all breast regions performed significantly better than guessing (p < 0.025 from the z-test), with BPE scores of the affected breast after lesion removal performing best (AUC = 0.87). Results demonstrate the potential for automatic BPE prediction from breast DCE-MRI without the influence of lesion enhancement.
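The two statistical analyses above map onto standard SciPy and scikit-learn calls, as in the short sketch below. The ratings and scores are made-up values purely to show the mechanics, and the mapping of the four-level rating to low/high labels is an assumption.

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.metrics import roc_auc_score

# Ordinal radiologist ratings (0=minimal, 1=mild, 2=moderate, 3=marked)
# and corresponding computed BPE scores -- illustrative values only.
ratings = np.array([0, 1, 1, 2, 3, 2, 0, 3])
scores = np.array([0.11, 0.19, 0.22, 0.41, 0.78, 0.52, 0.09, 0.83])

# Trend between the ordinal ratings and the continuous scores.
tau, p_value = kendalltau(ratings, scores)
print(f"Kendall tau = {tau:.2f}, p = {p_value:.3g}")

# Low vs. high BPE classification, scored with the area under the ROC curve.
labels = (ratings >= 2).astype(int)  # assumed low/high split
print(f"AUC = {roc_auc_score(labels, scores):.2f}")
```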
We investigated the additive role of breast parenchyma stroma in the computer-aided diagnosis (CADx) of tumors on full-field digital mammograms (FFDMs) by combining images of the tumor and contralateral normal parenchyma information via deep learning. The study included 182 breast lesions, of which 106 were malignant and 76 were benign. All FFDM images were acquired using a GE 2000D Senographe system and retrospectively collected under an Institutional Review Board (IRB) approved, Health Insurance Portability and Accountability Act (HIPAA) compliant protocol. Convolutional neural networks (CNNs) with transfer learning were used to extract image-based characteristics of lesions and of parenchymal patterns (on the contralateral breast) directly from the FFDM images. Classification performance in the task of distinguishing between malignant and benign cases was evaluated and compared between analysis of tumors alone and combined analysis of tumor and parenchymal patterns, with the area under the receiver operating characteristic (ROC) curve (AUC) used as the figure of merit. Using only lesion image data, the transfer learning method yielded an AUC value of 0.871 (SE = 0.025), and using combined information from both lesion and parenchyma analyses, an AUC value of 0.911 (SE = 0.021) was observed. This improvement was statistically significant (p = 0.0362). Thus, we conclude that using CNNs with transfer learning to combine extracted image information of both tumor and parenchyma may improve breast cancer diagnosis.
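A minimal PyTorch sketch of this kind of transfer-learning feature extraction follows: deep features are pulled from tumor and contralateral-parenchyma ROIs with a pretrained CNN and fused by concatenation for a downstream classifier. The VGG19 backbone, the feature layer, the 224 × 224 ROI size, and fusion by concatenation are assumptions for illustration, not the study's exact configuration.

```python
import torch
import torchvision.models as models

# Pretrained backbone used only as a fixed feature extractor.
backbone = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
backbone.eval()
feature_extractor = torch.nn.Sequential(
    backbone.features, backbone.avgpool, torch.nn.Flatten()
)

def extract_features(roi_batch: torch.Tensor) -> torch.Tensor:
    """roi_batch: (N, 3, 224, 224) ROIs replicated to 3 channels; returns (N, D) features."""
    with torch.no_grad():
        return feature_extractor(roi_batch)

tumor_rois = torch.rand(4, 3, 224, 224)       # placeholder tumor ROIs
parenchyma_rois = torch.rand(4, 3, 224, 224)  # placeholder contralateral ROIs

# Fuse per-case tumor and parenchyma features for a downstream classifier.
fused = torch.cat([extract_features(tumor_rois),
                   extract_features(parenchyma_rois)], dim=1)
print(fused.shape)
```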
KEYWORDS: Digital breast tomosynthesis, Breast, Convolutional neural networks, Computer aided diagnosis and therapy, Feature extraction, Breast cancer, Digital mammography, Image classification, Databases
With the growing adoption of digital breast tomosynthesis (DBT) in breast cancer screening protocols, it is important to compare the performance of computer-aided diagnosis (CAD) of breast lesions on DBT images against that on conventional full-field digital mammography (FFDM). In this study, we retrospectively collected FFDM and DBT images of 78 lesions from 76 patients, each lesion biopsy-proven as either malignant or benign. A square region of interest (ROI) was placed to fully cover the lesion on each FFDM image, DBT synthesized 2D image, and DBT key slice image in the craniocaudal (CC) and mediolateral oblique (MLO) views. Features were extracted from each ROI using a pretrained convolutional neural network (CNN). These features were then input to a support vector machine (SVM) classifier, and the area under the ROC curve (AUC) was used as the figure of merit. We found that in both the CC and MLO views, the synthesized 2D image performed best (AUC = 0.814 and 0.881, respectively) in the task of lesion characterization. Small database size was a key limitation of this study and could lead to overfitting in the application of the SVM classifier. In future work, we plan to expand this dataset and to explore more robust deep learning methodology such as fine-tuning.
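The CNN-feature-to-SVM pipeline described above can be sketched with scikit-learn as follows; the linear kernel, feature standardization, fivefold cross-validated AUC, and the randomly generated stand-in features are illustrative assumptions rather than the study's setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Placeholder pretrained-CNN features for the 78 lesions and their
# biopsy-proven labels (1 = malignant, 0 = benign) -- random stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(78, 512))
y = rng.integers(0, 2, size=78)

# Standardize features, then classify with a linear-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
aucs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Mean cross-validated AUC = {aucs.mean():.3f}")
```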