This PDF file contains the front matter associated with SPIE Proceedings Volume 13174, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Contrast-enhanced digital breast tomosynthesis and mammography
We report on an optimization-based image reconstruction algorithm for contrast-enhanced digital breast tomosynthesis (DBT) using dual-energy scanning. The algorithm is designed to enable quantitative imaging of iodine-based contrast agent by mitigating the depth-blur artifact. The depth blurring is controlled by exploiting gradient sparsity of the contrast agent distribution, and we find that minimization of directional total variation (TV) is particularly effective at exploiting this sparsity for the DBT scan configuration. In this initial work, contrast agent imaging is performed by reconstructing images from DBT data acquired at source potentials of 30 and 49 kV, followed by weighted subtraction to suppress background glandular structure and isolate the contrast agent distribution. The algorithm is applied to DBT data, acquired with a Siemens Mammomat scanner, of a structured breast phantom with iodine contrast agent inserts. Results for both in-plane and transverse-plane imaging with directional TV minimization are presented alongside images reconstructed by filtered back-projection for reference. Directional TV is able to substantially reduce depth blur for the iodine-based contrast agent objects.
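The weighted-subtraction step described above can be sketched as follows; the weight `w` and the toy contrast values are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def weighted_subtraction(high_kv, low_kv, w):
    """Weighted dual-energy subtraction: suppress glandular background so
    that the iodine distribution remains. The weight w (assumed here)
    would be calibrated to cancel glandular/adipose tissue contrast."""
    return high_kv - w * low_kv

# Toy example: a background texture common to both energy images cancels
# exactly when w matches its contrast ratio, leaving only the iodine insert.
rng = np.random.default_rng(0)
background = rng.random((4, 4))
iodine = np.zeros((4, 4))
iodine[1, 1] = 1.0
low_img = background + 0.2 * iodine          # 30 kV: background + iodine
high_img = 0.5 * background + 0.3 * iodine   # 49 kV: attenuated background
de_img = weighted_subtraction(high_img, low_img, w=0.5)
```

With `w = 0.5` the shared background cancels term by term, so `de_img` is proportional to the iodine insert alone, which is the point of the subtraction.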
Deep learning models are the state of the art for most medical imaging applications, including mammography. However, large amounts of data are generally required to train them. Radiomic analysis has shown the potential to improve clinical decision support systems for small datasets, but one of the challenges facing the clinical implementation of radiomics is reproducibility. Our goal is to show that assessing radiomic feature uncertainty can improve the robustness and performance of radiomic-based prediction models for contrast-enhanced digital mammographic images. Additionally, we propose the use of a pretrained digital breast tomosynthesis (DBT) lesion detection model as a feature extractor to boost the prediction framework. The prediction goal was the immunohistochemical status of breast cancer in 33 patients. We assessed two sources of uncertainty: misalignment between the subtracted images and region-of-interest delineation variability. Including uncertainties in the training step improved the performance of the prediction models, and using the DBT lesion detection model to boost the prediction improved the overall radiomic model performance for the PR, ER and Ki67 receptors.
We present an automated method to generate synthetic contrast-enhanced mammography cases with simulated microcalcification clusters. The method accounts for existing textures in the breast, with the simulated clusters inserted in the low-energy image; in parallel, potential mass-like enhancement is modelled from real values in the recombined image. The same deep learning model was trained with different amounts and ratios of real and synthetic data. When trained with real data only, malignant masses were more often correctly detected and classified than malignant microcalcification clusters. Adding synthetic data with simulated clusters during training increased detection sensitivity for all types of malignant lesions while maintaining similar levels of AUC for classification, and this enhanced performance was consistent on both internal and external test sets. These findings demonstrate the potential of synthetic data to enhance deep learning models, especially when real data are scarce or imbalanced.
Contrast-enhanced digital mammography (CEDM) and contrast-enhanced digital breast tomosynthesis (CEDBT) highlight the uptake of iodinated contrast agent in breast lesions in dual-energy (DE) subtracted images. In conventional methods, low-energy (LE) and high-energy (HE) images are acquired with two separate exposures, referred to as the dual-shot (DS) method. Patient motion between the two exposures can leave residual breast tissue structure in DE images, which reduces iodinated lesion conspicuity. We propose to use a direct-indirect dual-layer flat-panel detector (DI-DLFPD) to acquire LE and HE images simultaneously, thereby eliminating the motion artifact. The DI-DLFPD system comprises a k-edge filter at the tube output, an amorphous-selenium (a-Se) direct detector as the front layer, and a cesium iodide (CsI) indirect detector as the back layer. This study presents CEDM and CEDBT results from the first DI-DLFPD prototype. For comparison, CEDM and CEDBT images were also acquired with the DS technique, with simulated 2 mm patient motion between the LE and HE exposures. The figure of merit (FOM) used to assess iodinated object detectability is the dose-normalized signal difference to noise ratio squared. Our results show that DI-DLFPD images exhibit complete cancellation of breast tissue structure, leading to significantly improved iodinated object detectability and more accurate iodine quantification compared to DS images with simulated patient motion.
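The figure of merit named above can be sketched as below; the ROI-statistics convention for SDNR and the dose units are simplifying assumptions, since the abstract does not spell them out.

```python
import numpy as np

def fom(signal_roi, background_roi, dose):
    """Dose-normalized signal-difference-to-noise ratio squared:
    FOM = SDNR**2 / dose, with SDNR estimated from ROI means and the
    background standard deviation (a common convention, assumed here)."""
    sdnr = (np.mean(signal_roi) - np.mean(background_roi)) / np.std(background_roi)
    return sdnr ** 2 / dose

signal = np.array([1.30, 1.25, 1.35, 1.30])   # iodinated-object ROI values
backgd = np.array([1.00, 1.10, 0.90, 1.00])   # background ROI values
```

Squaring the SDNR makes the metric proportional to dose for quantum-limited noise, so dividing by dose yields a dose-independent detectability measure: doubling the dose at fixed SDNR halves the FOM.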
In this study, we investigate the performance of advanced 2D acquisition geometries (Pentagon and T-shaped) in digital breast tomosynthesis (DBT) and compare them against the conventional 1D geometry. Unlike the conventional approach, our proposed 2D geometries also incorporate anterior projections away from the chest wall. Implemented on the Next-Generation Tomosynthesis (NGT) prototype developed by the X-ray Physics Lab (XPL) at UPenn, we used various phantoms to compare the three geometries: a Defrise slab phantom with alternating plastic slabs to study low-frequency modulation; a Checkerboard breast phantom (a 2D adaptation of the Defrise design) to study the ability to reconstruct the fine features of the checkerboard squares; and a 360° Star-pattern phantom to assess aliasing and compute the Fourier-spectral distortion (FSD) metric, which quantifies spectral leakage and the contrast transfer function. We find that both Pentagon and T-shaped scans provide greater modulation amplitude of the Defrise phantom slabs and better resolve the squares of the Checkerboard phantom than the conventional scan. Notably, the Pentagon geometry exhibits a significant reduction in aliasing of spatial frequencies oriented in the right-left (RL) medio-lateral direction, corroborated by a near-complete elimination of spectral leakage in the FSD plot. Conversely, the T-shaped scan redistributes the aliasing between the posteroanterior (PA) and RL directions, maintaining non-inferiority against the conventional scan, which is predominantly affected by PA aliasing. The results of this study underscore the potential of incorporating advanced 2D geometries in DBT systems, offering marked improvements in imaging performance over the conventional 1D approach.
Power-law phantoms have been useful for assessing the imaging properties of breast imaging systems. Recent advances in 3D printing have enabled printing of 3D objects with density variations, and a physical 3D-printed power-law phantom was created. The purpose of this study was to explore the characteristics of phantom images acquired with a commercial breast tomosynthesis system and to compare the results with prior findings using breast images. A 3D phantom was printed using PixelPrint. The texture variations in such phantoms are described by the power-law exponent beta; the design beta of the phantom model was 3.4. The printed phantom was imaged on a Hologic Selenia 3Dimensions breast tomosynthesis unit in 2D and 3D imaging modes, and power-spectrum analysis was performed on the 2D and 3D images to estimate beta. Visual inspection of the images revealed grid artifacts in the phantom from the printing process; for the power-law analysis, these regions were excluded by applying a mask in the Fourier domain. The observed difference of the power-law exponent between projection and tomosynthesis images (0.24) was similar to the differences found in patient image studies (0.17 to 0.21). Power spectral analysis of a novel 3D-printed power-law phantom thus resulted in changes of beta similar to those observed in patient data, indicating that such phantoms can predict the tomosynthesis image characteristics of breast images.
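A minimal version of the power-spectrum estimate of beta is sketched below, fitting log P(f) against log f over a mid-frequency band. The band limits, the straight least-squares fit (instead of radial averaging), and the Fourier-domain masking details are simplifying assumptions, not the study's exact pipeline.

```python
import numpy as np

def power_law_beta(img):
    """Estimate the power-law exponent beta from log P(f) ~ -beta * log f.

    Fits a line to log power vs. log radial frequency over a mid-band;
    band limits and pixel-frequency units are simplifying assumptions."""
    ny, nx = img.shape
    ps = np.abs(np.fft.fft2(img - img.mean())) ** 2
    fy, fx = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing="ij")
    f = np.hypot(fx, fy).ravel()
    p = ps.ravel()
    keep = (f > 0.02) & (f < 0.4)      # avoid DC and corner frequencies
    slope, _ = np.polyfit(np.log(f[keep]), np.log(p[keep] + 1e-30), 1)
    return -slope

# Synthesize noise with a known beta = 3.0 and recover it.
rng = np.random.default_rng(1)
n = 128
fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
f = np.hypot(fx, fy)
f[0, 0] = 1.0                          # avoid division by zero at DC
spectrum = f ** (-3.0 / 2) * np.exp(2j * np.pi * rng.random((n, n)))
test_img = np.fft.ifft2(spectrum).real
beta_hat = power_law_beta(test_img)
```

The synthesis step shapes white noise so its power spectrum falls as f to the minus beta, which is also how power-law texture phantoms are commonly defined.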
Our aim was to quantitatively study the non-Gaussian statistical properties of breast parenchymal texture on digital breast tomosynthesis (DBT) images acquired from two vendors. This IRB-approved retrospective study included patients who had normal screening DBT exams performed on a GE unit in January 2018 and normal screening DBT exams in adjacent years (2017 or 2019) on a Hologic unit. We use Laplacian Fractional Entropy (LFE) as a measure of the non-Gaussian statistical properties of DBT images. Fifty-four Gabor filters spanning 9 center frequencies from 0.15 to 2.0 cyc/mm were constructed at six orientations (0 to 150 deg). Filter responses were generated by convolving each filter with a craniocaudal (CC) view DBT slice. All responses from 6 mm inside the boundary of the breast in each slice were used to form response histograms, from which the LFE measures at different frequencies were estimated; the histograms binned the central 99% of the responses. The averaged LFE results among the central 80% of DBT slices were reported for each exam and compared between the two vendors. A total of 7,894 DBT slices from 69 exams in 25 women were included. Significant differences in LFE were observed between DBT images acquired from Hologic and GE, using images from the same 25 patients in consecutive years. There are quantitative differences in the non-Gaussian properties of breast parenchymal texture as presented by different DBT vendors. Our findings have relevance for external validation of AI algorithms across vendors.
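The 54-filter bank (9 center frequencies times 6 orientations) can be sketched as below. The cosine-phase kernel, isotropic Gaussian envelope, and the cycles-per-pixel frequency mapping are assumptions; the study specifies frequencies in cyc/mm, which would depend on the detector pixel pitch.

```python
import numpy as np

def gabor_kernel(freq, theta, size=31):
    """Even (cosine-phase) Gabor kernel at spatial frequency `freq`
    (cycles/pixel) and orientation `theta` (radians). The isotropic
    Gaussian envelope width tied to 1/freq is a simplifying assumption."""
    sigma = 0.5 / freq
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along theta
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# 9 center frequencies x 6 orientations (0..150 deg) = 54 filters.
freqs = np.geomspace(0.05, 0.45, 9)          # cycles/pixel (assumed mapping)
thetas = np.deg2rad(np.arange(0, 180, 30))
bank = [gabor_kernel(fc, th) for fc in freqs for th in thetas]
```

Convolving a DBT slice with each kernel and histogramming the responses is then the input to the LFE estimation described in the abstract.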
Digital breast tomosynthesis (DBT) is a pseudo-3D x-ray imaging technique that limits the tissue superimposition problem observed in 2D mammography and is therefore on track to become a standard of care for breast cancer screening. However, patient motion during examination is relatively common and may compromise the consistency of the reconstruction problem and decrease the conspicuity of clinical features in the resulting 3D volumes. Dynamic reconstruction of motion-corrupted cases is therefore essential for patient care. The reconstruction problem is enriched to include estimation and correction of patient motion in 3D. The dynamic problem is solved by a two-stage process alternating between a motion-corrected reconstruction based on the SIRT algorithm and a motion estimation based on the Projection-based Digital Volume Correlation (P-DVC) method. It is coupled with a multiscale coarse-to-fine procedure that captures both large and fine displacements. Additionally, the dynamic reconstruction is focused on a local region to simplify the kinematic description of patient motion and limit computation time. The method was applied to 63 local regions across 19 DBT exams exhibiting motion artefacts. It significantly reduced the objective function, corrected motion artefacts, and revealed smaller details that were previously blurred. Dynamic tomosynthesis improves the consistency of the reconstruction problem and image quality by enhancing the visibility of small, critical clinical features. Local reconstruction around areas of interest helps radiologists focus on specific details while limiting computation time.
The purpose of this study was to test the generalizability of our sample-efficient lesion detection framework for biopsy-proven breast lesion detection on digital breast tomosynthesis (DBT). We developed a sample-efficient breast lesion detection framework using a limited set of biopsied DBT lesions. Instead of using a large in-house lesion dataset that only a few groups can access, we used non-biopsied false-positive findings to augment the limited training set. We applied our framework to open-source single- and multi-stage convolutional neural network based object detectors to show the generalizability of the framework, and then combined the different detector models in an ensemble to further improve detection performance. Using a challenge validation set, we achieved detection performance (a mean sensitivity of 0.84 FPs per DBT volume and a sensitivity of 0.80 at 2 false positives per image) close to that of one of the top-ranking algorithms in the DBT lesion detection challenge, which augmented its training set with a large in-house mammogram dataset.
The accurate quantification of breast density in screening programs can aid in assessing breast cancer risk and the potential for lesion masking. With the increasing use of digital breast tomosynthesis (DBT) and ongoing studies on its application in breast cancer screening, it is important to accurately quantify breast density, i.e., determine breast glandularity, from the pseudo-3D DBT images. In this work we propose a non-learning regularization method that compensates for limited-angle artifacts to estimate breast density, without relying on precise localization of the tissue structures inside the breast. Drawing inspiration from the phenomenon of gravitational accretion, we establish a correspondence between the reconstructed fractions of each tissue type, i.e., adipose and fibro-glandular tissue, and a set of particles with attractive interactions. This redistributes the tissue fractions by clustering, which allows material identification of each voxel and quantification of breast density. We extend our previous work by 1) improving the mechanics of the redistribution and 2) adding realistic noise to our simulations. We evaluated our method using 45 3D digital breast phantoms based on segmented breast CT patient data, aiming for glandularity estimations rapid enough to be clinically relevant. In the noise-free scenario, the difference between the actual and reconstructed glandularities was, on average, −0.012, ranging between −0.087 and +0.044; in the noisy scenario it was −0.001, ranging between −0.046 and +0.121. These results indicate that the proposed method yields 1-minute, noise-independent estimations of breast density.
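The quantity being estimated, volumetric glandularity, reduces to the mean fibro-glandular fraction over the breast voxels once each voxel has been assigned a tissue fraction; a minimal sketch of that final step follows (the accretion-inspired redistribution that produces the per-voxel fractions is the paper's contribution and is not reproduced here).

```python
import numpy as np

def glandularity(gland_fraction, breast_mask):
    """Volumetric breast density: mean per-voxel fibro-glandular fraction
    over the breast region. `gland_fraction` is assumed to be the output
    of the clustering/redistribution step, with values in [0, 1]."""
    return float(np.mean(gland_fraction[breast_mask]))

# Toy volume: half the breast voxels fully glandular, half fully adipose.
vol = np.zeros((2, 2, 2))
vol[0] = 1.0                         # glandular half
mask = np.ones_like(vol, dtype=bool)
density = glandularity(vol, mask)
```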
Digital breast tomosynthesis (DBT) is an emerging x-ray breast imaging modality that scans the breast from multiple angles, allowing reconstruction of the breast's interior into a pseudo-3D image. While optimization variables in mammography are limited to x-ray tube voltage and exposure, DBT offers additional optimization possibilities such as the scan angular range. Previous studies have established that wide-angle DBT excels at detecting larger objects, such as tumors, while narrow-angle DBT is superior at detecting smaller structures such as microcalcifications. It would therefore be advantageous to choose between narrow- and wide-angle scans in a patient-specific manner. In this study, we propose a method that uses pre-exposure scan data obtained during the automatic exposure control (AEC) process, immediately before the actual DBT scan, to predict patient lesion information in advance. We generated paired standard-dose mammography and DBT pre-exposure scans using Monte Carlo-based numerical simulation and trained a U-Net with an added WGAN loss on these pairs. Using this model, we synthesized pseudo-pre-exposure images from a real mammography dataset. Subsequently, a YOLO-based classification network was employed to distinguish whether masses were present or absent in the corresponding pre-exposure images. The trained network demonstrated an accuracy of 0.87 and an AUROC of 0.95, comparable to those of a classifier network using conventional mammography; a paired t-test also suggests no statistically significant difference between the classifiers (t = 0.22). This study may contribute to enhancing breast cancer detection performance by enabling a patient-specific DBT scan-range option.
Digital breast tomosynthesis (DBT) enables significantly higher cancer detection rates compared to full-field digital mammography (FFDM) without compromising the recall rate. However, regarding microcalcification assessment, established tomosynthesis system concepts still tend to be inferior to FFDM. To further strengthen the clinical role of DBT in breast cancer screening and diagnosis, a system concept was developed that enables fast wide-angle DBT with the in-plane resolution capabilities known from FFDM. The concept comprises a novel x-ray tube with an adaptive focal spot position, fast flat-panel detector technology, and innovative algorithmic concepts for image reconstruction. We have built a DBT system that provides tomosynthesis image stacks and synthetic mammograms from 50° tomosynthesis scans acquired in less than five seconds. In this contribution, we motivate the design of the system concept, present a physics characterization of its imaging performance, and outline the algorithmic concepts used for image processing. We conclude by illustrating the potential clinical impact with clinical case examples from first evaluations in Europe.
We present an objective comparison, on a level technological playing field, of cancer detection rates (CDRs), performed as a meta-analysis of publications about dense breasts using FDA-approved imaging modalities available for supplemental breast cancer screening in the USA. Awareness is growing of the relatively low overall cancer detection rate of digital mammography (DM), digital breast tomosynthesis (DBT) and breast ultrasound (US), especially for the nearly 25 million screening-eligible women with increased breast density (BI-RADS C and D). Since a majority of research papers compare against the screening “gold standard” of DM, the analysis uses pooled CDRs normalized to DM. Other important factors, such as the number of theoretical net lives saved based on a benefit-to-risk comparison of ionizing imaging modalities, are included. Lingering concerns about the ionizing radiation dose of supplemental screening options are also discussed in the comparative perspective of the unavoidable yearly background dose every human being receives. This objective, normalized analysis identifies contrast-enhanced mammography (CEM) and molecular breast imaging (MBI) as having CDRs within 90% (and greater) of breast magnetic resonance imaging (MRI). These top three “vascular imaging modalities” each employ injected contrast agents to enhance visualization and facilitate the detection of early-stage breast cancers. By enabling earlier diagnosis with more appropriate supplemental breast imaging, the use of CEM, MBI or MRI will decrease mortality; reduce patients’ physical, financial and psychological trauma; and reduce costs per cancer detected, with overall benefit to patients, hospitals and payors, thus providing long-term societal benefits.
The objective of our study was to explore the feasibility of integrating artificial intelligence (AI) algorithms for breast cancer detection into a portable, point-of-care ultrasound (POCUS) device. This proof-of-concept implementation demonstrates a platform for integrating AI algorithms into a POCUS device against a performance benchmark of at least 15 frames/second. Our methodology involved five AI models (FasterRCNN+MobileNetV3, FasterRCNN+ResNet50, RetinaNet+ResNet50, SSD300+VGG16, and SSDLite320+MobileNetV3), pretrained on public datasets of natural images and fine-tuned on a dataset of gelatin-based breast phantom images with both anechoic and hyperechoic lesions mimicking real tissue characteristics. We created various gelatin-based ultrasound phantoms containing ten simulated lesions, ranging from 4 to 20 mm in size. Our experimental setup used the Clarius L15 scanning probe, connected via Wi-Fi to both a tablet and a laptop, forming the core of our development platform. The phantom data were divided into training, validation, and held-out testing sets on a per-video basis. We executed 200 timing trials for each fine-tuned AI model, streaming scanning video from the ultrasound probe in real time. SSDLite320+MobileNetV3 emerged as a standout, showing a mean frame-to-frame timing of 0.068 seconds (SD = 0.005), approximately 14.71 FPS, closely followed by FasterRCNN+MobileNetV3 with a mean timing of 0.123 seconds (SD = 0.016), about 8.13 FPS. Both models showed acceptable performance in lesion localization. Compared to our goal of 15 frames/second, only the SSDLite320+MobileNetV3 architecture performed with sufficient evaluation speed for real-time use. Our findings show the necessity of using AI architectures designed for edge devices for real-time use, as well as the potential need for hardware acceleration to encode AI models for use in POCUS.
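The quoted FPS figures follow from inverting the mean frame-to-frame time; a minimal sketch of that conversion is shown below (the 1/mean convention is an assumption, but it is consistent with the reported 0.068 s and 14.71 FPS numbers).

```python
import numpy as np

def fps_from_timings(frame_times_s):
    """Mean frames/second and timing standard deviation from a list of
    per-frame latencies, as collected over timing trials for each model."""
    t = np.asarray(frame_times_s, dtype=float)
    return 1.0 / t.mean(), t.std()

# Reproduce the reported conversion: 0.068 s/frame -> ~14.71 FPS.
fps, sd = fps_from_timings([0.068] * 200)
```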
Earlier treatment of breast cancer results in better survival, and screening enables this through detection of tumors before they cause symptoms. On the negative side, some tumors detected through screening would never have caused symptoms, leading to overdiagnosis. Improvements in image quality allow detection and diagnosis of even smaller tumors at an even earlier stage. This study aims to model the connection between improved image quality and cancer detection in screening, and how earlier detection of tumors affects both mortality and overdiagnosis. A Monte Carlo-based screening model was developed, simulating yearly incidence, progression and detection of breast cancer in a population of screened women from age 30 and up, using clinical data sources. To investigate the effect of increasing image quality, the model was run with two different settings, each arm including 100 000 women: one with standard image quality and another with increased image quality, modelled as equivalent to digital breast tomosynthesis (DBT) in sensitivity and average detected tumor size. According to the simulations, increasing mammography image quality to a DBT level increases overdiagnosis by 53% in absolute terms and from 3.0% to 3.7% of screen-detected cancers in relative terms. On the other hand, the number of prevented breast cancer deaths increases by 123%, as more patients survive cancer treatment and later die of natural causes. The fraction of cancer patients who survive longer due to screening increases from 59.4% to 76.1% among those who eventually die from breast cancer. The model suggests that improved image quality results in better screening outcomes but also increased overdiagnosis; defining an optimal trade-off is very important for future screening.
Breast compression pressure (CP), computed as the force over the paddle contact area, is an important measure of compression quality in mammography, evidenced by associations with screening performance, including the odds of interval cancer compared to screen-detected cancer. Here we introduce a novel algorithm to determine CP from processed images, the Processed Image Compression Pressure Estimator (PICPE). The aim is for PICPE outputs to align with those of an established method that estimates CP from unprocessed images, such that results are comparable between image formats regardless of vendor or modality. Multiple datasets were assembled for testing PICPE across common digital mammography (DM) and digital breast tomosynthesis (DBT) systems, representing seven machine models from four vendors. Comparison of CP estimates derived from unprocessed and processed image pairs demonstrated excellent correlations (>0.99), with a relative difference below 5% between results from the two image formats. Uncertainties in CP estimates arising from variability in calibrated parameters, such as the compressed breast thickness readout, are expected to be substantially greater than the relative differences between image formats. In future work, further testing of different image types, especially a wider variety of DBT images, should confirm the robust general applicability of PICPE. The results suggest that PICPE is a practical alternative for CP estimation when only processed DM or DBT images are available.
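The defining formula, pressure as force over paddle contact area, is simple to state; a sketch with assumed units follows. The hard part of PICPE, estimating the contact area from a processed image, is not reproduced here.

```python
def compression_pressure_kpa(force_newton, contact_area_cm2):
    """Compression pressure = force / paddle contact area.
    With force in newtons and area in cm^2, 1 N/cm^2 = 10 kPa.
    The contact-area estimate would come from image segmentation."""
    return 10.0 * force_newton / contact_area_cm2

# Illustrative values: 100 N spread over 100 cm^2 of paddle contact.
cp = compression_pressure_kpa(100.0, 100.0)
```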
Previously, we proposed a method to use an artificial intelligence (AI) tool to integrate double reading into a single-reading environment (AI-directed double reading, AID-DR). The AI tool scores each case offline, and the score is compared to the radiologist’s recall recommendation. If the AI score is above the high threshold and the radiologist did not recall the woman, or if the score is below the low threshold and the radiologist recommended recall, the case is sent to a second radiologist. In this presentation, we examine the effect of the choice of second radiologist and of the low and high threshold values. We found that if the second radiologist has a high recall rate, there is little benefit to AID-DR. However, if the second radiologist has a low recall rate, AID-DR yields slightly higher sensitivity with a lower overall recall rate compared to single reading, without requiring the second radiologist to read many cases. Both threshold values affect the overall sensitivity and specificity, with the biggest gains in specificity.
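The routing rule described above can be sketched directly; the threshold values are free parameters, as discussed in the abstract.

```python
def needs_second_read(ai_score, first_reader_recall, low_thr, high_thr):
    """AID-DR routing: a case goes to the second radiologist only when the
    offline AI score disagrees strongly with the first reader's decision."""
    if ai_score > high_thr and not first_reader_recall:
        return True   # AI suspicious, first reader did not recall
    if ai_score < low_thr and first_reader_recall:
        return True   # AI confidently normal, first reader recalled
    return False      # AI and first reader agree; single reading stands
```

Only the disagreement cases generate extra reads, which is why the scheme adds little workload for the second radiologist.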
In many mammography facilities only the processed mammograms are preserved, to reduce the space requirement and cost of digital archiving. The original unprocessed "raw" mammograms are preferred for quantitative analysis, since they more faithfully represent the x-ray transmission pattern and thus the breast composition. We present the results of a machine learning algorithm that attempts to restore a raw mammogram from its processed version. In this study, 2776 paired sets of the two image types were obtained, corresponding to 635 patients. The machine learning model was based on a U-Net with attention gates on the long skip connections. A two-pass learning approach was used. The first pass used a mean-squared-error loss function with focus on the periphery of the breast, with 5 epochs and a learning rate of 10⁻⁵ to settle the network weights quickly. In a second pass, a perceptual loss function, based on features extracted from a pretrained VGG16 neural net, was used with 15 epochs and a 10⁻⁶ learning rate. When tested on central ROIs, the mean relative absolute difference (MRAD) and structural similarity index (SSIM) between the original and restored raw images were 0.04 and 0.98, respectively. On the complete (but downsampled) images, MRAD and SSIM were 0.10 and 0.99, respectively. Lesion detectability and cancer masking potential were also measured on the original and restored raw images, showing Pearson correlations of 0.89 in both cases. The algorithm shows potential for using restored raw images, derived from processed images, for quantitative analysis. Future work will extend the approach to higher-resolution images to preserve detail and to more efficient network architectures to reduce memory requirements.
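One of the reported figures of merit, the mean relative absolute difference, can be sketched on flattened pixel lists as follows (one plausible definition; the paper's exact normalization may differ):

```python
def mrad(raw_pixels, restored_pixels, eps=1e-8):
    """Mean relative absolute difference between paired pixel values:
    mean(|restored - raw| / raw). eps guards against division by zero."""
    assert len(raw_pixels) == len(restored_pixels)
    n = len(raw_pixels)
    return sum(abs(r - s) / (abs(r) + eps)
               for r, s in zip(raw_pixels, restored_pixels)) / n
```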
Early detection of breast cancer is important for improving survival rates. Based on accurate and tissue-specific risk factors, such as breast density and background parenchymal enhancement (BPE), risk-stratified screening can help identify high-risk women and provide personalized screening plans, ultimately leading to better outcomes. Measurements of density and BPE are carried out through image segmentation, but volumetric measurements may not capture the qualitative scale of these tissue-specific risk factors. This study aimed to create deep regression models that estimate the interval scale underlying the BI-RADS density and BPE categories. These models incorporate a 3D convolutional encoder and transformer layers to comprehend time-sequential data in DCE-MRI. The correlation between the models and the BI-RADS categories was evaluated with Spearman coefficients. Using 1024 patients with a BI-RADS assessment score of 3 or less and no prior history of breast cancer, the models were trained on 50% of the data and tested on the remaining 50%. The density and BPE ground truth labels were extracted from the radiology reports using BI-RADS BERT. The ordinal classes were then translated to a continuous interval scale using a linear link function. The density regression model is strongly correlated with the BI-RADS category, with a correlation of 0.77, slightly lower than segmentation %FGT. The BPE regression model with transformer layers shows a moderate correlation with radiologists at 0.52, similar to segmentation %BPE. The deep regression transformer has an advantage over segmentation in that it does not need time-point image registration, making it easier to use.
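A minimal sketch of translating ordinal categories to a continuous interval scale with a linear link (the bin-midpoint mapping below is our illustrative choice; the paper does not specify its exact link function here):

```python
def ordinal_to_interval(category: int, n_classes: int = 4) -> float:
    """Map an ordinal class index (0 .. n_classes-1) to the midpoint of
    its bin on a continuous [0, 1] scale: a simple linear link."""
    if not 0 <= category < n_classes:
        raise ValueError("category out of range")
    return (category + 0.5) / n_classes
```

A regression model trained against these targets can then output any value on the continuum, not just the four class midpoints.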
High breast density is considered a risk factor for breast cancer, and it is particularly important for early detection, when masses are small and difficult to see. The BI-RADS reporting system provides guidelines for standardized visual assessment of breast density. Nevertheless, such guidelines, which rely in part on the delineation of dense tissues, are likely to lead to variability between annotators. In the present study, we hypothesized that this variability is amplified when comparing density scores assigned by trained radiologists from different countries. The hypothesis was tested on a retrospectively collected dataset of mammography images, each assigned a density value by 3 radiologists from France and 4 radiologists from the United States. In a further step, we used an AI model to automatically assess density in all images and compared its predictions to the annotations obtained from both countries. We then implemented a calibration procedure to adjust those predictions for the regional effect. Comparing consensus-based labels between the French and the US datasets revealed a significant difference. The proposed AI-based model, after undergoing a region-specific calibration procedure, was consistent with the expected behavior, showing good agreement with the French-based or US-based consensus, respectively.
This study investigated the potential of longitudinal mammographic breast percent density (PD) to differentiate high-risk, benign, and normal/healthy women for developing breast cancer. We used a dataset of sequential screening mammography exams of 406 women from the University of Pittsburgh Medical Center. Each subject had four sequential mammograms, where the first three exams were cancer negative and the last had either a biopsy-proven cancer (N = 116), benign lesions rated as BI-RADS 2 (N = 70), or normal results rated as BI-RADS 1 (N = 220). We computed the PD, the ratio of dense tissue over the breast area, using our in-house breast density segmentation algorithm for the four standard views of each exam. We then averaged the PD from all four views to produce an exam PD for a given subject at a given screening time point. Then, we fitted a linear curve to the PD values from the three priors and extracted the slope as a measure of relative breast density change over time. We compared the slope values of each group (cancer, benign, and normal) using a two-sample Student's t-test. We found that the slopes of all groups were negative (group medians ranged from −0.0102 to −0.006), meaning that breast density decreased over time. However, the rates differed. The slope values of the benign and normal groups were similar to each other (p = 0.442), but their rates of decrease were significantly faster than that of the cancer group (p < 0.032).
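The per-subject trend measure is an ordinary least-squares slope over the three prior exams. A minimal sketch (times and density values below are hypothetical):

```python
def trend_slope(times, values):
    """Ordinary least-squares slope of values against times, e.g. percent
    density across the three prior screening exams."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(times, values))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# density slowly declining across three prior exams -> negative slope
s = trend_slope([0, 1, 2], [30.0, 29.0, 28.5])
```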
Breast density assessment is an important part of breast cancer risk assessment, as density is known to correlate with risk. Mammograms are typically assessed for density by multiple expert readers; however, interobserver variability can be high. Meanwhile, automatic breast density assessment tools are becoming more prevalent, particularly those based on artificial intelligence. We evaluate one such method against expert readers. A cohort of 1329 women going through screening was used to compare two expert readers selected from a pool of 19, and a single such reader versus a deep learning based model. Whilst the mean differences for the two experiments were statistically similar, the limits of agreement between the AI method and a single reader were substantially narrower, at +SD 21 (95% CI: 20.07, 22.13) and −SD 22 (95% CI: −22.95, −20.90), against +SD 31 (95% CI: 28.91, 33.09) and −SD 28 (95% CI: −30.09, −25.91) between two expert readers. Additionally, the absolute intraclass correlation coefficients (two-way random, multiple measures) were 0.86 (95% CI: 0.85, 0.88) between the AI and a reader and 0.77 (95% CI: 0.75, 0.80) between the two readers, a statistically significant difference. Our AI-driven breast density assessment tool has better inter-observer agreement with a randomly selected expert reader than two expert readers (drawn from a pool) do with one another. Additionally, the automatic method has similar inter-view agreement to experts and maintains consistency across density quartiles. Deep learning enabled density methods can offer a solution to the reader bias issue and provide consistent density scores.
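Limits of agreement of this kind come from a standard Bland-Altman analysis of paired density scores. A minimal sketch (the toy reader scores are hypothetical):

```python
import math

def limits_of_agreement(reader_a, reader_b):
    """Bland-Altman analysis: mean of the paired differences and the 95%
    limits of agreement (mean ± 1.96 * SD of the differences)."""
    d = [a - b for a, b in zip(reader_a, reader_b)]
    n = len(d)
    mean = sum(d) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in d) / (n - 1))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd
```

Narrower limits mean the two raters (here, AI vs. reader) disagree less across the cohort, even if their mean difference is similar.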
Nehal Doiphode, Vinayak S. Ahluwalia, Walter C. Mankowski, Eric A. Cohen, Sarthak Pati, Lauren Pantalone, Spyridon Bakas, Ari Brooks, Celine M. Vachon, et al.
Recognizing breast density (BD) as a critical risk factor for breast cancer, traditionally assessed through subjective radiological evaluation within the BI-RADS framework, this research seeks to mitigate inter-observer variability through automated, quantitative analysis. The transition to DBT offers a quasi-3D perspective that could enhance the accuracy of BD assessment, yet current FDA-cleared methods for volumetric breast density (VBD) estimation face limitations. Addressing these challenges, our work introduces a fully automated computational tool leveraging deep learning to accurately assess VBD from 3D DBT images without reliance on raw 2D data. Employing retrospective data compliant with privacy regulations, this study utilized DBT screening examinations from the Hospital of the University of Pennsylvania. A three-class segmentation model, based on the U-Net architecture, was developed to differentiate between non-breast/background, fatty breast tissue, and dense breast tissue in DBT images. A novel two-stage training method was devised to enhance model performance, particularly in avoiding mis-segmentation issues common in high-resolution mediolateral oblique images. This approach first utilized resized images for global shape recognition, followed by refined segmentation using a 3D U-Net on filtered input, emphasizing accurate dense tissue identification. Our model demonstrated strong performance, with the Dice score, a standard metric for evaluating segmentation accuracy, revealing substantial agreement between the model's predictions and the reference data. Validation of the model's effectiveness in breast cancer risk estimation was conducted through a case-control study, demonstrating a statistically significant association between DL-estimated VBD and cancer diagnosis.
Additional factors, including BMI and age at screening, were also found to be significantly associated with cancer status, underscoring the multifactorial nature of breast cancer risk. The model's predictive capability was further evidenced by an AUC of 0.63, indicating good performance. The study's implications are profound, offering a clinically significant tool for personalized breast cancer risk prediction and potentially enhancing screening strategies across diverse populations.
A complementary relationship between computer-aided detection (CAD) and risk prediction has been identified. To understand the factors triggering either cancer detection or risk prediction, we previously studied the performance of the deep learning (DL)-based risk prediction model, Mirai, using a feature-centric explainable AI (XAI) approach. A total of 16 calcification features were identified from Mirai as major risk-factor contributors. Several studies have revealed the existence of early detection signs on prior mammograms of screen-detected and interval cancers. Accordingly, the longitudinal behavior of calcifications may further improve the understanding of the causal relationship between Mirai calcification features and elevated risk. In this study, we hypothesize that the calcification features from Mirai can capture early suspicious signs, which may be important for predicting breast cancer development. Thus, we tracked the Mirai calcification features across two screening rounds using the breast polar coordinate system. Subsequently, we assessed the ability to predict the current Breast Imaging-Reporting and Data System (BI-RADS) assessment from prior mammograms. The results show that calcification features were able to capture early suspicious signs on prior mammograms at the same location, with an average polar angle difference of 13 degrees relative to the current mammograms. In addition, the calcification features were able to classify the current BI-RADS assessment with an area under the receiver operating characteristic curve (AUC) of 0.74 using prior mammograms. In conclusion, the predictive power of calcification features in short-term risk prediction may arise from their ability to detect early suspicious signs.
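Tracking a feature location across rounds in a polar coordinate system reduces to comparing polar angles about a reference origin, with wrap-around handled. A minimal sketch (the choice of origin and axis convention is ours for illustration, not necessarily that of the study):

```python
import math

def polar_angle_deg(x, y, ref_x, ref_y):
    """Polar angle (degrees) of an image location about a reference
    origin, e.g. a feature location in a breast polar coordinate system."""
    return math.degrees(math.atan2(y - ref_y, x - ref_x))

def angular_difference_deg(a, b):
    """Smallest absolute angular difference, wrapping at 360 degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)
```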
Appalachian regions like Eastern Kentucky have high breast cancer mortality rates despite lower incidence rates compared to the rest of Kentucky. This area also experiences increased obesity rates. Previous studies have linked high breast density to a higher risk of breast cancer, and obesity to poorer survival outcomes in other populations, but the relationship between BMI, breast density, and cancer progression in Appalachian Kentucky remains unclear. This retrospective study investigates these links in breast cancer patients from the region. We analyzed mammogram images of 1,405 women diagnosed with breast cancer at Markey Cancer Center between 2000 and 2018. Data on BMI, mammogram density, and cancer progression were collected. Mammograms were scored using the BI-RADS breast density scale. Additional data included age at diagnosis, BMI within a year of diagnosis, cancer stage, and treatment follow-up. Kaplan-Meier curves, log-rank tests, and Spearman correlation assessed the relationships among survival, BMI, and breast density. A significant negative correlation between BMI and breast density was found (Spearman correlation = −0.34; P < 0.0001). However, no significant associations were observed between breast density, obesity, and overall survival. The worst survival was seen in patients with BMI ≤ 18.5. In Appalachian Kentucky, BMI negatively correlates with breast density, but neither breast density nor obesity significantly impacts breast cancer prognosis. Underweight patients had poorer survival outcomes, suggesting that factors other than obesity and breast density influence prognosis in this demographic.
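The Spearman correlation used here is just Pearson correlation on ranks; for untied data it reduces to the classic closed form. A minimal sketch (this simple version assumes no tied values, which real BI-RADS categories would violate; library implementations handle ties):

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the classic formula
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)); assumes no tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```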
Recent AI breast cancer risk prediction models are difficult to interpret, limiting their clinical utility. In this work we explore the explainability of an AI-based risk prediction model by examining performance with respect to different characteristics of the future cancer. In particular, saliency maps were used to examine how often the model focused on regions coinciding with future lesions and to assess the characteristics of future lesions most likely to coincide with AI-assigned high-risk regions. An AI model for breast cancer risk prediction was previously trained on the UK OPTIMAM dataset, achieving an AUROC of 0.70 for the task of 3-year risk prediction. Revisiting the test set used to evaluate this model (n = 31,351 examinations), we obtained additional information about the future cancer cases (n = 1,053), including future cancer type (invasive/in-situ) and grade, and future lesion visual characteristics. Patient-level risk was compared across different cancer types and grades, and saliency maps were generated to perform a localisation study. The AI tool performed similarly for future invasive and in-situ disease, with no significant difference in risk score observed. Similarly, risk scores did not vary significantly with future cancer grade. Saliency map analysis showed that AI-indicated high-risk regions coincided more often with the location of future obvious lesions or lesions with calcifications. The results of this work provide insights into the decision-making process of the AI risk prediction tool. Further work is required to explore additional lesion characteristics and further validate these findings.
Breast density has been demonstrated to be an important risk factor for the development of breast cancer, and, therefore, different fully automated density assessment tools have been introduced to obtain quantitative glandular tissue measures. Density maps (DMs) provide local tissue information, representing the amount of glandular tissue between the image receptor and the x-ray source at every pixel in the image. Usually, DMs are obtained from "for processing," i.e. raw, mammograms. This can be problematic because such images are often not preserved in the clinical setting. The aim of this work is to introduce a deep learning based framework to synthesize glandular tissue DMs from "for presentation" mammograms. First, the breast region is located using a dedicated object detector network. Next, a generative adversarial network is used to obtain synthetic density maps, which are useful for evaluating not only the glandular tissue distribution but also the total glandular tissue volume within the breast. Results show that synthetic DMs achieve a structural similarity index of SSIM = 0.93 ± 0.06 with respect to real images. Similarly, the shared information between real and synthetic images, computed using the histogram intersection, corresponds to HI = 0.84 ± 0.10, while the average pixel difference represents only 3.85 ± 2.78% of breast thickness. Furthermore, the glandular tissue volume (GTV) obtained from the synthetic density maps shows a strong correlation with the value provided by the real ones (ρ = 0.89 [CI 0.87 to 0.91]). In conclusion, generative deep learning models can be useful to evaluate breast composition, from local to global tissue distribution.
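The histogram intersection metric used to compare real and synthetic maps can be sketched in a few lines (normalization to unit mass is a common convention; the paper may normalize differently):

```python
def histogram_intersection(h1, h2):
    """Histogram intersection between two histograms, normalized so that
    identical distributions give HI = 1.0."""
    s1, s2 = float(sum(h1)), float(sum(h2))
    return sum(min(a / s1, b / s2) for a, b in zip(h1, h2))
```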
This study proposes a method to use longitudinal breast cancer screening data to develop a 1- to 4-year breast cancer risk prediction model. It uses transfer learning from an open-source breast cancer detection model, an autoencoder to perform dimensionality reduction, and an LSTM network to incorporate the sequential data. The study utilizes a labelled dataset of 846 patients with up to five different mammography screening exams. The exams were taken on three systems from the vendor Siemens, and the images are of the "FOR PRESENTATION" type. The dataset contains 423 low-risk cases and 423 high-risk cases. A breast cancer detection model was used to obtain a latent representation of features extracted from the screening images. Dimensionality reduction was performed on the latent space using an autoencoder architecture. The reduced latent space was then mapped to 1- to 4-year breast cancer risk with an LSTM model. The model achieved an AUC of 0.74 for differentiating high- and low-risk cases, outperforming the Tyrer-Cuzick model. At the reference specificity operating point of 85.4% from the Tyrer-Cuzick model, the longitudinal model achieves a sensitivity of 60%, outperforming a similar model trained on only a single exam per patient. The incorporation of longitudinal data into breast cancer risk assessment models can increase sensitivity to underlying patterns that are correlated with breast cancer and therefore improve breast cancer screening strategies.
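Reporting sensitivity at a reference specificity operating point is a standard comparison; one way to compute it from held-out scores is sketched below (the threshold convention, negatives scoring at or below the threshold count as correct, is our illustrative choice):

```python
import math

def sensitivity_at_specificity(pos_scores, neg_scores, target_specificity):
    """Sensitivity at the lowest threshold achieving at least the target
    specificity (negatives scoring <= threshold count as correct)."""
    neg = sorted(neg_scores)
    n = len(neg)
    k = max(math.ceil(target_specificity * n) - 1, 0)
    thr = neg[k]
    return sum(1 for s in pos_scores if s > thr) / len(pos_scores)
```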
Dynamic contrast-enhanced breast CT (DCE-bCT) is a novel functional imaging technique that captures the wash-in and wash-out of an iodinated contrast agent in the breast. This information could be helpful for tumor diagnosis, treatment planning, and response monitoring. However, the optimal acquisition and reconstruction protocol must be determined before clinical implementation. Therefore, this research aims to create a dynamic breast phantom that can simulate clinically relevant time-intensity curves (TICs) with known ground truth. A simplified breast phantom was developed using 3D printing, with its outer shape based on a quality control phantom. Polylactic acid was chosen for its x-ray attenuation properties similar to those of skin. The phantom was filled with olive oil to simulate fatty tissue. The phantom was connected to a perfusion setup consisting of two syringe pumps to inject and withdraw contrast or water, tubing, and a container with a mixer. The setup was used with the DCE-bCT system and an Iomeron contrast solution. The theoretical curve was compared to the DCE-bCT-estimated curve and showed a similar shape but a different maximum enhancement (8.62 vs. 6.97 mg I/mL). In addition, four types of clinically relevant TICs were simulated using potassium iodide as a contrast agent and monitored using an in-line optical spectroscopy system. To achieve accurate and repeatable TICs, the next step is to program the syringe pumps. Thereafter, the setup will be used in combination with a to-be-developed tumor phantom for further optimization of DCE-bCT.
A direct-indirect dual-layer flat-panel detector (DI-DLFPD) was recently proposed to mitigate motion artifacts in dual-energy (DE) breast imaging. In this work, we developed a cascaded linear system model to predict the imaging performance of the DI-DLFPD and applied it to the lesion detection task in contrast-enhanced digital mammography. A CsI scintillator was used in the back-layer (BL) detector to acquire high-energy (HE) images, and the random variations of optical gain and blur in CsI were considered in the noise propagation. The optical parameters were estimated from single x-ray imaging results previously obtained by our lab. The pre-sampling modulation transfer function and normalized noise power spectrum in low-energy and HE images were modeled to derive the spatial-frequency-dependent signal and noise in DE images. The detectability index (d') of iodinated objects was calculated as a function of the thicknesses of the Ag filter, front-layer (FL) a-Se detector, and BL CsI scintillator. Reasonable agreement between modeled performance and measurements demonstrated the feasibility of applying the model to predict imaging performance. The results showed that employing a 100 μm thick Ag filter with a 200 μm thick a-Se layer in the FL and a 400 μm thick CsI in the BL yields d' close to the optimum for this task. In the future, we will extend this model to additional tasks, such as microcalcification detection, to guide the optimization of the DI-DLFPD breast imaging system design.
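For readers unfamiliar with the detectability index, a generic 1-D non-prewhitening form can be sketched from sampled spectra (a textbook formulation under our own simplifying assumptions, not necessarily the exact observer model of this work):

```python
import math

def dprime_npw(df, task_w, mtf, nnps):
    """Non-prewhitening detectability index from sampled 1-D spectra:
    d'^2 = (sum W^2 MTF^2 df)^2 / (sum W^2 MTF^2 NNPS df),
    where W is the task function, sampled at spacing df (1/mm)."""
    s2 = [w * w * m * m for w, m in zip(task_w, mtf)]  # |signal spectrum|^2
    num = (sum(s2) * df) ** 2
    den = sum(v * n for v, n in zip(s2, nnps)) * df
    return math.sqrt(num / den)
```

Sweeping such a model over filter and layer thicknesses is what lets the study locate a near-optimal design point.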
Four-dimensional dynamic contrast-enhanced breast CT (4D DCE-bCT) holds potential for high spatio-temporal resolution imaging for characterization and monitoring of breast tumors. This study presents a dedicated phantom-based evaluation of the accuracy of dynamic iodine concentration quantification in 4D DCE-bCT. A breast CT (bCT) system was adapted for extended acquisition times, and the x-ray spectrum was optimized (65 kV / 0.25 mm Cu). Additionally, reconstruction and correction algorithms were developed for accurate iodine quantification. The imaging sequence involved a 10-second pre-contrast scan with 360 projections, followed by two 100-second post-contrast scans, each with 400 projection images over 10 rotations, with 10 seconds between each scan. In this experiment, we aimed to quantitatively assess the changes in iodine concentration while a time-varying concentration of iodine (range 0.5 to 10 mg I/mL) was pumped through a 5 mm diameter tube in an olive-oil breast phantom. Pre- and post-contrast images were scatter-corrected and reconstructed using a polychromatic iterative reconstruction (IMPACT) combined with PICCS. This process yielded a 38-frame virtual monoenergetic (30 keV) image sequence at 5-second intervals. To verify the perfusion curve accuracy, a 5 × 5 × 10 voxel VOI was compared against the known true iodine concentration. Linear fits to the results showed good precision with some underestimation of the true concentration (wash-in: slope = 0.7530, offset = +0.5208, R² = 0.985; wash-out: slope = 1.012, offset = −0.8295, R² = 0.987). These findings indicate the potential of 4D DCE-bCT to provide quantitatively accurate estimates of iodine concentration.
Optical imaging utilizes light to analyze biological tissues in detail, non-invasively and without harmful radiation. Examples include ultrasound optical tomography and photoacoustic imaging; both use a limited number of wavelengths. Diffuse reflectance spectroscopy (DRS), another optical technique, covers a continuous wavelength range, but without generating an image. This study focuses on extended-wavelength DRS (450 to 1550 nm) to compare healthy breast tissue with different subgroups of breast cancer. Analysis of 13 breast specimens with invasive ductal or lobular carcinoma reveals distinct optical profiles in tumor subgroups compared to healthy tissue. However, absorption and scattering patterns are similar among the tumor subgroups.
Following a number of reports of poor image quality in mammograms with particularly low compressed breast thicknesses in the UK National Health Service Breast Screening Programmes, a pilot study was undertaken to more formally assess and quantify the problem. Stratified random sampling was used to select images from a large database of mammograms (OPTIMAM). Visual grading characteristic curves were used to compare image quality between mammograms with compressed breast thicknesses in two ranges: 55 mm to 65 mm inclusive, and less than or equal to 20 mm. It was found that breasts with a compressed thickness of 20 mm or less were, on average, ranked lower for image quality than breasts in the 55 mm to 65 mm thickness range. Evidence was found indicating that in some cases the poor image quality was a result of insufficient dose under automatic exposure control. The most extreme cases contained no useful clinical information at all, and arguably the affected clients will not have benefitted from attending breast screening. There is evidence that automatic exposure control systems struggle to deliver high enough exposures for dense breasts with low compressed breast thicknesses. For such cases, higher-dose manual exposure factors may be beneficial. If AI readers are adopted in the future, care must be taken to ensure that small, under-served subgroups such as this are not lost in the overall performance statistics.
In recent years, with the development of digital technology, radiologists can easily compare bilateral and temporal mammograms and use the differences they find as references for diagnosis. In this study, we examined the potential clinical effectiveness of a subtraction processing system for mammograms in comparative interpretation, performing a subgroup analysis of right-left subtraction-processed mammograms registered with VoxelMorph. Breast density and compressed breast thickness, both of which affect the depiction of the mammary glands, were used for the subgroup analysis. We used normal mediolateral oblique mammograms: 1,000 cases for training, 100 for validation, and 500 for testing. The horizontally flipped left mammogram was aligned with the right mammogram using VoxelMorph. The 500 test cases were classified equally into four groups by the breast density and thickness determined from the right mammograms. Using the average sum of absolute differences (SAD) between the right mammogram and the transformed left mammogram as the objective index, the highest-density group had the lowest average SAD (0.0482) and the thinnest compressed-breast-thickness group had the lowest average SAD (0.0381). These results show that the accuracy of subtraction processing was high, especially in dense breasts. The high accuracy in dense breasts, where false-negative findings are common, suggests the potential clinical effectiveness of the subtraction processing system for mammograms.
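The objective index is a mean absolute pixel difference computed after registration; a minimal numpy sketch (the deformable registration step itself, done with VoxelMorph in the study, is not reproduced here):

```python
import numpy as np

def sad(right, left_registered):
    """Average sum of absolute differences per pixel between the right
    mammogram and the transformed (registered) left mammogram."""
    r = np.asarray(right, dtype=float)
    l = np.asarray(left_registered, dtype=float)
    return float(np.mean(np.abs(r - l)))

def flip_left(left):
    """Horizontally flip the left view so it roughly matches the right
    view before deformable registration is applied."""
    return np.fliplr(np.asarray(left))
```

Lower SAD means the registered left view more closely matches the right view, which is why it serves as an accuracy proxy for the subtraction result.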
The aim was to undertake a national survey of the setup of mammography imaging systems in the UK, with particular interest in image processing and software versions. We created a program that extracts selected tags from the DICOM header. 28 medical physics departments ran the program on processed images of the TORMAM phantom acquired since 2023, producing data for 497 systems across 7 different models of mammography system. Each model currently in use had between 2 and 7 different versions of software for the acquisition workstation. Each of the systems had multiple versions of image processing settings, and a preliminary investigation with TORMAM demonstrated large differences in the appearance of the image for the same x-ray model. The Fujifilm, GE and Siemens systems showed differences in the setup of the dose levels. There were also differences in the paddles used and grid type. Our snapshot of system setup showed that there is potential for images to appear differently according to the settings seen in the headers. These differences may affect the outcomes of AI and also of human readers. The introduction of AI must therefore take into consideration these differences, and the inevitable future changes of settings. There are responsibilities on AI suppliers, physics, mammographic equipment manufacturers, and breast-screening units to manage the use of AI and ensure the outcomes of breast screening are not adversely affected by the setup of equipment.
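The tag-extraction step can be sketched as below. In practice the headers would be read with pydicom (`pydicom.dcmread`); to keep the sketch self-contained, each header is represented as a plain keyword-to-value mapping, and the tag list is illustrative rather than the surveyed list:

```python
# Illustrative tag list -- not the actual list used in the survey.
SELECTED_TAGS = ["Manufacturer", "ManufacturerModelName",
                 "SoftwareVersions", "OrganDose"]

def extract_tags(header, tags=SELECTED_TAGS, missing=""):
    """Pull the selected keywords from one image header into a flat record,
    substituting a placeholder when a tag is absent."""
    return {t: header.get(t, missing) for t in tags}

def tabulate(headers, tags=SELECTED_TAGS):
    """One record per system/image, ready to write out as CSV rows."""
    return [extract_tags(h, tags) for h in headers]
```

Collecting the same tags from every site makes the per-model software and processing versions directly comparable across the 497 systems.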
Digital Breast Tomosynthesis (DBT) is an imaging modality with improved breast tissue characterization, which is crucial for early cancer detection. Yet research on dense tissue segmentation in DBT is scarce, challenged by multi-slice variation, blurring, and out-of-plane artifacts. This study introduces and validates a semi-automatic approach for breast density segmentation in DBT, aiming to enhance dataset creation and improve deep learning models for accurate breast density segmentation and volume estimation. Our semi-automated method begins with a radiologist annotating the central slice of the DBT series using a polygon mask, accompanied by the selection of a threshold value to accurately segment dense tissue portions. This initial annotation serves as a reference for extending the mask segmentation to all slices in the series, with threshold values iteratively adjusted for each slice to ensure precise and consistent segmentation. We analyzed the DBT series of 100 patients (13,094 slices), validating our approach against an independent expert radiologist's assessments using Pearson's correlation. For comparison, we evaluated a fixed-threshold technique, which applies a manually selected threshold from the central slice to all slices in the DBT series, and a 2D CNN algorithm trained on 2D mammograms. Our semi-automated method showed the highest correlation (0.855–0.858, CI 0.813–0.89), surpassing the 2D CNN (0.617–0.645, CI 0.524–0.719) and fixed-threshold (0.506–0.794, CI 0.39–0.84) techniques.
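The per-slice threshold adjustment could look something like the following sketch: the central-slice threshold sets a target dense-tissue fraction inside the annotated mask, and each slice's threshold is nudged until its own fraction matches. The fixed step size and tolerance are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def propagate_threshold(volume, mask, central_idx, t0, step=0.01, iters=50):
    """Extend a central-slice threshold t0 to all slices of a DBT volume
    (slices, rows, cols) by adjusting each slice's threshold so that its
    dense fraction inside the polygon mask matches the central slice."""
    target = np.mean(volume[central_idx][mask] >= t0)
    thresholds = {}
    for k in range(volume.shape[0]):
        t = t0
        for _ in range(iters):
            frac = np.mean(volume[k][mask] >= t)
            if abs(frac - target) < 1e-3:
                break
            # raise the threshold if too much tissue is segmented, else lower it
            t += step if frac > target else -step
        thresholds[k] = t
    return thresholds
```

Matching the segmented fraction slice-to-slice is one simple way to keep the segmentation consistent despite intensity drift between reconstructed slices.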
In this study, we assess the impact of an image restoration pipeline, designed for digital mammography, on the detectability of microcalcifications of different sizes across varied radiation exposures. The restoration pipeline first denoises the image using a Poisson-Gaussian noise model that incorporates quantum and electronic noise, then appropriately merges the noisy and denoised images to achieve a signal-to-noise ratio (SNR) comparable to an image obtained at a higher radiation dose. We created a database of mammographic images acquired at radiation doses between 50% and 200% of the automatic exposure control (AEC) dose using a physical anthropomorphic breast phantom. Clustered microcalcifications with diameters ranging from 190 μm to 390 μm were artificially inserted into the phantom images in regions of increased density. The Channelized Hotelling Observer (CHO) was employed as the model observer (MO) to evaluate the detectability of the microcalcifications. A pilot study was conducted to adjust the percentage of correct detection to approximately 75% for microcalcifications with a diameter of 270 μm at the AEC dose. We applied the restoration pipeline to the image dataset and calculated the percentage of correctly detected signals (PC) using the MO in a four-alternative forced choice (4-AFC) study. The results indicated a PC enhancement of up to 10% when applying restoration to simulate acquisitions at twice the AEC dose. Additionally, for images acquired at radiation doses below the AEC, our results demonstrated a potential dose reduction of up to 22.4% without compromising microcalcification detectability. The detection of microcalcifications with a diameter of 390 μm remained unaffected by variations in radiation dose.
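The merging step can be sketched as a residual blend: if the denoised image is treated as approximately noise-free, scaling the noise residual by 1/sqrt(f) mimics an acquisition at f times the dose, since quantum noise standard deviation scales with the inverse square root of exposure. A simplified sketch of that one step, not the full pipeline:

```python
import numpy as np

def merge_for_dose(noisy, denoised, dose_factor):
    """Blend the noisy and denoised images so the residual noise matches an
    acquisition at dose_factor times the original dose. Assumes the denoised
    image is (approximately) noise-free, so scaling the residual by
    1/sqrt(dose_factor) scales the noise standard deviation accordingly."""
    w = 1.0 / np.sqrt(dose_factor)
    noisy = np.asarray(noisy, dtype=float)
    denoised = np.asarray(denoised, dtype=float)
    return denoised + w * (noisy - denoised)
```

With dose_factor = 2 the residual noise amplitude drops by a factor of sqrt(2), matching the SNR of a double-dose acquisition under the stated assumption.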
Breast cancer (BC) remains the deadliest cancer for women worldwide. Neoadjuvant immunotherapies have demonstrated improved responses for some patients. Unfortunately, no robust method exists for predicting which patients will respond to immunotherapy. Imaging of diagnostic BC biopsies has revealed that the spatial distribution of tumor infiltrating lymphocytes (TILs) and other immune cells within and around the tumor can help stratify BC patients into responders and non-responders. However, clinical microscopy cannot differentiate between subtypes of TILs; numerous markers are needed to capture the heterogeneity of cancer cells and immune cells in the TME. Highly multiplexed fluorescence microscopy, or high-plex IF, has emerged as a workhorse in data collection for spatial proteomics. We present a pilot study of the TME of BC patients treated with neoadjuvant immunotherapy. Specifically in this abstract, we discuss computer vision methods for analyzing the cellular constituents probed in these complex and rich images. We discuss image stitching and channel registration for high-plex modalities, deep learning algorithms for cell detection and segmentation, and pseudo-spectral angle mapping (pSAM) for cell classification. We present strategies for accurate quantification of these images, facilitating investigations into immune activity in breast tumors with high phenotypic accuracy.
Immune phenotype data, specifically the densities and spatial distribution of immune cells, are now frequently included in the clinical pathology report, as these features of the cells in the tumor microenvironment (TME) have been shown to be associated with prognosis. In addition, immune therapeutics, which aim to manipulate the patient's immune system to kill cancer cells, have recently been approved for treatment of triple-negative breast cancers (TNBCs). Thus, quantifying the immune phenotype of the cancer could be important both for prognostication and for prediction of therapy response. We have studied the immune phenotype of 42 breast cancers using immunofluorescence protein multiplexing and quantitative image analysis. After sectioning, formalin-fixed paraffin-embedded tissues were sequentially stained with a panel of fluorescently labelled antibodies and imaged with the multiplexer (Cell DIVE, Leica Biosystems). Composite images of antibody-stained sections were then analysed using specialized digital pathology software (HALO, Indica Labs). Binary thresholding was conducted to identify and quantify densities of various immune lineage subsets (T lymphocytes and macrophages). Their cellular localisation was mapped, and the spatial features of cellular arrangement were evaluated using a k-nearest neighbor graph (KNNG) method and Louvain community-proximity clustering. The spatial relationship of various immune and cancer cell types was quantified to assess whether cellular arrangements and structures differed among breast cancer subtypes. Our work demonstrates the use of molecular and cellular imaging in quantifying features of the tumor microenvironment for breast cancer classification, and the application of the KNNG in studying spatial biology.
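The KNNG construction itself is straightforward once cell centroids and phenotype labels are available; a brute-force numpy sketch (the study used dedicated software, and `mixing_fraction` is an illustrative readout rather than a metric named in the abstract):

```python
import numpy as np

def knn_graph(points, k):
    """Indices of the k nearest neighbours of each cell centroid
    (brute-force pairwise distances; fine for thousands of cells)."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a cell is not its own neighbour
    return np.argsort(d, axis=1)[:, :k]

def mixing_fraction(neighbours, labels, cell, other_type):
    """Fraction of one cell's graph neighbours carrying another phenotype
    label -- a simple spatial-mixing readout on the KNNG."""
    labels = np.asarray(labels)
    return float(np.mean(labels[neighbours[cell]] == other_type))
```

Community detection (e.g. Louvain clustering) would then operate on this neighbour structure to find spatially coherent cellular neighbourhoods.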
Digital breast tomosynthesis (DBT) provides pseudo-3D images by acquiring limited angle projections, thus alleviating an inherent limitation of tissue superposition in digital mammography (DM). DBT performance, however, may have limitations in terms of recovery of low-contrast structures and accuracy of material decomposition due to scatter radiation. Employing an anti-scatter grid in DBT can mitigate scatter radiation; however, this would lead to the loss of primary radiation. To compensate for the loss, an increased radiation dose is necessary. Additionally, it requires extra manufacturing costs and adds to the system’s complexity. In this work, we propose a deep-learning approach inspired by asymmetric scatter kernel superposition to estimate scatter in DBT. Unlike conventional kernel-based methods which estimate the scatter field based on the value of an individual pixel, the proposed method generates the scatter amplitude and width maps through a network. Additionally, the asymmetric factor map is also estimated from the network to accommodate local variations in conjunction with the object thickness and shape variation. Experiments demonstrate the superiority of the proposed approach. We believe the clinical impact of the proposed method is high since it can negate the additional radiation dose and the system complexity associated with integrating an anti-scatter grid in the DBT system.
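In the symmetric special case, kernel superposition reduces to spreading a scatter fraction of each pixel's signal with a normalized Gaussian. The sketch below uses a single global amplitude and width, whereas the proposed network predicts per-pixel amplitude, width, and asymmetry maps:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1-D Gaussian kernel."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def estimate_scatter(primary, amplitude, sigma=3.0):
    """Symmetric scatter-kernel superposition: each pixel spreads a scatter
    fraction `amplitude` of its signal with a Gaussian of width `sigma`
    (in pixels). Global constants stand in for the per-pixel maps here."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    img = np.asarray(primary, dtype=float)
    pad = len(k) // 2
    # separable convolution with edge padding: rows, then columns
    blur_rows = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode="edge"), k, mode="valid"), 1, img)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode="edge"), k, mode="valid"), 0, blur_rows)
    return amplitude * blurred
```

The scatter-corrected projection would then be the measured image minus this estimate; the paper's contribution is learning spatially varying, asymmetric kernel parameters instead of the global ones assumed above.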
Standard-of-care breast MRI primarily includes T1-weighted (T1w) fat-suppressed images; nonfat-suppressed images are not always included but may be needed to detect fat necrosis or fatty lesions. With the advent of abbreviated MRI to increase the accessibility of MRI for breast cancer screening, it is unlikely that imaging exams will contain both fat- and nonfat-suppressed images. Additionally, nonfat-suppressed images are integral to downstream quantitative analyses. Deep learning has seen increased use in medical imaging for contrast synthesis; however, there is limited work in the breast. This study aims to develop a reproducible, modular deep learning framework, Sat2Nu, for generating nonfat-suppressed images from fat-suppressed inputs with limited training data. We retrospectively analyzed 2D slices from 643 bilateral sagittal T1w MRI screening exams with corresponding fat- and nonfat-suppressed scans from the University of Pennsylvania. One central slice was selected from each breast to yield 1,286 2D images. We trained a U-Net architecture on the entire dataset, with nonfat-suppressed images serving as the ground truth, and randomly selected 20% of the data as an in-distribution validation set. The normalized root mean square error (NRMSE) and structural similarity index (SSIM) were used as performance metrics. We achieved a training NRMSE and SSIM of 0.143 and 0.855, respectively; the corresponding validation metrics were 0.099 and 0.889. In conclusion, our preliminary results demonstrate that the network has the representational capacity to learn nonfat-suppressed contrast from fat-suppressed MRIs, which could develop into a promising solution for generating missing scans in the abbreviated setting and for downstream quantitative analyses dependent on nonfat-suppressed images. Current efforts include external validation and investigating other generative networks and loss functions to improve generalizability. Importantly, we are focusing on designing a reproducible pipeline that would allow future users to easily implement different architectures.
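The two performance metrics can be sketched directly; note that the SSIM below uses a single global window rather than the usual locally windowed average, so it only approximates the reported metric:

```python
import numpy as np

def nrmse(pred, target):
    """Root mean square error normalized by the target's intensity range."""
    p, t = np.asarray(pred, float), np.asarray(target, float)
    return float(np.sqrt(np.mean((p - t) ** 2)) / (t.max() - t.min()))

def ssim_global(pred, target, data_range=1.0, k1=0.01, k2=0.03):
    """Single-window (global) SSIM; the standard metric averages this
    statistic over local windows, so treat this as an approximation."""
    p, t = np.asarray(pred, float), np.asarray(target, float)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mp, mt = p.mean(), t.mean()
    vp, vt = p.var(), t.var()
    cov = ((p - mp) * (t - mt)).mean()
    return ((2 * mp * mt + c1) * (2 * cov + c2)) / ((mp**2 + mt**2 + c1) * (vp + vt + c2))
```

NRMSE penalizes intensity errors while SSIM rewards preserved local structure, which is why the two are commonly reported together for synthesis tasks.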
Artificial intelligence has proven useful in the diagnosis of breast cancer from screening mammograms. This paper reports a computational method to simulate training data compatible with the OMI-DB database, one of the largest databases of mammography images for medical research worldwide. Different mammography equipment has varying built-in quality that affects the noise and sharpness properties of an image. The simulation alters an image to appear as if it had been taken on a different detector, at a different dose, or with a different image quality. Building on previous work, Python code has been developed to isolate the electronic, quantum and structural noise coefficients associated with digital mammography detectors. A fit between noise power spectra and air kerma is used to find the noise coefficients, which are used with a random phase contribution to create noise images. These noise images are combined with flat-field signals to form simulated images. To simulate the results obtained at one dose from another, a dose factor is introduced to scale the noise contributions. Simulating a mammogram to appear as if taken under different conditions allows a more general training dataset to be created with minimal loss of biological information and without the ethical concerns of taking multiple images of a breast. A tailored dataset could be generated to facilitate assessment of the performance of artificial intelligence tools for breast cancer detection or breast density calculation.
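Two of the computational steps lend themselves to short sketches: a quadratic fit of NPS against air kerma to separate the three noise sources (under one common convention for the absolute signal NPS: electronic constant, quantum proportional to kerma, structural proportional to kerma squared), and a random-phase inverse FFT that turns a target power spectrum into a noise image. The normalization convention below (pixel variance equal to the mean of the target NPS) is an assumption for illustration:

```python
import numpy as np

def fit_noise_coefficients(kerma, nps_at_f):
    """Quadratic fit NPS(K) = e + q*K + s*K**2 at one spatial frequency:
    e = electronic, q = quantum, s = structural noise coefficient."""
    s, q, e = np.polyfit(np.asarray(kerma, float), np.asarray(nps_at_f, float), 2)
    return {"electronic": e, "quantum": q, "structural": s}

def noise_image(target_nps, rng):
    """Random-phase noise whose power spectrum follows target_nps (2-D).
    Taking the real part halves the power, hence the sqrt(2); the overall
    scaling makes the pixel variance equal mean(target_nps)."""
    phase = np.exp(2j * np.pi * rng.random(target_nps.shape))
    field = np.sqrt(np.maximum(target_nps, 0.0)) * phase
    return np.sqrt(2.0 * target_nps.size) * np.real(np.fft.ifft2(field))
```

Scaling the quantum and structural terms by the appropriate powers of the dose factor, then generating noise from the rescaled spectrum, is the essence of simulating one dose from another.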
Steadily increasing use of computer-generated models sets high demands on the realistic representation of breast tissue and abnormalities. Previously, we demonstrated the use of Perlin noise to simulate breast lesions. Here, we expand the previous model by simulating heterogeneous lesion composition, demonstrating a new approach to simulating 3D soft tissue lesions with additional benefits of Perlin noise in the context of virtual clinical trials. Three simulation methods have been developed: Method I represents a homogeneous lesion made up of glandular tissue (our previous model). Method II uses a Euclidean distance transformation to provide a layered shell construction. In Method III we assign a range of weighting functions to achieve several progressively smaller lesion volumes (“shells”). For Methods II and III, details of the background tissue were preserved within the lesion, and higher attenuation was assigned compared to the background tissue. In a preliminary evaluation, three radiologists gave their expert opinions on the reconstructed DBT slices of the three lesions generated with Methods I–III. Methods II and III proved capable of generating lesions with complex composition. For Method II the lesion appeared blended with the background tissue, whilst the lesion generated with Method III had new internal structures resembling neoplastic regions within the lesion. There was a consensus among the radiologists that the lesion in Method III looked most realistic, due to its slightly more spiculated and heterogeneous appearance compared to Methods I and II. Optimization of the simulation methods is ongoing.
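The layered-shell construction can be sketched with a Euclidean distance transform: distance to the lesion border splits the mask into concentric bands, each of which can then receive its own attenuation weighting. A brute-force sketch (the shell count and equal-width bands are illustrative choices, not the paper's parameters):

```python
import numpy as np

def edt_brute(mask):
    """Euclidean distance from each inside voxel to the nearest background
    voxel (brute force; fine for small demo volumes)."""
    inside = np.argwhere(mask)
    outside = np.argwhere(~mask)
    d = np.zeros(mask.shape)
    for p in inside:
        d[tuple(p)] = np.min(np.linalg.norm(outside - p, axis=1))
    return d

def layered_shells(mask, n_shells):
    """Split a lesion mask into concentric equal-width distance bands:
    shell 0 is the outermost rim, n_shells-1 the core; background is -1."""
    d = edt_brute(mask)
    labels = np.full(mask.shape, -1, dtype=int)
    bands = (d[mask] / d.max() * n_shells).astype(int)
    labels[mask] = np.minimum(bands, n_shells - 1)
    return labels
```

For production-scale volumes, `scipy.ndimage.distance_transform_edt` would replace the brute-force loop; the banding logic stays the same.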
Intra- and inter-patient tissue variability are seldom implemented in digital phantoms for imaging simulations, which can lead to issues when developing and evaluating material differentiation methods. In this work, we evaluated two methods for generating variability in tissue attenuation properties based on measured properties of human tissue. Our goal is to find a sampling method that generates attenuation curves within measured distributions. The first approach parameterizes tissue attenuation curves as a linear combination of aluminum and PMMA. The second approach is based on the Midgley decomposition model, where the attenuation curve is expressed in terms of five coefficients. Attenuation curves were generated by sampling the two- and five-parameter spaces, and they were compared to previous measurements in ex-vivo adipose tissue acquired at 8, 11, 15, 20 and 30 keV. The average differences of the sampled curves relative to the measurements were 1.68% (2-parameter) and 1.31% (5-parameter), and the absolute differences in coefficients of variation were under 2% for both methods. These results indicate that both methods captured the variability present in measured attenuation curves. This study provides preliminary insights into the effectiveness of two methods for adding tissue variability to imaging simulations.
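The first parameterization is a straightforward least-squares problem over the sampled energies; a sketch (the aluminum/PMMA basis values in the usage below are placeholders, not measured data):

```python
import numpy as np

def fit_two_material(mu_tissue, mu_al, mu_pmma):
    """Least-squares weights (a, b) such that, over the sampled energies,
    mu_tissue(E) ≈ a*mu_al(E) + b*mu_pmma(E). Sampling (a, b) around such
    fitted values then generates plausible attenuation variability."""
    A = np.column_stack([np.asarray(mu_al, float), np.asarray(mu_pmma, float)])
    coef, *_ = np.linalg.lstsq(A, np.asarray(mu_tissue, float), rcond=None)
    return coef  # (a, b)
```

The five-parameter Midgley fit proceeds the same way, only with a five-column basis matrix instead of two.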
Prospective clinical trials on breast cancer screening take many years to provide results and are very costly; it is simply not feasible to answer all important questions by conducting a study. In addition, there are many inter-related variables that can affect screening outcomes, and it is often not possible to study these individually through trials. Microsimulation modeling provides a practical alternative that allows key screening outcomes to be estimated in response to changes in the underlying human and technical variables. OncoSim Breast is part of a suite of specialized cancer microsimulation models developed by Statistics Canada in collaboration with The Canadian Partnership Against Cancer. The model simulates a cohort of women from birth to death through individual histories; at its heart is a mathematical function describing tumor growth. As women progress through life, at each time point calculations are performed through random number selection, weighted by empirical probability data for each phenomenon in the simulation. The model is adapted to a particular problem by creating “scenarios” specifying the assumptions regarding the members of the cohort and any screening intervention(s) and treatment. We demonstrate how it can be helpful in optimizing screening regimens, predicting the impact of technical innovations and improvements, and studying other problems where trials would be difficult or impossible to perform.
The required realism of virtual breast phantoms is likely to depend on the imaging modality and the task. This work investigates the extent to which the VICTRE breast models are suitable for the evaluation of synthetic mammography (SM) in terms of statistical texture properties and microcalcification detection performance. First, a power spectrum analysis was performed on digital breast tomosynthesis (DBT) and SM images of patients and virtual phantoms, including all four breast density categories. The fitted power law exponent β was used to characterize breast texture. Next, calcification clusters were simulated in patient and phantom backgrounds acquired with three different DBT dose distributions applied over the projections. A human observer detectability study was performed. The power spectrum analysis showed slightly lower power law exponents for patients compared to virtual breast phantoms. The trend of β across different density categories is similar for patient and phantom SM images. Additionally, trends in the detectability study with virtual phantoms were similar to those in the patient study; however, the absolute performance values and level of significance between the different dose distributions were not identical. Nevertheless, this suggests that the VICTRE breast phantoms are potentially valuable replacements for patients in system optimization studies for microcalcification detection in SM and DBT.
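The texture characterization boils down to fitting a power law to the radially averaged power spectrum; a numpy sketch for a square patch (the binning and fit range are common choices, not necessarily those of the study):

```python
import numpy as np

def power_law_exponent(image):
    """Estimate beta in P(f) ~ 1/f**beta from the radially averaged power
    spectrum of a square image patch."""
    img = np.asarray(image, float)
    n = img.shape[0]
    ps = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    y, x = np.indices(ps.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    # average the power within integer-radius bins
    radial = np.bincount(r.ravel(), ps.ravel()) / np.maximum(np.bincount(r.ravel()), 1)
    f = np.arange(1, n // 2)  # skip DC, stay below Nyquist
    slope, _ = np.polyfit(np.log(f), np.log(radial[1:n // 2]), 1)
    return -slope
```

Mammographic textures typically yield β around 3; pure white noise yields β near 0, which is the sanity check used below.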
Several clinical image databases are currently available to support scientific research in the medical field. These images are generally used to validate studies based on measuring the sensitivity and specificity of a particular clinical task. In the case of digital mammography, the radiation dose directly influences the quality of the image and consequently the performance of radiologists. Therefore, it is important to conduct studies to find a balance between image quality and radiation dose. Image processing methods are typically employed to optimize this relationship. For the evaluation of these methods, it is crucial to have a mammographic image database with specific characteristics, currently unavailable for scientific use. For example, this image database should contain sets of images from the same patient acquired at different radiation doses with breast lesions in known locations. This is achievable using computational methods for noise and microcalcification insertion into pre-acquired clinical images. In this context, the present work aims to present a cloud-based application for on-demand generation of a clinical mammographic image database with different radiation doses and breast lesions. From a set of pre-acquired clinical digital mammograms, it is possible to create N databases with different characteristics. This technique can also be considered as data augmentation.
Digital breast tomosynthesis (DBT) has become a standard screening tool for breast cancer. However, its diagnostic performance is still limited compared to contrast-enhanced MRI, especially in women with dense breasts. Ultrafast dynamic contrast-enhanced (UF DCE) MRI and diffusion-weighted imaging (DWI) offer a shorter scanning time than conventional MRI and have shown promise in previous studies. However, few reports have explored the added value of UF DCE-MRI and DWI over DBT in diagnosing breast lesions. This study aimed to assess the diagnostic performance of abbreviated MRI using UF DCE-MRI and DWI compared to DBT. The study included 53 lesions in 42 women who underwent the UF DCE-MRI protocol and DBT within three months. Two radiologists recorded the BI-RADS category and breast composition assessment of tomosynthesis, as well as the MR parameters of each lesion (MS and ADC). In addition to the inter-rater agreements, diagnostic performance was evaluated with the area under the receiver operating characteristic curve (AUC). The results showed good to excellent inter-rater agreement for the BI-RADS category and each parameter. The AUCs of the DBT BI-RADS and UF DCE-MRI BI-RADS categories were 0.84 and 0.90, respectively. The predictive model combining UF DCE-MRI BI-RADS with the ADC from DWI demonstrated an AUC of 0.95. In conclusion, abbreviated MRI using UF DCE-MRI and DWI may have the potential to add value to DBT diagnosis of breast lesions. Further research is needed to determine the role of abbreviated MRI in the diagnosis of breast lesions in a prospective screening setting.
Ultrasound Optical Tomography (UOT) combines the high-resolution imaging capability of ultrasound with measurements of light absorption and scattering properties of human tissue. This non-invasive technique could distinguish between cancerous and non-cancerous lesions inside the breast tissue, follow tumor shrinkage during pre-operative treatment, or provide information on blood oxygenation levels. Recent measurements of phantoms mimicking the optical properties of breast tissue with various lesions indicated that the technique can probe 50 mm deep through the tissue. This work concentrates on developing the UOT setup in transmission mode and discusses its advantages, limitations, and possible improvements.
Simultaneous digital breast tomosynthesis and mechanical imaging (DBTMI), a novel screening approach, combines anatomic DBT with functional analysis of the stress distribution on the compressed breast by mechanical imaging (MI). Preliminary studies suggest potential to reduce false positive findings. DBTMI requires alignment of the DBT and MI images. In this study, we have analyzed robustness to alignment variations in clinical DBTMI data. Our preliminary retrospective analysis included DBTMI of 31 women recalled from screening. We analyzed two aspects of image alignment: rotation and shift. To analyze the shift, we varied the position of the suspected abnormality by ±1 cm in the horizontal or vertical direction. To analyze the rotation, we varied the angle between the radiographic and MI images by ±1 degree for 18 women. We compared the relative mean pressure at the lesion area (RMPA) before and after variation. Varying the shift, we observed a 14.3%±12.2% difference in RMPA; averaged separately over biopsy-confirmed benign and malignant lesions, the differences were 16.2%±14.3% and 12.4%±10.2%, respectively. In nine of the 31 analyzed datasets, the shift could potentially change the clinical findings. Varying the rotation, we observed a 6.4%±4.9% difference in RMPA; averaging over biopsy-confirmed benign and malignant lesions yielded 5.8%±4.5% and 6.4%±4.8% differences, respectively. In two of the 18 DBTMI datasets, the rotational variation could change the clinical findings. The larger effect of the shift may be caused by the relatively large shift variation (±1 cm) compared to the size of the detected abnormalities. Analysis of more clinical DBTMI datasets and simulation studies are ongoing.
The purpose of this study was to investigate whether the lesion risk score provided by an AI system is influenced by the selection of exposure parameters. A breast phantom containing a lesion was imaged with digital mammography under different imaging conditions. The tube voltage, dose level, and anode-filter combination were varied relative to an exposure obtained with automatic exposure control. The organ dose for each image was extracted from the DICOM header. The images were analyzed with an AI system, which provided a lesion risk score (suspicion of malignancy) for each exposure condition. Correlations between the lesion risk score and the exposure conditions were investigated. The results showed that the organ dose had a strong impact on the lesion risk score: reducing the organ dose to a low level resulted in the AI system no longer detecting the lesion. Images of suboptimal quality may thus result in inaccurate AI system performance. In our preliminary analysis, the breast phantom and the lesion proved realistic enough to be analyzed by the AI system.
Anthropomorphic breast phantoms are necessary for image quality evaluation of breast x-ray systems under realistic conditions. 3D-printing techniques are an alternative for producing breast phantoms. This work aims to experimentally determine the effective x-ray attenuation coefficient (μeff) of 3D-printing materials (PLA, ABS and HIPS), well-known breast tissue-equivalent materials (CIRS Gland and CIRS Fat) and PMMA using a clinical breast x-ray imaging system. Three target-filter combinations were considered: W/Rh 29 kVp, W/Ag 30 kVp and W/Al 29 kVp. To validate the experimental method, effective attenuation coefficient values were also obtained with numerical methods and Monte Carlo simulations. Relative differences between experimental and numerical μeff were < 10% for the W/Rh and W/Ag acquisitions and around 40% for W/Al, because of scattered radiation. Relative differences between experimental and MC-simulated μeff were < 14% for the W/Rh and W/Ag acquisitions and around 2% for W/Al. The relative differences between the effective attenuation coefficient values of PLA and CIRS Gland, and of ABS and CIRS Fat, were found to be between 2 and 8%. Consequently, these materials are adequate candidates to mimic glandular and adipose breast tissue in the manufacture of anthropomorphic phantoms.
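The effective attenuation coefficient follows from the Beer-Lambert law, μeff = -ln(I/I0)/t. A minimal sketch with simulated transmission readings; the 0.55 cm⁻¹ slab is an invented stand-in, not a measured value for any of the materials above:

```python
import numpy as np

def mu_eff(i0, i, thickness_cm):
    """Effective linear attenuation coefficient from Beer-Lambert:
    I = I0 * exp(-mu_eff * t)  =>  mu_eff = -ln(I / I0) / t."""
    return -np.log(np.asarray(i) / np.asarray(i0)) / thickness_cm

# Hypothetical transmission readings for slabs of increasing thickness
thickness = np.array([1.0, 2.0, 3.0, 4.0])   # cm
i0 = 1000.0                                   # unattenuated signal
signal = i0 * np.exp(-0.55 * thickness)       # simulated slab material
mu = mu_eff(i0, signal, thickness)
print(mu)  # recovers ~0.55 cm^-1 for every thickness
```

In practice the experimental estimate deviates from this ideal whenever scattered radiation adds to the measured signal, which is consistent with the larger discrepancies reported for the W/Al acquisition.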
Contrast-enhanced mammography (CEM) is increasingly being implemented in clinical settings as a valuable technique for cancer detection. The doses delivered during CEM examinations, which arise from the low-energy (LE) and high-energy (HE) exposures required, are a major concern. This study compares the average glandular doses (AGD) delivered to patients in CEM examinations performed in two centers equipped with x-ray systems from the same vendor. Data on 32 and 67 patients were retrospectively collected in center 1 (C1) and center 2 (C2), respectively. Most of the enrolled patients were recalled after a suspicious lesion was detected during breast screening. In both centers, the mean age was 58 y and the mean compressed breast thickness (CBT) was 60 mm. AGD values were calculated following the European and IAEA protocols for the LE and HE exposures, which were summed to obtain the per-view AGD for the CEM. Per-patient AGD was computed for each center and for the pooled dataset. Per-view AGDs for the LE exposures do not exceed the acceptable values proposed in the European protocol. The LE (HE) contribution to the per-patient AGD-CEM is 68% (32%) for CBTs < 50 mm and 80% (20%) for CBTs ≥ 50 mm. Median per-patient AGD values at C1 and C2 for a bilateral CEM examination were 4.73 mGy and 5.64 mGy, respectively; for the pooled dataset the value is 5.51 mGy.
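The per-view and per-patient AGD bookkeeping can be sketched as follows. The view names are standard mammographic projections, but the dose values are invented for illustration and are not the reported measurements:

```python
# Hypothetical per-view AGD values (mGy) for one bilateral CEM exam:
# each view contributes a low-energy (LE) and a high-energy (HE) exposure.
views = {
    "RCC": (1.05, 0.42), "LCC": (1.01, 0.40),
    "RMLO": (1.12, 0.47), "LMLO": (1.08, 0.45),
}

# Per-view AGD is the sum of the LE and HE contributions
per_view_agd = {v: le + he for v, (le, he) in views.items()}

# Per-patient AGD sums all views of the examination
per_patient_agd = sum(per_view_agd.values())
le_total = sum(le for le, _ in views.values())
print(f"per-patient AGD: {per_patient_agd:.2f} mGy, "
      f"LE fraction: {le_total / per_patient_agd:.0%}")
```

The LE fraction computed this way is what the abstract reports as the 68% / 80% LE contribution for thin and thick breasts, respectively.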
The International Atomic Energy Agency (IAEA) assists its Member States in the application of nuclear technologies for human health and in ensuring quality and safety in medical uses of radiation, including x-ray breast imaging. Ensuring quality and safety entails support for diagnostic radiology medical physics. Relevant to the physics of x-ray breast imaging, the IAEA supports the medical physics profession by providing guidance on quality assurance (QA), dosimetry, and medical physics education and training, and by supporting the establishment of digital mammography facilities and the implementation of new technologies for x-ray breast imaging, through coordinated research activities and the development of training resources for medical physicists. Clinical deployment of Artificial Intelligence (AI)-based technologies in x-ray imaging highlights the need for educated and trained health professionals. The medical physicist is considered a key professional who needs to be involved as part of a multidisciplinary team to introduce, apply, quality-assure, and maintain AI applications effectively and safely in clinics. Recent IAEA activities are aimed at framing the roles and responsibilities of medical physicists in the implementation and utilization of AI in the medical uses of radiation, and at providing guidance to address the knowledge gap of current and future medical physicists in this field. The IAEA continues to support medical physics in x-ray breast imaging through coordinated research, the development of guidance and learning material, and the standardization and harmonization of quality assurance practices.
Breast cancer (BC) detectability depends on many factors: the type of cancer, breast-tissue-related factors, the choice and use of technology, and human factors. New imaging techniques should provide higher accuracy and fewer false negatives. To tailor any future virtual imaging trial (VIT), a detailed description of invasive BC lesions was undertaken. In this single-institution retrospective study, imaging characteristics of 100 consecutive invasive BCs diagnosed in our hospital were assessed in terms of a visibility score, BI-RADS descriptors, breast density, lesion size and location on all breast x-ray imaging techniques and ultrasound (US). Seventy-seven of these 100 invasive BCs were diagnosed using DBT in addition to FFDM and US, and in 29 cases MRI was performed. Not all imaging modalities perform equally well in visualizing invasive BC: 29 of the 77 lesions were poorly visible on FFDM, 9 on DBT, 34 on SM, and 11 on US. Four lesions were poorly visible on all of these modalities but clearly visible on MRI. The studied invasive lesions that are well visible on all modalities are mostly irregular spiculated lesions with a high density and, in this study, a median size of 18 mm. The poorly visible lesions are also mostly irregularly shaped, show more variation in their margins, and have a smaller median size of 12.5 mm. They are equally dense or denser than the background tissue and are in general present in slightly denser breasts. Two lesions were not visible on mammography due to the peripheral location of the invasive breast cancer: one was located sternally and one very peripherally in the axillary tail. Both lesions were visible on ultrasound. This database provides detailed information on the imaging characteristics of invasive BCs, which could be valuable input for VITs.
Breast density is an important consideration for breast cancer screening, as the amount of fibroglandular tissue in the breast can mask the detection of cancers. BI-RADS density grade estimates can show high inter-reader variability, prompting the need for an objective and reproducible assessment of breast density and tissue complexity. In this study, we investigate the utility of radiomic features to quantify texture and shape characteristics of tissue-specific regions of interest. Using explainable AI (XAI), we identify key features for distinguishing breast density grades by computing each feature's SHapley Additive exPlanations (SHAP) value. SHAP values measure a feature's importance to the classifier's prediction; the top SHAP-value features from each density grade are selected as inputs to our classifier model. These features also reveal relationships with clinical knowledge of breast cancer pathophysiology. Logistic regression classifiers fit to our radiomic features achieved a mean AUC per density grade class of [A: 0.949±0.055, B: 0.877±0.055, C: 0.884±0.023, D: 0.893±0.076] over nested five-fold cross-validation. Pooled confusion matrices show that class imbalance can affect the proposed method, particularly in density grades A and D. Furthermore, unsupervised clustering using Uniform Manifold Approximation and Projection (UMAP) on our radiomic feature set shows inherent separability of the four density grades. The results of our preliminary analysis highlight how clinically interpretable radiomic features show promise as an important tool for breast cancer screening, preserving predictive performance while introducing AI explainability.
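For a linear model such as logistic regression, SHAP values have a closed form under the assumption of feature independence: phi_ij = w_j (x_ij − E[x_j]). A minimal numpy sketch of this ranking step, on synthetic features and coefficients rather than the study's radiomic data:

```python
import numpy as np

def linear_shap(weights, X):
    """SHAP values for a linear model under feature independence:
    phi_ij = w_j * (x_ij - mean_j). Summing phi over features recovers
    f(x) - E[f(x)], the deviation of the prediction from its average."""
    return weights * (X - X.mean(axis=0))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))             # 200 samples, 5 radiomic features
w = np.array([2.0, 0.0, -1.0, 0.5, 0.0])  # hypothetical fitted coefficients

phi = linear_shap(w, X)
importance = np.abs(phi).mean(axis=0)     # mean |SHAP| per feature
top = np.argsort(importance)[::-1]
print("feature ranking by mean |SHAP|:", top)
```

Features with zero coefficient get exactly zero SHAP values here, so they fall to the bottom of the ranking; the top-ranked features are the natural candidates to feed back into the classifier, as the abstract describes.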
This study utilized the concept of counterfactuals to understand the decision-making process of AI-based computer-aided diagnosis (CADx) algorithms on mammogram images. Counterfactual analysis allowed us to dissect the causal relationships in these algorithms by asking questions such as "what effect would be seen on the classifier's prediction if there were no texture (i.e., grayed-out pixels) inside the lesion region?". Our purpose is not to classify lesions accurately; we focused on providing a deeper understanding of the "why" and "how" of classifier decisions, paving the way for more transparent and interpretable AI in medical imaging. We used the CBIS-DDSM dataset, which contains 1,318 (681 benign and 637 malignant) images for training and 378 (231 benign and 147 malignant) images for testing. We constructed four counterfactual cases: 1) replacing the benign foreground (B FG: Benign Foreground Grayed-out) with the original image's mean intensity (MI) vs. original malignant (M); 2) replacing the benign background (B BG: Benign Background Grayed-out) with the original image's MI vs. original malignant (M); 3) replacing the malignant foreground (M FG: Malignant Foreground Grayed-out) with the original image's MI vs. original benign (B); and 4) replacing the malignant background (M BG: Malignant Background Grayed-out) with the original image's MI vs. original benign (B). We trained three convolutional neural networks (CNNs)—MobileNet, ResNet50, and ResNet50v2—to classify benign and malignant cases (with the non-counterfactual data as baseline). We found that each classifier tends to be more sensitive (i.e., reacts negatively, with degraded performance) to changes in the background for benign cases (B BG) than to changes in the foreground for malignant cases (M FG). Furthermore, ResNet50 demonstrated robustness (correct classification) to counterfactual modifications, yielding the best AUC for B BG (AUC=0.83) among the models, while ResNet50v2 showed robustness to foreground changes in the benign images (B FG), with an AUC of 0.82.
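The gray-out operation used to build the counterfactual cases can be sketched as follows; the patch, mask, and function name are illustrative, not the paper's implementation:

```python
import numpy as np

def gray_out(image, mask, region="foreground"):
    """Build a counterfactual by replacing the lesion foreground (or the
    background) with the mean intensity of the original image."""
    cf = image.copy()
    target = mask if region == "foreground" else ~mask
    cf[target] = image.mean()
    return cf

# Toy mammogram patch: bright lesion on a darker background
img = np.full((64, 64), 0.3)
lesion = np.zeros((64, 64), dtype=bool)
lesion[20:40, 20:40] = True
img[lesion] = 0.8

fg_cf = gray_out(img, lesion, "foreground")  # lesion texture removed
bg_cf = gray_out(img, lesion, "background")  # context removed, lesion kept
print(fg_cf[30, 30], bg_cf[30, 30], bg_cf[0, 0])
```

Feeding such pairs (original vs. grayed-out) to a trained classifier and comparing its outputs is what isolates how much the foreground texture versus the surrounding context drives the prediction.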
Breast cancer may persist within the milk ducts, known as ductal carcinoma in situ (DCIS), or advance into the surrounding breast tissue, referred to as invasive ductal carcinoma (IDC). Occasionally, the invasiveness of a cancer may be underestimated at biopsy, leading to adjustments in the treatment plan based upon unexpected surgical findings. Artificial intelligence (AI) and computer-aided diagnosis (CADx) techniques in medical imaging may have potential in preoperatively predicting whether a lesion is purely DCIS or exhibits a mixture of IDC and DCIS components, and could serve as a valuable supplement to biopsy findings. To enhance the evaluation of AI/CADx performance, assessing variability on a lesion-by-lesion basis via a previously established 'sureness' metric could add considerable value. In this study, we evaluated performance in the task of distinguishing between pure DCIS and mixed IDC/DCIS breast cancers using computer-extracted radiomic features from dynamic contrast-enhanced magnetic resonance imaging, using 0.632+ bootstrapping (2000 folds) on 550 lesions (135 pure DCIS, 415 mixed IDC/DCIS), and characterized the lesion-based repeatability of the prediction using sureness. The median and 95% CI of the 0.632+-corrected AUC for classifying lesions as pure DCIS or mixed IDC/DCIS were 0.81 and [0.75, 0.86], respectively. Sureness varied across the dataset, with combinations of high and low classifier output and high and low sureness for some lesions. These results point to the potential for sureness to provide additional insight into the ability of CADx algorithms to preoperatively predict whether a lesion has invasive components.
Imaging features (radiomics) have potential for predicting Triple Negative Breast Cancer and other subtypes using magnetic resonance images (MRI). This work uses 244 images from the Duke-Breast-Cancer-MRI dataset to investigate the complex interplay between radiomics feature stability, with respect to segmentation variability, and prediction results of machine learning models. Our analysis reveals that features demonstrating high stability across different segmentations tend to enhance model performance, whereas unstable features sensitive to small segmentation changes degrade predictive accuracy. This exploration underscores the importance of feature stability in the development of reliable models for breast cancer subtype classification.
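A simple way to operationalize feature stability is to recompute each feature over repeated (perturbed) segmentations and keep those whose coefficient of variation stays low. This is one plausible screening rule sketched on synthetic data, not the paper's exact criterion:

```python
import numpy as np

def stable_features(feature_runs, cv_threshold=0.1):
    """Keep features whose coefficient of variation across repeated
    segmentations stays below a threshold.
    feature_runs: array of shape (n_segmentations, n_features)."""
    mean = feature_runs.mean(axis=0)
    cv = feature_runs.std(axis=0) / np.abs(mean)
    return np.nonzero(cv < cv_threshold)[0]

rng = np.random.default_rng(4)
base = np.array([10.0, 5.0, 2.0])          # "true" feature values
# Feature 2 is hypersensitive to small segmentation changes
noise_scale = np.array([0.1, 0.05, 1.5])
runs = base + rng.normal(0, 1, (20, 3)) * noise_scale

keep = stable_features(runs)
print(keep)  # features 0 and 1 survive
```

Filtering the feature set this way before model fitting is consistent with the paper's observation that stable features tend to improve predictive performance while unstable ones degrade it.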
This work presents a framework for lesion segmentation on 3D Automated Breast Ultrasound (ABUS). The method consists of adapting a state-of-the-art foundation model for 2D segmentation, the Segment Anything Model (SAM), to 3D segmentation through a probabilistic refinement technique. The presented method obtained second place in the segmentation task of the 2023 MICCAI Challenge on Tumor Detection, Segmentation and Classification on Automated 3D Breast Ultrasound (TDSC-ABUS 2023), and was the most robust approach in terms of the Hausdorff distance. The paper describes the approaches developed for the challenge submission as well as suggestions for future improvement.
Breast cancer is a disease caused by abnormal growth of cells in the breast. We have investigated a deep learning pipeline that provides classification (normal/abnormal) followed by localization and segmentation of abnormalities. We used the Digital Database for Screening Mammography (DDSM) in this work. The contributions of this paper are two-fold. First, we classify normal and abnormal mammograms with 100% training and 98.34% testing accuracy. Second, a framework is proposed to localize and segment abnormalities in abnormal images with a training loss of 0.57 and a testing loss of 0.55, where the multi-task loss function combines the losses of classification, localization, and segmentation.
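The multi-task objective is a weighted sum of the three task losses. A minimal sketch; the equal weights and the example loss values are illustrative, not the paper's settings:

```python
def multi_task_loss(cls_loss, loc_loss, seg_loss,
                    w_cls=1.0, w_loc=1.0, w_seg=1.0):
    """Weighted sum of the classification, localization and segmentation
    losses; the weights balance the three tasks during training."""
    return w_cls * cls_loss + w_loc * loc_loss + w_seg * seg_loss

# Hypothetical per-task losses at some training step
print(round(multi_task_loss(0.20, 0.25, 0.12), 2))  # → 0.57
```

Tuning the weights trades off how strongly each task drives the shared backbone; equal weights are the simplest starting point.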
3D breast ultrasound is a radiation-free and effective imaging technology for breast tumor diagnosis. However, reading a 3D breast ultrasound volume is time-consuming compared to mammograms. To reduce the workload of radiologists, we propose a 2.5D deep-learning-based breast ultrasound tumor classification system. First, we fine-tuned the pre-trained STU-Net to segment the tumor in 3D. Then, we fine-tuned DenseNet-121 for classification using the 10 slices with the largest tumor area and their adjacent slices. The Tumor Detection, Segmentation, and Classification on Automated 3D Breast Ultrasound (TDSC-ABUS) MICCAI Challenge 2023 dataset was used to train and validate the proposed method. Compared to a 3D convolutional neural network model and radiomics, our proposed method achieves better performance.
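The 2.5D slice-selection step, choosing the slices with the largest tumor area plus their neighbors, can be sketched as follows (toy segmentation volume and a hypothetical helper name):

```python
import numpy as np

def select_25d_slices(seg_volume, n_slices=10):
    """Pick the n slices with the largest tumor area (binary segmentation
    volume, axis 0 = slice index) plus their adjacent slices, to serve as
    input slices for a 2.5D classifier."""
    areas = seg_volume.reshape(seg_volume.shape[0], -1).sum(axis=1)
    top = np.argsort(areas)[::-1][:n_slices]
    with_neighbors = np.unique(np.clip(
        np.concatenate([top - 1, top, top + 1]),
        0, seg_volume.shape[0] - 1))
    return with_neighbors

# Toy segmentation: tumor spans slices 12..18, largest around slice 15
seg = np.zeros((30, 16, 16), dtype=np.uint8)
for z in range(12, 19):
    r = 7 - abs(z - 15)            # extent grows toward the center slice
    seg[z, 8 - r:8 + r, 8 - r:8 + r] = 1

sel = select_25d_slices(seg, n_slices=3)
print(sel)
```

Restricting the classifier to these slices keeps most of the tumor-bearing context while avoiding the cost of a full 3D network, which is the trade-off the abstract's 2.5D design exploits.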
Complete removal of the cancerous tumor with a negative specimen margin during lumpectomy is essential to reduce breast cancer recurrence. However, 2D radiography, the current method used to assess intraoperative specimen margin status, has limited accuracy, resulting in nearly one in four patients needing repeat surgery. This study aims to develop a deep learning model that improves the detection of positive margins on radiographs of intraoperative breast lumpectomy specimens. We annotated the lumpectomy radiographs with masks denoting regions of known malignancy, non-malignant tissue, and pathology-confirmed positive margin. We propose a pretraining strategy, Forward-Forward Contrastive Learning (FFCL), with both local- and global-level contrastive learning. Experimental results on our annotated breast radiographs demonstrate the effectiveness of FFCL in detecting positive margins from intraoperative radiographs of breast lumpectomy specimens.
Deep-learning-based models have been proposed as an automated second reader for mammograms that might help reduce radiologists' workload and improve screening accuracy. However, the inherent traits of mammograms, characterized by significantly higher resolutions and smaller regions of interest (ROIs) compared to natural images, constrain the adaptability of deep neural networks designed for natural image analysis to the domain of mammogram analysis. In this work, we propose a novel neural network to effectively detect breast cancer on screening mammograms and address the above issues. First, we use a local-self-attention-based Swin Transformer as the backbone to select the most informative patch regions from the whole mammogram. We then use a second, CNN-based network to further extract fine-grained features of the selected patches. Finally, we employ a fusion module that aggregates global and local information to make a prediction. The final loss function combines the predictions from both the transformer and CNN modules. With local self-attention and a hierarchical structure, our backbone can effectively model the relationships between ROIs (e.g., masses or micro-calcifications) of different sizes and their surrounding tissues, introducing meaningful contextual information for robust feature extraction. The experimental results show that our model achieves state-of-the-art performance, with a classification AUC of 0.856 on a public mammogram dataset.
This study investigates the effectiveness of artificial intelligence (AI)-based models in detecting and quantifying Breast Arterial Calcification (BAC) from mammograms, a potential indicator of cardiovascular disease. Two distinct subsets from the OPTIMAM database were used: an enriched dataset of 1683 images previously confirmed by expert readers to have lesions with non-BAC calcifications, and a 'normal' dataset with 1401 representative screening mammography exams, selected among those that were negative on both the included and prior exams. Manual annotation of the calcification data by four readers established the ground truth. Two novel BAC detection and quantification models were tested: a baseline and an enhanced model. The models exhibited promising results, particularly the low false-positive rate of the enhanced model at 0.6%, but also highlighted the need for improvements to achieve a balance between sensitivity (51.0%) and specificity (99.4%). Notably, 62% of the findings missed by the enhanced model were classified as single-wall BAC, which is usually scored as minimal based on a lower association with cardiovascular disease. Future work is required to establish the association of model performance with clinical outcomes. The study also examined the relationship between BAC prevalence and patient characteristics such as age and Volpara® Density Grade (VDG) in the 'normal' screening dataset. Significant correlations were found between BAC volume and patient age, and between BAC prevalence and VDG, which aligns with the existing literature. The findings emphasize the potential of AI to improve the consistency of BAC detection with objective quantitative measures, as well as the developed model's ability to predict the prevalence of BAC in relation to age.
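For reference, the reported operating point relates sensitivity, specificity, and false-positive rate in the usual way. The counts below are invented to reproduce those rates and are not the study's actual confusion matrix:

```python
def detection_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and false-positive rate from
    confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity, 1 - specificity

# Hypothetical counts matching the enhanced model's reported operating
# point: ~51% sensitivity at a 0.6% false-positive rate
sens, spec, fpr = detection_metrics(tp=51, fn=49, tn=994, fp=6)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, FPR {fpr:.1%}")
```

Moving the model's decision threshold trades one rate against the other, which is the balance the abstract says still needs improvement.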
Contrast-Enhanced Mammography (CEM) is an emerging breast imaging technique that utilizes dual-energy x-ray mammography with an iodine contrast agent to enhance tumor visualization. This study focuses on the quantitative analysis of breast Background Parenchymal Enhancement (BPE) evolution during Neoadjuvant Chemotherapy (NAC) using a deep learning BPE quantification model for CEM. The dataset includes 72 patients undergoing NAC, and BPE levels are assessed in pre- and post-NAC CEM exams using a ResNet18-based deep learning model. Results confirm that BPE levels decrease during NAC. The analysis also highlights a linear correlation between the BPE change and the initial pre-NAC BPE level. This emphasizes the need to consider not only the absolute value of the BPE change but also its residual to the linear fit for an unbiased analysis of the association between the completeness of the NAC treatment response and BPE evolution. No significant association between BPE evolution and NAC treatment response was observed on the unstratified dataset. After stratifying the patient population according to age, tumor phenotype, and grade, no statistically significant differences were found between the distributions of BPE residuals in the pathological non-complete and complete response groups.
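The residual-to-linear-fit analysis can be sketched as follows. The BPE values are simulated, and the slope of -0.6 is an arbitrary choice for illustration, not the study's fitted value:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical BPE levels (arbitrary scale) before NAC, and the change
# during NAC, which correlates linearly with the initial level
bpe_pre = rng.uniform(1.0, 4.0, 72)
bpe_change = -0.6 * bpe_pre + rng.normal(0.0, 0.2, 72)

# Fit the linear trend, then analyze the residuals instead of the raw
# change to remove the bias introduced by the initial BPE level
slope, intercept = np.polyfit(bpe_pre, bpe_change, 1)
residuals = bpe_change - (slope * bpe_pre + intercept)
print(f"slope {slope:.2f}, residual mean {residuals.mean():.1e}")
```

By construction the residuals are uncorrelated with the initial BPE level, so comparing their distributions between response groups avoids the bias that the raw BPE change would carry.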
When developing deep learning models intended for clinical applications, understanding which part of the input contributed most to the final decision is crucial. Our study brings interpretability to Breast Cancer Risk (BCR) prediction by exploring whether the model relies on the laterality of the breast where cancer ultimately develops, and how this reliance evolves over time. A dataset of 1210 full-field digital mammography exams with 0 to 7 Years To Cancer was used. The MIRAI model was employed for BCR predictions. To determine which side of the breast contributed most to the BCR prediction, the signal difference between the left and right breasts was calculated for eight attribution-based interpretability techniques. The AUC was calculated to investigate whether the BCR prediction is predominantly made from the breast where the cancer ultimately develops. For 0 to 1 Years To Cancer, the model predominantly predicts BCR based on the side of the breast where the cancer is already present (AUC = 0.92 to 0.95). The top-performing attribution methods achieved an AUC of 0.70 for mammograms captured 1 to 3 Years To Cancer. For exams 3 to 5 Years To Cancer, a significant drop to an AUC of 0.57 was observed. When moving to 5 to 7 Years To Cancer, focus on the breast with future cancer becomes random. All attribution methods showed that BCR predictions extending beyond three years from a screen-detected cancer are most likely based on typical breast characteristics, such as density and other long-standing tissue patterns; for short-term BCR predictions, however, the model seems to detect early signs of tumor development.
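The laterality analysis reduces to scoring each exam by the left-right attribution difference and computing an AUC against the side where cancer later develops. A sketch with synthetic attribution sums; the rank-based AUC formula is standard, but the data are invented:

```python
import numpy as np

def auc_from_scores(scores, labels):
    """AUC via the Mann-Whitney U statistic (rank formulation)."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(3)
# Hypothetical per-exam attribution differences (left minus right);
# label 1 when the cancer later develops in the left breast
n = 200
labels = rng.integers(0, 2, n)
diff = np.where(labels == 1, 1.0, -1.0) + rng.normal(0, 1.2, n)

auc = auc_from_scores(diff, labels)
print(f"AUC = {auc:.2f}")
```

An AUC near 0.5 under this scheme means the attribution signal carries no laterality information, which is what the study observes for exams 5 to 7 years before cancer.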
Our research aims at an improved understanding of mammographic abnormality classification deep learning models and how these might relate to abnormality morphology. We generated clusters of deep-learned features produced by a multi-view deep learning model classifying breast cancer subtypes. This model was constructed from two ResNet50 blocks, supplemented with concatenation layers to merge the outputs of the blocks. The modelling was based on the OPTIMAM dataset, using 2193 cases (543 DCIS and 1649 IDC samples) with supporting metadata. We reduced the features to two dimensions using dimensionality reduction techniques to facilitate visualization and evaluation in a 2D plot. Our chosen methods were Principal Component Analysis (PCA) for linear reduction and Uniform Manifold Approximation and Projection (UMAP), a non-linear manifold learning method. To identify potential trends, we adopted two analytical approaches. First, we examined existing metadata to identify global or local trends in our data, and observed that overlaying metadata describing the lesions (lesion type or abnormality classification) revealed only limited discernible trends. Second, we represented handcrafted features such as density and lesion area, as well as GLCM texture features including dissimilarity and homogeneity, as heat maps, which indicated clear patterns in the data: the heat-map clusters show that lesions with similar characteristics are positioned close together. Additional metadata and expert evaluation are required to draw full conclusions, and future work includes investigating whether the low-dimensional deep-learned representation is locally linked to morphological aspects of the abnormalities.
Mammography is the primary screening method for lesion visualisation and for detecting early changes in breast tissue. Deep learning models, particularly convolutional neural networks (CNNs), are designed as tools to assist radiologists in the detection and classification of breast abnormalities. Applying deep learning models to mammographic mass classification presents several challenges, such as biased models caused by the lack of annotated mammographic images. We first defined the attention map of a CNN, which contains valuable information, especially shape knowledge, from binary masks. We then used knowledge transfer, in which a CNN model transfers the attention map from binary masks to regions of interest (ROIs), to improve the performance of the CNN. When evaluated on the BCDR dataset, both DenseNet121 and ResNet-34 achieve improved accuracy on ROI classification compared with no knowledge transfer. For DenseNet121, the proposed method retrained the model with one transfer loss in the top layer and improved accuracy to 71%, compared to 58% without knowledge transfer. In addition, the resulting confusion matrix was more balanced.
This study aims to develop machine learning and deep learning algorithms to segment mammography images and classify tumors as benign or malignant. We show that handcrafted features can give results similar to deep-learned features. To perform this comparison, we evaluate two kinds of algorithms, both of which use multi-Otsu thresholding to segment the mammography images. The first uses a local binary pattern feature extractor, principal component analysis to reduce the dimensionality, and traditional machine learning classifiers such as the multilayer perceptron, the random forest, and the support vector machine. The second uses pre-trained convolutional neural networks, such as AlexNet and VGG19, with a softmax classifier. Evaluating our algorithms on the MIAS and INbreast datasets, we found that the model using local binary patterns with a support vector machine recorded an accuracy of 56.7%, while the deep learning model using AlexNet with a softmax classifier recorded an accuracy of 73%.
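The handcrafted branch's local binary pattern descriptor can be sketched directly in numpy: each interior pixel is encoded by thresholding its eight neighbours against the centre value, and the histogram of codes becomes the texture feature vector fed to PCA and a classifier. This is the textbook 8-neighbour LBP, a plausible stand-in for the paper's extractor rather than its exact configuration.

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour local binary pattern. Returns a normalised
    256-bin histogram of codes over the interior pixels."""
    img = image.astype(np.int32)
    c = img[1:-1, 1:-1]                             # centre pixels
    # neighbour offsets, clockwise from top-left, one bit each
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.int32) << bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

feat = lbp_8(np.arange(64).reshape(8, 8))   # toy 8x8 "image"
print(feat.shape)  # (256,)
```

In the full pipeline, such histograms from segmented regions would be stacked, reduced with PCA, and passed to an SVM, random forest, or MLP.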
Diagnostic mammography, conducted to assess symptoms or screen-detected lesions in women, often involves extra views beyond the standard ones. Utilization of these additional views may vary across radiologists and healthcare settings. Overall, the aim of such a mammographic work-up is to provide extra imaging data, thus improving result accuracy. While artificial intelligence (AI) has demonstrated promising outcomes in cancer detection through mammographic screening, there remains a lack of evidence concerning its use in the diagnostic mammography context. This study aimed to investigate whether using an AI-based model for diagnostic mammography could provide advantages beyond its use solely for screening mammograms. We applied an AI system, trained and validated on screening mammograms, to a dataset of diagnostic mammograms. Performance was compared to that of the same system applied to screening mammograms of the same patients. The findings indicate that the AI model performs similarly well on non-standard views as on standard digital mammograms. Specifically, the model demonstrates higher accuracy than the baseline and greater specificity at a given sensitivity level. This suggests that the model generalizes well to diagnostic mammograms. Understanding this generalization was important for comprehending the model's performance on diagnostic images and determining the feasibility of developing a specifically trained algorithm.
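The "specificity at a given sensitivity level" comparison above is an operating-point evaluation: fix a threshold that achieves a target recall on the positives, then measure how many negatives fall below it. A minimal sketch of that computation, with invented toy labels and scores (the study's actual thresholds and data are not reproduced here):

```python
import numpy as np

def specificity_at_sensitivity(labels, scores, target_sensitivity):
    """Choose the highest score threshold whose sensitivity meets the
    target, then report specificity on the negatives at that threshold."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = np.sort(scores[labels == 1])[::-1]       # positive scores, descending
    k = int(np.ceil(target_sensitivity * len(pos)))
    thresh = pos[k - 1]                            # just admits enough positives
    neg = scores[labels == 0]
    return np.mean(neg < thresh)                   # fraction correctly rejected

labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.2, 0.6, 0.3, 0.1, 0.05]
print(specificity_at_sensitivity(labels, scores, 0.75))  # 1.0 on this toy set
```

Running the same routine on screening-view and diagnostic-view score sets for the same patients gives the paired comparison the abstract describes.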
Artificial Intelligence (AI) has emerged as a valuable tool for assisting radiologists in breast cancer detection and diagnosis. However, the success of AI applications in this domain is restricted by the quantity and quality of available data, posing challenges due to limited and costly data annotation procedures that often lead to annotation shifts. This study simulates, analyses and mitigates annotation shifts in cancer classification in the breast mammography domain. First, a high-accuracy cancer risk prediction model is developed, which effectively distinguishes benign from malignant lesions. Next, model performance is used to quantify the impact of annotation shift. We uncover a substantial impact of annotation shift on multiclass classification performance particularly for malignant lesions. We thus propose a training data augmentation approach based on single-image generative models for the affected class, requiring as few as four in-domain annotations to considerably mitigate annotation shift, while also addressing dataset imbalance. Lastly, we further increase performance by proposing and validating an ensemble architecture based on multiple models trained under different data augmentation regimes. Our study offers key insights into annotation shift in deep learning breast cancer classification and explores the potential of single-image generative models to overcome domain shift challenges.
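The final ensembling step above, combining models trained under different data augmentation regimes, can be as simple as a (weighted) average of each model's class probabilities. The sketch below shows that minimal fusion rule in numpy; the paper's actual ensemble architecture is not specified here, and the toy probability arrays are assumptions.

```python
import numpy as np

def ensemble_predict(prob_list, weights=None):
    """Average per-model probability arrays of shape (n_samples,
    n_classes), optionally weighted, and return the argmax labels."""
    probs = np.stack(prob_list)                    # (n_models, n, k)
    if weights is None:
        weights = np.ones(len(prob_list)) / len(prob_list)
    avg = np.tensordot(weights, probs, axes=1)     # weighted mean over models
    return avg.argmax(axis=1), avg

# toy outputs from two models trained under different augmentation regimes
m1 = np.array([[0.6, 0.4], [0.2, 0.8]])
m2 = np.array([[0.7, 0.3], [0.4, 0.6]])
pred, avg = ensemble_predict([m1, m2])
print(pred)  # [0 1]
```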
We evaluate an AI-based model that estimates cancer risk, aiming to improve early detection by leveraging information untapped by detection models. We converted a breast cancer detection model into a risk model with light architectural changes and by using the survival-analysis (time-to-event) paradigm within the machine learning framework. The new model predicts the cumulative risk function of a breast/patient from mammogram images. A longitudinal dataset of 2,460 positive patients and 5,466 negative patients, independent from our training set and with an average follow-up of 4.6 years (q75 = 5.5 years, q90 = 7.1 years), is used to evaluate the performance of our approach. We compare our method against the open-source baseline MIRAI, considered the state of the art, using both the concordance index (C-index) and the cumulative dynamic AUC restricted to the 5-year range that the MIRAI model allows. We obtain a concordance index of 0.758 (CI = (0.752, 0.763)), while the baseline reaches 0.736 (CI = (0.730, 0.743)). For cumulative dynamic AUC, our model reaches 0.796 (CI = (0.791, 0.805)), remaining close to MIRAI at 0.801 (CI = (0.794, 0.810)). Our model demonstrates performance similar to the state of the art with few modifications.
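The C-index reported above is Harrell's concordance index: over all comparable patient pairs (the earlier time is an observed event), the fraction where the higher-risk patient fails first, with ties in risk counted as half. A pure-Python sketch of the metric, with invented toy data (not the authors' implementation or dataset):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index for right-censored survival data.
    times  : follow-up time per patient
    events : 1 if the event (cancer) was observed, 0 if censored
    risks  : model risk score per patient (higher = riskier)"""
    num, den = 0.0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue                      # censored cases cannot anchor a pair
        for j in range(n):
            if times[i] < times[j]:       # i failed before j was last seen
                den += 1
                if risks[i] > risks[j]:
                    num += 1.0            # concordant pair
                elif risks[i] == risks[j]:
                    num += 0.5            # tied risk scores
    return num / den

times = [2, 4, 3, 5]
events = [1, 1, 0, 1]
risks = [0.9, 0.5, 0.7, 0.1]
print(concordance_index(times, events, risks))  # 1.0: risk order matches event order
```

Cumulative dynamic AUC extends this idea to a per-horizon discrimination measure, here restricted to 5 years for comparability with MIRAI.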