Data scarcity and data imbalance are two major challenges in training deep learning models on medical images such as brain tumor MRI. Recent advances in generative artificial intelligence have opened new possibilities for synthesizing MRI data, including brain tumor scans, offering a potential way to mitigate data scarcity and enlarge training sets. This work adapts 2D latent diffusion models to generate 3D multi-contrast brain tumor MRI volumes conditioned on a tumor mask. The framework comprises two components: a 3D autoencoder for perceptual compression and a conditional 3D diffusion probabilistic model (DPM) that generates high-quality, diverse multi-contrast brain tumor MRI samples guided by the conditioning tumor mask. Unlike existing works that generate either 2D multi-contrast or 3D single-contrast MRI samples, our models generate 3D multi-contrast samples. We also integrated a conditioning module into the UNet backbone of the DPM to capture the class-dependent data distribution induced by the tumor mask, so that samples can be generated for a specific brain tumor mask. We trained our models on two brain tumor datasets: the public Cancer Genome Atlas (TCGA) dataset and an internal dataset from the University of Texas Southwestern Medical Center (UTSW). The models generated high-quality 3D multi-contrast brain tumor MRI samples whose tumor locations aligned with the input condition mask. Image quality was evaluated using the Fréchet Inception Distance (FID) score. This work has the potential to mitigate the scarcity of brain tumor data and improve the performance of deep learning models trained on brain tumor MRI.
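The abstract reports evaluating generated images with the Fréchet Inception Distance but does not give an implementation. As a minimal sketch, assuming feature vectors have already been extracted from real and generated images by some encoder (the function name `fid_score` and the pure-numpy eigenvalue formulation are illustrative choices, not the authors' code), the FID between two Gaussians fitted to those features is:

```python
import numpy as np

def fid_score(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Fréchet Inception Distance between two sets of feature vectors,
    each of shape (n_samples, n_features):
        ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^(1/2)).
    Tr((C1 C2)^(1/2)) is computed as the sum of square roots of the
    eigenvalues of C1 @ C2, which are non-negative for PSD covariances."""
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_gen, rowvar=False)
    eigvals = np.linalg.eigvals(c1 @ c2)
    # Clip tiny negative real parts caused by floating-point error.
    tr_sqrt = np.sum(np.sqrt(np.clip(eigvals.real, 0.0, None)))
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1) + np.trace(c2) - 2.0 * tr_sqrt)
```

Identical feature sets give a score near zero, and the score grows as the two feature distributions drift apart.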
Purpose: Deep learning has shown promise for predicting the molecular profiles of gliomas from MR images. Prior to clinical implementation, ensuring robustness to real-world problems, such as patient motion, is crucial. The purpose of this study is to perform a preliminary evaluation of the effect of simulated motion artifacts on glioma marker classifier performance and to determine whether motion correction can restore classification accuracy.
Approach: T2w images and molecular information were retrieved from the TCIA and TCGA databases. Simulated motion was added in the k-space domain along the phase-encoding direction. Classifier performance for IDH mutation, 1p/19q co-deletion, and MGMT methylation was assessed over the range of 0% to 100% corrupted k-space lines. Rudimentary motion correction networks were trained on the motion-corrupted images, and the three glioma marker classifiers were then evaluated on the motion-corrected images.
Results: Glioma marker classifier performance decreased markedly with increasing motion corruption. Applying motion correction effectively restored classification accuracy for even the most motion-corrupted images; applied to uncorrupted images, motion correction exceeded the network's original performance.
Conclusions: Robust motion correction can facilitate highly accurate deep learning MRI-based molecular marker classification, rivaling invasive tissue-based characterization methods. Motion correction may increase classification accuracy even in the absence of a visible artifact, representing a new strategy for boosting classifier performance.
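The Approach section states that simulated motion was added in k-space along the phase-encoding direction, parameterized by the fraction of corrupted lines. The abstract does not specify the corruption model; one common sketch, assuming rigid in-plane translation between shots (the function name `add_motion_artifact` and the random-shift parameters are illustrative assumptions), applies a linear phase ramp to randomly chosen phase-encode lines:

```python
import numpy as np

def add_motion_artifact(image: np.ndarray, fraction: float,
                        max_shift: float = 5.0, seed: int = 0) -> np.ndarray:
    """Corrupt a fraction of phase-encoding lines (rows) of a 2D image's
    k-space with random translational phase ramps, then reconstruct.
    A rigid in-plane shift of the object multiplies each k-space line by
    a linear phase, which is the basis of this simple motion model."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = image.shape
    n_corrupt = int(round(fraction * ny))
    lines = rng.choice(ny, size=n_corrupt, replace=False)
    x = np.arange(nx) - nx // 2  # centered frequency-encode coordinate
    for line in lines:
        shift = rng.uniform(-max_shift, max_shift)  # shift in pixels
        k[line, :] *= np.exp(-2j * np.pi * shift * x / nx)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))
```

Sweeping `fraction` from 0.0 to 1.0 reproduces the 0%–100% corruption range described above; `fraction=0.0` returns the magnitude image unchanged.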