Because of the non-specificity of acute bilirubin encephalopathy (ABE), accurate classification based on structural MRI alone is intractable. Owing to the complexity of the diagnosis, multi-modality fusion has been widely studied in recent years. However, most current medical image classification studies fuse only image data of different modalities; phenotypic features that may carry useful information are usually excluded from the model. In this paper, a multi-modal fusion strategy for classifying ABE is proposed, which combines different MRI modalities with clinical phenotypic data. The baseline consists of three individual paths for training the different MRI modalities, i.e., T1, T2, and T2-FLAIR. The feature maps from the different paths were concatenated to form a multi-modality image feature map. The phenotypic inputs were encoded into a two-dimensional vector to prevent loss of information, and a Text-CNN was applied as the feature extractor for the clinical phenotype. The extracted text feature map was concatenated with the multi-modality image feature map along the channel dimension, and the resulting MRI-phenotype feature map was fed to a fully connected layer. We trained and tested (80%/20% split) the approach on a database of 800 patients, each sample comprising three-modality 3D brain MRI and the corresponding clinical phenotype data. Comparative experiments were designed to explore the fusion strategy. The results demonstrate that the proposed method achieves an accuracy of 0.78, a sensitivity of 0.46, and a specificity of 0.99, outperforming models using MRI or clinical phenotype data alone. Our work suggests that fusing clinical phenotype data with image data can improve the performance of ABE classification.
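The fusion strategy described above can be sketched in PyTorch. This is a minimal, hypothetical illustration, not the authors' implementation: the layer sizes, the `ModalityPath` and `TextCNN` module names, and the phenotype encoding shape are all assumptions; only the overall structure (three MRI paths, a Text-CNN phenotype extractor, and channel-wise concatenation before a fully connected layer) follows the abstract.

```python
# Hypothetical sketch of the described MRI + phenotype fusion strategy.
# Layer widths and input shapes are illustrative assumptions.
import torch
import torch.nn as nn

class ModalityPath(nn.Module):
    """One 3D-CNN path per MRI modality (T1, T2, T2-FLAIR)."""
    def __init__(self, out_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, out_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
    def forward(self, x):
        return self.net(x).flatten(1)  # (B, out_ch)

class TextCNN(nn.Module):
    """Text-CNN over a two-dimensional phenotype encoding."""
    def __init__(self, embed_dim=8, out_ch=32):
        super().__init__()
        self.conv = nn.Conv1d(embed_dim, out_ch, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)
    def forward(self, p):  # p: (B, n_features, embed_dim)
        h = torch.relu(self.conv(p.transpose(1, 2)))
        return self.pool(h).flatten(1)  # (B, out_ch)

class FusionNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.paths = nn.ModuleList([ModalityPath() for _ in range(3)])
        self.text = TextCNN()
        self.fc = nn.Linear(3 * 32 + 32, n_classes)
    def forward(self, mris, pheno):
        feats = [p(m) for p, m in zip(self.paths, mris)]   # per-modality features
        fused = torch.cat(feats + [self.text(pheno)], dim=1)  # channel concat
        return self.fc(fused)

model = FusionNet()
mris = [torch.randn(2, 1, 16, 16, 16) for _ in range(3)]  # toy T1/T2/FLAIR volumes
pheno = torch.randn(2, 10, 8)                             # toy phenotype encoding
logits = model(mris, pheno)
print(tuple(logits.shape))  # (2, 2)
```

Concatenating the Text-CNN output with the pooled image features, rather than appending raw tabular values, lets both modalities enter the classifier as learned feature maps of comparable scale.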
Are there any abnormal reflections in the structural magnetic resonance imaging (sMRI) of patients with autism spectrum disorder (ASD)? Although a few brain regions have been implicated in the pathophysiologic mechanism of the disorder, no gold standard for sMRI-based diagnosis has been established in the academic community. Recently, powerful deep learning algorithms have been widely studied and applied, offering a chance to explore the structural brain abnormalities of ASD through visualization of deep learning models. In this paper, a 3D-ResNet with an attention subnet for ASD classification is proposed. The model combines the residual module with an attention subnet that, during feature extraction, masks the regions relevant or irrelevant to the classification. The model was trained and tested on sMRI from the Autism Brain Imaging Data Exchange (ABIDE). Five-fold cross-validation yields an accuracy of 75%. Grad-CAM was further applied to display the compositions emphasized by the model during classification, and the class activation mapping of multiple slices of representative sMRI was visualized. The results show highly related signals in the regions near the hippocampus, corpus callosum, thalamus, and amygdala, which may confirm some previous hypotheses. This work is not limited to the classification of ASD; it also attempts to explore anatomic abnormality with a promising visualization-based deep learning approach.
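The combination of a residual module with an attention subnet can be sketched as a single 3D block. This is a minimal illustration under assumptions, not the paper's architecture: the block name `AttnResBlock3D`, the 1x1x1 mask convolution, and the `(1 + mask) * trunk` combination (borrowed from common residual-attention designs) are choices made here for brevity; the abstract specifies only that an attention subnet masks regions during feature extraction.

```python
# Hypothetical sketch: a residual block whose attention subnet produces
# a soft spatial mask over the trunk features. Details are assumptions.
import torch
import torch.nn as nn

class AttnResBlock3D(nn.Module):
    def __init__(self, ch):
        super().__init__()
        # Trunk: standard residual branch.
        self.trunk = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.BatchNorm3d(ch), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.BatchNorm3d(ch),
        )
        # Attention subnet: soft mask in [0, 1] per voxel and channel.
        self.mask = nn.Sequential(nn.Conv3d(ch, ch, 1), nn.Sigmoid())

    def forward(self, x):
        t = self.trunk(x)
        m = self.mask(x)
        # (1 + mask) * trunk: the mask amplifies relevant regions while the
        # identity term keeps gradients flowing where the mask is near zero.
        return torch.relu(x + (1 + m) * t)

x = torch.randn(1, 8, 12, 12, 12)  # toy single-channel-group sMRI features
y = AttnResBlock3D(8)(x)
print(tuple(y.shape))  # (1, 8, 12, 12, 12)
```

Because the block preserves the input's spatial shape, its activations remain voxel-aligned with the sMRI volume, which is what makes Grad-CAM visualizations over slices interpretable in anatomical terms.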