Open Access Paper
11 September 2024
Game theory-based fusion of multimodal magnetic resonance imaging features for glioma grading
Jiehong Liu, Danlin Lin, Qiqi Lu, Yuanyao Xie, Haolin Chen, Chao Ke, Jing Li, Xiaofei Lv, Yanqiu Feng
Proceedings Volume 13270, International Conference on Future of Medicine and Biological Information Engineering (MBIE 2024); 132700B (2024) https://doi.org/10.1117/12.3046634
Event: 2024 International Conference on Future of Medicine and Biological Information Engineering (MBIE 2024), 2024, Shenyang, China
Abstract
We introduced a novel multimodal radiomic feature fusion method based on game theory, specifically designed for glioma grading using multimodal magnetic resonance images. In a retrospective analysis, 257 patients (204 with high-grade gliomas [HGG] and 53 with low-grade gliomas [LGG]) were used as a training cohort, while internal and external test cohorts comprised 111 patients (88 HGG, 23 LGG) and 136 patients (114 HGG, 22 LGG), respectively. Imaging included T1-weighted, T2-weighted, fluid-attenuated inversion recovery, and contrast-enhanced T1-weighted sequences performed on 1.0 T, 1.5 T, and 3.0 T MR systems from public datasets and an additional 3.0 T MR system. Radiomic features were extracted from regions of interest across modalities. The proposed method leverages game theory to fuse multimodal MRI features, followed by a three-step feature selection process. Predictive models were subsequently built and validated on both internal and external cohorts. Area under the receiver operating characteristic curve (AUC), accuracy, F1 score, sensitivity, and specificity were calculated to evaluate model performance. The fused feature model achieved an AUC of 0.953 (95% confidence interval: 0.906-1.000) on the internal test set and 0.853 (95% confidence interval: 0.777-0.929) on the external test set. This feature fusion method shows promising robustness for glioma grading in radiomics using multimodal magnetic resonance images.

1. INTRODUCTION

Gliomas, the most common primary brain tumors in the central nervous system, are categorized from Grade I to IV [1] according to cellular activity and malignancy. These tumors exhibit varying degrees of aggressiveness, influencing treatment strategies and prognosis. This classification distinguishes between low-grade gliomas (LGG, grades I and II) and high-grade gliomas (HGG, grades III and IV) [2]. Accurate glioma grading is crucial for treatment planning, personalized therapy, and prognosis prediction [3, 4]. Currently, the gold standard for preoperative glioma grading relies on the histopathological analysis of biopsies. However, this approach is invasive, lacks real-time capability, and poses risks including infection and potential tumor seeding along the biopsy tract [5]. Consequently, the development of a non-invasive and timely grading system for preoperative diagnosis of glioma grades holds considerable significance.

Magnetic resonance imaging (MRI) plays an important role in diagnosing, guiding treatment decisions, and predicting prognosis in glioma patients, offering potential as a non-invasive grading tool [6]. The standard MR images for glioma treatment usually encompass four modalities: T1-weighted imaging (T1), T2-weighted imaging (T2), fluid-attenuated inversion recovery imaging (FLAIR), and T1-weighted contrast enhanced imaging (T1C). The assessment through visual inspection of conventional MR images is subjective and influenced by the experience of radiologists.

Radiomics is an advanced technique that extracts high-throughput quantitative features from medical images to support clinical decision-making [7-9]. Over the past decade, MRI-based radiomics has been employed in studies focusing on preoperative glioma grading [10, 11]. Numerous investigations have explored glioma grading using quantitative features derived from multimodal MRI [12-14]. The radiomic features extracted from medical images can be regarded as multi-view data, and different views often contain both complementary and consensus information. The predominant approach in current radiomics, which concatenates features from multiple modalities into extended vectors and then analyzes them through feature engineering, may not harness the full potential of intermodal information.

In recent years, there have been a few studies on multimodal feature fusion. Amini et al. compared two fusion approaches, feature concatenation and feature averaging, for overall survival prediction of non-small cell lung carcinoma patients [15]. Haghighat et al. proposed discriminant correlation analysis (DCA), which incorporates class associations into the correlation analysis of feature sets for multimodal biometric identification [16]. Wang et al. constructed prediction models for bladder cancer tumor grade using a dimensionality reduction method that leverages the maximum subset of multiparametric MRI radiomic features. In addition, deep learning accomplishes feature fusion autonomously through operations such as convolution and pooling. However, traditional methods may struggle to capture inter-modal heterogeneity, while deep learning methods may lack interpretability.

In this study, we aimed to overcome these challenges by introducing game theory into radiomic feature modality fusion.

2. DATA AND METHODS

2.1 Patient cohorts

The current study utilizes data from the multimodal brain tumor segmentation (BraTS) challenge [17, 18] together with a dataset from Sun Yat-sen University Cancer Center (SYSUCC). These sources provide comprehensive imaging data, essential for advancing the accuracy and reliability of glioma segmentation methodologies. We randomly split the BraTS data into training and internal test sets, and the SYSUCC dataset was employed as an external test set. The SYSUCC cohort comprises MR data of glioma patients admitted to SYSUCC from November 2018 to March 2022. Ethical approval for the SYSUCC dataset was obtained for this retrospective study, and the requirement for informed consent was waived. Figure 1 shows the patient enrolment pathway.

Figure 1. Enrolment pathway for patients in two centers.

2.2 Preprocessing and segmentation

To minimize heterogeneity, all MR images were standardized by reorienting to the left-posterior-superior coordinate system, co-registering to the T1 anatomical template of the MNI152 atlas, resampling to a 1 mm isotropic resolution, and applying skull stripping using the brain extraction tool [18, 19]. Each imaging dataset was manually segmented by one to four raters following a consistent annotation protocol, with all annotations validated by experienced neuroradiologists. The delineation of each region of interest (ROI) in the SYSUCC dataset involved collaboration between two readers with 3 and 4 years of experience in brain tumor diagnosis, respectively, and was reviewed by a senior reader with 11 years of experience. Based on the BraTS challenge guidelines, three types of ROIs were considered: non-enhancing tumor, enhancing tumor, and edema. This study focuses on the combined analysis of these glioma regions.
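For concreteness, the following is a minimal sketch of such a standardization pipeline using FSL command-line tools (fslreorient2std, flirt, and bet). Only BET is explicitly named above; the choice of FLIRT for template registration, the reorientation tool, all parameters, and the template path are assumptions made for illustration.

```python
# Sketch of a standardization pipeline analogous to the one described above,
# using FSL command-line tools. Tool choices and parameters are assumptions;
# only the brain extraction tool (BET) is explicitly named in the text.
import subprocess
from pathlib import Path

MNI_TEMPLATE = "MNI152_T1_1mm.nii.gz"  # hypothetical path to the MNI152 T1 1 mm template

def standardize(in_img: str, out_dir: str) -> Path:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    reoriented = out / "reoriented.nii.gz"
    registered = out / "registered.nii.gz"
    brain = out / "brain.nii.gz"

    # 1. Reorient to the standard template orientation.
    subprocess.run(["fslreorient2std", in_img, str(reoriented)], check=True)
    # 2. Co-register to the MNI152 T1 template; using the 1 mm template also
    #    fixes the output grid at 1 mm isotropic resolution.
    subprocess.run(["flirt", "-in", str(reoriented), "-ref", MNI_TEMPLATE,
                    "-out", str(registered)], check=True)
    # 3. Skull stripping with BET.
    subprocess.run(["bet", str(registered), str(brain)], check=True)
    return brain
```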

2.3 Feature fusion

This study considered three categories of radiomic features: shape features, first-order statistics features, and texture features. The texture features were derived from the grey level co-occurrence matrix (GLCM), grey level run length matrix (GLRLM), grey level size zone matrix (GLSZM), grey level dependence matrix (GLDM), and neighbor grey tone difference matrix (NGTDM). Prior to feature extraction, seven types of image filters were applied: the original image (no filter applied), the Laplacian of Gaussian (LoG) filter, and the square, square-root, exponential, gradient, and wavelet filters. For LoG filtering, the sigma parameter, which emphasizes either fine or coarse textures, was set to 3, 4, or 5. Wavelet filtering involved 8 decompositions per level, covering all possible combinations of applying either a high-pass or a low-pass filter in each of the three dimensions. The grey levels within the region of interest were quantized with a fixed bin count of 32. A total of 1470 features were extracted for each MR sequence, resulting in 5880 features per patient.
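As an illustration, such an extraction can be configured with PyRadiomics roughly as sketched below; the parameter names follow PyRadiomics conventions and the file names are placeholders, so the exact configuration used by the authors may differ.

```python
# Sketch of per-sequence radiomic feature extraction with PyRadiomics,
# mirroring the settings described above (bin count 32; original, LoG with
# sigma 3/4/5, square, square-root, exponential, gradient and wavelet images).
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor(binCount=32)
extractor.enableAllFeatures()          # shape, first-order and texture feature classes
extractor.enableImageTypes(
    Original={},
    LoG={"sigma": [3.0, 4.0, 5.0]},
    Square={},
    SquareRoot={},
    Exponential={},
    Gradient={},
    Wavelet={},                        # 8 high/low-pass decompositions per level
)

# One feature dictionary per MR sequence; repeating this for T1, T2, FLAIR and
# T1C gives the 4 x 1470 = 5880 raw features per patient described above.
features_t1 = extractor.execute("t1.nii.gz", "tumor_mask.nii.gz")  # placeholder paths
```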

Before the extracted features were used, batch effect correction was applied to alleviate batch effects arising from the datasets of different centers [20]. Shapley values, derived from coalitional game theory, address how groups of individuals can collaboratively work towards a common goal [21]. This explanatory method integrates optimal credit allocation with local explanations by calculating Shapley values. In this context, each feature value of a subject is treated as a player in a cooperative game, where the prediction represents the total payout. The significance of a feature, denoted as z, is quantified using its Shapley value, defined as follows:

$$\psi_z = \sum_{A \subseteq F \setminus \{z\}} \frac{|A|!\,(|F|-|A|-1)!}{|F|!}\left[f(A \cup \{z\}) - f(A)\right] \tag{1}$$

where F is the complete set of all features, A represents a subset of F that does not contain the feature z, f(A) is the output of the model to be explained when using only A, and ψz is the Shapley value, i.e., the contribution of feature z. The proposed feature Shapley analysis (FSA) estimates modality weights for fusing features from multiple classifiers based on training prediction performance and Shapley values. Specifically, let us define N classifiers Ci. The predictive performance of each classifier during the training phase is quantified using evaluation metrics, including accuracy, area under the receiver operating characteristic curve (AUC), F1 score, specificity, and sensitivity. These metrics collectively form an evaluation matrix Ei,j (i = 1, 2, …, N; j = 1, 2, …, K), with rows corresponding to the N classifiers and columns to the K evaluation criteria. The modality importance of each classifier is described by Si,k (i = 1, 2, …, N; k = 1, 2, …, M), where M represents the number of modalities. The FSA estimates the fusion weight ωk of each modality by using

$$\omega_k = \frac{\sum_{i=1}^{N} c_i\,\bar{S}_{i,k}}{\sum_{k'=1}^{M}\sum_{i=1}^{N} c_i\,\bar{S}_{i,k'}} \tag{2}$$

where ci and the normalized modality importance S̄i,k are first obtained by the following normalizations:

$$c_i = \frac{E_{min,i}}{E_{max,i} + E_{min,i}} \tag{3}$$
$$\bar{S}_{i,k} = \frac{S_{i,k}}{\sum_{k'=1}^{M} S_{i,k'}} \tag{4}$$

In (3), Emax,i and Emin,i are defined as

$$E_{max,i} = \sqrt{\sum_{j=1}^{K}\left(E_{i,j} - e_{max,j}\right)^2} \tag{5}$$
$$E_{min,i} = \sqrt{\sum_{j=1}^{K}\left(E_{i,j} - e_{min,j}\right)^2} \tag{6}$$

Here, Emax,i and Emin,i are the Euclidean distances between the evaluation vector (Ei,1, …, Ei,K) of classifier i and the vectors emax and emin, whose elements emax,j and emin,j represent the maximum and minimum values of each criterion, respectively, and are defined as follows:

$$e_{max,j} = \max_{1 \le i \le N} E_{i,j} \tag{7}$$
$$e_{min,j} = \min_{1 \le i \le N} E_{i,j} \tag{8}$$

In (4), Si,k is a matrix whose N rows are the modality-level Shapley importance vectors of the N classifiers and whose M columns correspond to the modalities. With the normalized weights ωk, the final fused feature vector is calculated element-wise from the corresponding feature of each modality as:

$$\tilde{x}_{j} = \sum_{k=1}^{M} \omega_k\, x_{j}^{(k)}, \qquad j = 1, 2, \ldots, 1470, \tag{9}$$
where $x_{j}^{(k)}$ denotes the j-th radiomic feature extracted from modality k.

In total, 1470 fused features were obtained for each patient.
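To make the fusion procedure concrete, the following is a minimal sketch of the FSA computation described by Eqs. (2)-(9). How the modality-level importance matrix S is aggregated from per-feature Shapley values (for example, as mean absolute Shapley value per modality), and the small constants added for numerical stability, are assumptions rather than details given in the text.

```python
# Minimal sketch of the FSA fusion weights and feature fusion (Eqs. (2)-(9)).
# E holds the training metrics of N classifiers, S the modality-level Shapley
# importance per classifier; the construction of S is an assumption.
import numpy as np

def fusion_weights(E: np.ndarray, S: np.ndarray) -> np.ndarray:
    """E: (N classifiers, K metrics); S: (N classifiers, M modalities) -> (M,) weights."""
    e_max, e_min = E.max(axis=0), E.min(axis=0)          # best/worst value per criterion, Eqs. (7)-(8)
    E_max = np.linalg.norm(E - e_max, axis=1)            # distance to the ideal metric vector, Eq. (5)
    E_min = np.linalg.norm(E - e_min, axis=1)            # distance to the worst metric vector, Eq. (6)
    c = E_min / (E_max + E_min + 1e-12)                  # classifier quality, Eq. (3)
    S_bar = S / (S.sum(axis=1, keepdims=True) + 1e-12)   # per-classifier modality share, Eq. (4)
    omega = c @ S_bar                                    # quality-weighted modality importance
    return omega / omega.sum()                           # normalized fusion weights, Eq. (2)

def fuse_features(X: np.ndarray, omega: np.ndarray) -> np.ndarray:
    """X: (patients, M modalities, 1470 features) -> (patients, 1470) fused features, Eq. (9)."""
    return np.tensordot(X, omega, axes=([1], [0]))
```

With four modalities, applying fuse_features to the 4 × 1470 per-patient features reproduces the reduction from 5880 raw features to 1470 fused features noted above.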

2.4 Model construction

We implemented a coarse-to-fine feature selection strategy to reduce the dimensionality of the feature space and mitigate the risk of overfitting. Initially, a univariate logistic regression (LR) analysis was conducted to assess the correlation between individual features and glioma grade, and features with statistically significant correlations (p < 0.05) were retained for further analysis. Next, we employed the minimum redundancy maximum relevance (mRMR) feature selection framework to diminish mutual redundancy among features while preserving those with maximum relevance [22]. In the final stage, we utilized an embedded method, multivariate LR with L1 regularization, to identify the most informative features.
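A simplified sketch of this three-step selection is given below. The greedy mutual-information formulation of mRMR (relevance minus mean redundancy), the number of features kept at each stage, and the regularization strength are assumptions, not the authors' settings.

```python
# Simplified sketch of the coarse-to-fine feature selection: univariate
# logistic regression (keep p < 0.05), a greedy mutual-information mRMR step,
# then an L1-regularized multivariate logistic regression.
import numpy as np
import statsmodels.api as sm
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression
from sklearn.linear_model import LogisticRegression

def univariate_filter(X, y, alpha=0.05):
    keep = []
    for j in range(X.shape[1]):
        try:
            p = sm.Logit(y, sm.add_constant(X[:, j])).fit(disp=0).pvalues[1]
            if p < alpha:
                keep.append(j)
        except Exception:                     # skip features whose fit does not converge
            pass
    return np.array(keep)

def mrmr(X, y, n_keep=30):
    relevance = mutual_info_classif(X, y)     # relevance of each feature to the grade
    selected = [int(np.argmax(relevance))]
    while len(selected) < min(n_keep, X.shape[1]):
        rest = [j for j in range(X.shape[1]) if j not in selected]
        redundancy = np.array([
            np.mean([mutual_info_regression(X[:, [j]], X[:, s])[0] for s in selected])
            for j in rest
        ])                                    # mean MI with already-selected features
        selected.append(rest[int(np.argmax(relevance[rest] - redundancy))])
    return np.array(selected)

def l1_embedded(X, y):
    lr = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)
    return np.flatnonzero(lr.coef_[0])        # indices of features with nonzero coefficients
```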

To identify the most effective prediction model, we employed four machine-learning algorithms for constructing the combined models: LR, random forest (RF), support vector machine (SVM), and gradient boosting tree (GBT). It is worth noting that our dataset exhibited a degree of class imbalance, with the size of the minority class approximately one-fourth that of the majority class. To address this imbalance, we applied the synthetic minority oversampling technique [23] for data augmentation, ensuring an equal number of HGG and LGG samples. During the training phase, 10-fold cross-validation was used to determine the hyperparameters of the models.
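A sketch of this step with scikit-learn and imbalanced-learn follows. Keeping SMOTE inside a pipeline so that synthetic samples are generated only within each training fold is our choice, and the hyperparameter grids are placeholders rather than the authors' settings.

```python
# Sketch of model construction: SMOTE oversampling combined with a 10-fold
# cross-validated hyperparameter search for the four learners.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

learners = {
    "LR": (LogisticRegression(max_iter=1000), {"clf__C": [0.1, 1, 10]}),
    "RF": (RandomForestClassifier(), {"clf__n_estimators": [100, 300]}),
    "SVM": (SVC(probability=True), {"clf__C": [0.1, 1, 10]}),
    "GBT": (GradientBoostingClassifier(), {"clf__n_estimators": [100, 300]}),
}

def fit_models(X_train, y_train):
    fitted = {}
    for name, (clf, grid) in learners.items():
        pipe = Pipeline([("smote", SMOTE(random_state=0)), ("clf", clf)])
        search = GridSearchCV(pipe, grid, cv=10, scoring="roc_auc")
        fitted[name] = search.fit(X_train, y_train)
    return fitted
```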

The performance of the predictive models was evaluated using the receiver operating characteristic (ROC) curve, AUC, accuracy, F1 score, sensitivity, and specificity. Both internal and external test cohorts were used to validate the generalizability of the predictive model. All analyses were conducted using the Python programming language (version 3.6.13). The overall workflow is shown in Figure 2.
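The metrics can be computed as sketched below; the bootstrap used here for the 95% confidence interval of the AUC is an assumption about how the intervals were obtained, not a detail stated in the text.

```python
# Sketch of the evaluation metrics (AUC with a bootstrap 95% CI, accuracy,
# F1 score, sensitivity and specificity) for a binary HGG-vs-LGG prediction.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, recall_score, roc_auc_score

def evaluate(y_true, y_prob, threshold=0.5, n_boot=1000, seed=0):
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    y_pred = (y_prob >= threshold).astype(int)
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) == 2:            # both classes needed for an AUC
            aucs.append(roc_auc_score(y_true[idx], y_prob[idx]))
    return {
        "AUC": roc_auc_score(y_true, y_prob),
        "AUC 95% CI": (np.percentile(aucs, 2.5), np.percentile(aucs, 97.5)),
        "Accuracy": accuracy_score(y_true, y_pred),
        "F1": f1_score(y_true, y_pred),
        "Sensitivity": recall_score(y_true, y_pred, pos_label=1),
        "Specificity": recall_score(y_true, y_pred, pos_label=0),
    }
```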

Figure 2. Overview of the modality fusion analysis framework for glioma grading.

3. RESULTS

3.1 Patient characteristics

We utilized the available portion of the BraTS 2020 dataset, reserving 70% for the training set. In the development of our glioma grading model, cross-validation was implemented within a training set comprising 257 patients with gliomas (204 HGGs and 53 LGGs). The remaining 30% of the dataset was designated for internal model testing. For the independent external testing dataset, we retrospectively gathered 136 patients who underwent MRI examinations and received pathological confirmation of gliomas. Each patient contributed four MRI modalities: T1, T2, FLAIR, and T1C. Consequently, a total of 2016 MR images from 504 glioma patients were included in this study. Table 1 provides a summary of the datasets employed.

Table 1. Summary of the dataset used in this study.

Dataset | Source | HGG | LGG | Total
Train set | BraTS2020 | 204 (79.38%) | 53 (20.62%) | 257
Internal test set | BraTS2020 | 88 (79.28%) | 23 (20.72%) | 111
External test set | SYSUCC | 114 (83.82%) | 22 (16.18%) | 136

3.2 Model evaluation

In Figure 3, ROC curves illustrate the performance of all models in the training cohort, internal test cohort, and external test cohort. LR exhibits slightly inferior overall performance. RF demonstrates strong performance across all datasets, achieving an AUC of 0.864 on the external validation set, while SVM attains the highest AUC on the internal validation set (0.951). In addition, we performed ensemble learning over the above models to obtain the best overall model. The integration method was a voting strategy, and the resulting voting ensemble model (VEM) achieved the best AUCs of 0.954 on the internal validation set and 0.856 on the external validation set.
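A voting ensemble of this kind can be assembled with scikit-learn as sketched below; soft (probability-averaged) voting is an assumption, since the text only specifies a voting strategy, and `fitted` refers to the output of the model-construction sketch above.

```python
# Sketch of the voting ensemble model (VEM) built from the four tuned base models.
from sklearn.ensemble import VotingClassifier

def build_vem(fitted):
    """fitted: dict of name -> fitted GridSearchCV from the model-construction sketch."""
    estimators = [(name, search.best_estimator_) for name, search in fitted.items()]
    return VotingClassifier(estimators=estimators, voting="soft")

# Usage: vem = build_vem(fitted); vem.fit(X_train, y_train); vem.predict_proba(X_test)
```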

Figure 3. Receiver operating characteristic (ROC) curves of the prediction performance of the modality SHAP model.

3.3 Fusion interpretability

To visualize the feature fusion process, we generated a cumulative histogram illustrating the fusion of modal features, as depicted in Figure 4. This plot conveys the cumulative SHAP values assigned to features selected from multiple modalities before normalization. The visualization highlights the varying contributions of different modalities to each feature; certain modalities exhibit a significant impact on specific features, while others have a more modest influence.

Figure 4. Histogram and cumulative histogram of the modal weights of contribution among radiomic features.

To quantitatively elucidate the ensemble model's capability in distinguishing glioma grades, we employed the SHAP method to assess feature importance and the contribution of each feature to the grading process. Figure 5 illustrates the SHAP summary plot of the features retained after feature selection, highlighting their relative significance and impact on the model. The y-axis represents multimodal features ranked by importance, while the x-axis depicts SHAP values. Each point on the plot corresponds to a sample's SHAP value, with color coding indicating the magnitude of the feature value. The analysis reveals that features derived from GLDM, first-order statistics, GLRLM, GLCM, and GLSZM, extracted from filtered and transformed MRI images, are critical in differentiating glioma grades. Additionally, certain features show pronounced associations with glioma grade, with higher feature values often associated with a greater likelihood of HGG. For example, a lower value of wavelet-HHL_gldm_LargeDependenceHighGrayLevelEmphasis, synthesized from the four modalities, correlates with a higher SHAP value, suggesting an increased probability of HGG development.

Figure 5. SHAP summary plots of the top 15 features.

To gain a deeper understanding of how interactions between features influence the model output, SHAP dependence contribution plots can unveil the impact of these interactions. We chose the most critical feature (square_glrlm_ShortRunEmphasis) to illustrate this effect. As shown in Fig. 6(a), red points represent patients with higher values of the interacting feature, while blue points represent those with lower values. Fig. 6(a) demonstrates that the SHAP value for higher square_glszm_ZoneEntropy is elevated when square_glrlm_ShortRunEmphasis is less than zero, indicating a lower risk of HGG. Conversely, when square_glrlm_ShortRunEmphasis exceeds one, patients with smaller square_glszm_ZoneEntropy may be more likely to develop HGG. In Fig. 6(b), the SHAP value for square_ngtdm_Busyness increases as square_glrlm_ShortRunEmphasis decreases, signifying a decreased likelihood of developing HGG.
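Plots of this kind can be produced with the shap package as sketched below; the use of TreeExplainer assumes a tree-based final classifier, and `model` and `X_fused` are placeholders for the fitted model and a DataFrame of the selected fused features, neither of which is specified in the text.

```python
# Sketch of Figure-5/6-style SHAP visualizations with the shap package.
import shap

def plot_shap(model, X_fused):
    """model: fitted tree-based classifier; X_fused: DataFrame of selected fused features."""
    explainer = shap.TreeExplainer(model)        # assumes a tree-based model
    shap_values = explainer.shap_values(X_fused)
    if isinstance(shap_values, list):            # some explainers return one array per class
        shap_values = shap_values[1]             # keep attributions for the positive (HGG) class
    # Summary of the most important features (cf. Figure 5).
    shap.summary_plot(shap_values, X_fused, max_display=15)
    # Dependence plot with an interaction feature (cf. Figure 6(a)).
    shap.dependence_plot("square_glrlm_ShortRunEmphasis", shap_values, X_fused,
                         interaction_index="square_glszm_ZoneEntropy")
```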

Figure 6. Scatter plots of the SHAP dependence analysis of fused features. (a) Impact of fused square_glszm_ZoneEntropy and square_glrlm_ShortRunEmphasis on the final model output; (b) impact of fused square_ngtdm_Busyness and square_glrlm_ShortRunEmphasis on the final model output.

3.4 Comparison with different fusion methods

We compared the proposed FSA with the following fusion methods. First, the current mainstream approach of feature concatenation was included. Second, two simple and effective fusion methods, the element-wise maximum and average of features, which are commonly used in the pooling operations of deep learning [24, 25], were also compared. Third, discriminant correlation analysis (DCA) [26], which associates class relationships with the correlation analysis of feature sets, was used to implement feature fusion; DCA maximizes the pairwise correlation between two feature sets while eliminating inter-class correlation and restricting correlation within each class. Finally, two deep-learning fusion methods were implemented for comparison with our proposed method. Cheng et al. proposed a glioma grading model termed the multimodal disentangled variational autoencoder (MDVAE) [27], which integrates complementary information from multimodal MRI images. Liang et al. proposed an association-based fusion method (AF) [28] for multi-modal classification, aimed at integrating complementary information from various modalities to enhance classification performance by leveraging the strengths of each modality. After implementing the comparison methods mentioned above and following the same feature selection and model training procedures, we obtained the comparative results for each model shown in Table 2.
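For reference, the three simple baselines can be expressed as below for a per-patient feature tensor of shape (patients, modalities, features); DCA, MDVAE, and AF are not reproduced here.

```python
# Sketch of the simple fusion baselines compared above: concatenation ("Joint"),
# element-wise maximum ("Max") and element-wise average ("Average") over modalities.
import numpy as np

def fuse_joint(X):    # concatenation: (P, M, F) -> (P, M*F)
    return X.reshape(X.shape[0], -1)

def fuse_max(X):      # element-wise maximum over modalities: (P, M, F) -> (P, F)
    return X.max(axis=1)

def fuse_average(X):  # element-wise mean over modalities: (P, M, F) -> (P, F)
    return X.mean(axis=1)
```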

Table 2. Comparison of results for the different fusion methods.

Method | Set | AUC [95% CI] | Accuracy | F1 score | Specificity | Sensitivity
Joint | Train | 0.997 [0.993-1.000] | 0.981 | 0.988 | 0.962 | 0.965
Joint | Internal test | 0.940 [0.879-1.000] | 0.910 | 0.944 | 0.739 | 0.955
Joint | External test | 0.745 [0.627-0.863] | 0.728 | 0.823 | 0.591 | 0.754
Max | Train | 0.980 [0.959-1.000] | 0.930 | 0.955 | 0.906 | 0.936
Max | Internal test | 0.939 [0.894-0.985] | 0.883 | 0.927 | 0.696 | 0.932
Max | External test | 0.723 [0.599-0.847] | 0.691 | 0.792 | 0.636 | 0.702
Average | Train | 0.982 [0.966-0.999] | 0.946 | 0.966 | 0.868 | 0.966
Average | Internal test | 0.935 [0.872-0.998] | 0.892 | 0.933 | 0.652 | 0.955
Average | External test | 0.706 [0.573-0.840] | 0.728 | 0.826 | 0.500 | 0.772
DCA | Train | 0.991 [0.982-0.999] | 0.965 | 0.978 | 0.906 | 0.980
DCA | Internal test | 0.941 [0.896-0.986] | 0.890 | 0.939 | 0.652 | 0.966
DCA | External test | 0.756 [0.639-0.874] | 0.757 | 0.847 | 0.545 | 0.798
MDVAE | Train | 0.997 [0.992-1.000] | 0.993 | 0.993 | 0.990 | 0.995
MDVAE | Internal test | 0.945 [0.901-0.989] | 0.919 | 0.951 | 0.752 | 0.989
MDVAE | External test | 0.828 [0.696-0.961] | 0.818 | 0.913 | 0.717 | 0.854
AF | Train | 0.997 [0.992-1.000] | 0.995 | 0.995 | 0.995 | 0.995
AF | Internal test | 0.948 [0.898-0.999] | 0.910 | 0.945 | 0.752 | 0.977
AF | External test | 0.839 [0.728-0.951] | 0.837 | 0.870 | 0.755 | 0.851
Proposed | Train | 0.996 [0.991-1.000] | 0.973 | 0.983 | 0.943 | 0.980
Proposed | Internal test | 0.953 [0.906-1.000] | 0.928 | 0.955 | 0.826 | 0.955
Proposed | External test | 0.853 [0.777-0.929] | 0.838 | 0.899 | 0.727 | 0.860

4. DISCUSSION

The core principle of Shapley values involves evaluating the contribution of each feature to the model output by averaging its marginal contributions across all possible feature combinations. This method ensures a fair and comprehensive assessment of each feature's impact on the model's predictions. Different modalities represent information from different sensors or data sources, and the same feature may exhibit varying importance across these modalities. The idea behind using Shapley values for fusion is that they provide a fair and consistent way to integrate the influence of features from each modality into the final model interpretation. We trained multiple models to obtain Shapley values for the modal features of each respective model; employing multiple models mitigates the biases and differences introduced by a single model fit. The metrics obtained during training are assessed by their Euclidean distance from the theoretically optimal metric vector to evaluate the models.

This approach helps overcome issues such as feature missingness or different scales among different modalities, resulting in more consistent and interpretable model explanations. By combining the Shapley values from different modalities, we gain a more comprehensive understanding of the impact of each feature on the model output in the fused interpretation, thereby enhancing our understanding of the overall system behavior. This fusion method provides a powerful and flexible means for the integrated analysis of multimodal data, allowing for a more accurate reflection of the combined characteristics of the data.

In this study, the ensemble model constructed with fused features provides a more interpretable performance for glioma grading. The fused features integrate information from different modalities, offering the model a more comprehensive and rich input, thereby enhancing the accuracy and comprehensiveness of the model’s interpretation for glioma grading. This fusion strategy aids in capturing underlying relationships between different modalities, providing the model with a more consistent and robust feature representation, allowing it to better understand the complex characteristics of gliomas. The fused features also contribute to a more comprehensive understanding of the important features for glioma grading. By analyzing the model’s contribution to each fused feature, we can identify which features have the most significant impact on different grades. This in-depth understanding helps reveal the role of features from different modalities in grading and provides valuable clues for further medical research and diagnosis. Notably, T1C, displaying enhanced clinical information due to the contrast agent, is prominently featured in the plot, with each T1C feature playing a substantial role in the amalgamation of modalities.

Through the interpretability of the ensemble model, we can clearly identify the contributions of each fused feature to different grades. The interpretability of the fused features enables us to better understand the basis of the model for glioma grading. This clear interpretation helps doctors, researchers, and decision-makers comprehend the model’s decision-making process, thereby increasing trust in the grading results. In clinical practice, interpretability of model decisions is crucial, especially in the medical field, as it can provide support to clinicians, making it easier for them to accept the model’s recommendations. Therefore, the role of fused features in enhancing model interpretability is crucial, providing more reliable support for medical decision-making.

The BraTS data used for training were obtained from various scanners, with different clinical protocols, and from multiple institutions (n = 19). The fused features derived from these data therefore have greater applicability. The generalizability of this method gives it broad prospects for application in multiple domains: different data sources, multimodal information, and data from different time points can all be fused using this approach, providing a universal and powerful tool for complex multimodal learning tasks. The widespread application of this method will further expand the research field of multi-view learning, offering new possibilities for future interdisciplinary research and practical applications.

This approach is not only limited to glioma grading in MRI multimodal views but can also be widely applied to other multi-view learning tasks. In comparison to traditional single-view learning, this method can effectively handle multimodal information from different sensors or data sources, providing a more comprehensive data perspective and offering the model a more accurate and comprehensive feature representation. In the field of medical imaging, beyond multimodal views in MRI, this method is equally applicable to other medical image data, such as CT images. It demonstrates strong versatility for information from the same lesion at different time points, extracting additional information between modalities, thereby aiding in improving the accuracy and comprehensiveness of lesion understanding. In the biomedical field, it can also be applied to signal data from different frequency bands in electrocardiography, effectively integrating information from these frequency bands to enhance the diagnostic performance of models for conditions like heart diseases[29]. Additionally, this method can be applied in remote sensing image analysis, handling remote sensing images from multiple sensors to help models better capture the diversity of surface changes [30].

While our proposed method has achieved significant interpretability and performance improvement in glioma grading tasks, it is important to recognize some limitations. Firstly, our interpretability analysis primarily focuses on glioma grading tasks, and further validation is needed for its adaptability to other tasks. The data characteristics and requirements of different tasks may result in differences in the model’s interpretability performance. Therefore, careful evaluation is necessary for applying this method to other medical imaging tasks or different domains. Secondly, despite using multi-center, multi-institutional BraTS data for model construction and incorporating an external validation set during model training, more validation is still needed to assess the robustness of our proposed algorithm. In real clinical scenarios, medical imaging data may be influenced by different devices, scanning parameters, and data acquisition conditions, making the robustness of the model crucial for widespread application. Future work should involve expanding the validation dataset and conducting diversity testing to evaluate the model’s performance in various contexts. Additionally, our fusion method operates at the feature level rather than extracting information from the hierarchical structure of the source data. This design decision may lead to the model losing some crucial detailed information, necessitating further research into more fine-grained fusion strategies to overcome this potential limitation. Lastly, although we obtained a feature fusion table, which is relatively straightforward for industrial use, the modality fusion using multiple classifiers on a per-feature basis may incur some time overhead. Future work could explore more efficient implementation approaches to enhance the scalability and practical applicability of the method.

5. CONCLUSION

In conclusion, our research provides a novel solution for multimodal medical image fusion analysis. However, it is crucial to acknowledge the existing limitations and work towards further refinement and improvement in future research. This will help address the specific requirements of different tasks and real-world application scenarios.

REFERENCES

[1] Ostrom QT, Gittleman H, Liao P, "CBTRUS Statistical Report: Primary Brain and Central Nervous System Tumors Diagnosed in the United States in 2007-2011," Neuro-Oncol, 16(suppl 4), iv1-iv63 (2014). https://doi.org/10.1093/neuonc/nou223
[2] Louis DN, Perry A, Wesseling P, "The 2021 WHO Classification of Tumors of the Central Nervous System: a summary," Neuro-Oncol, 23, 1231-1251 (2021). https://doi.org/10.1093/neuonc/noab106
[3] Hu H, Mu Q, Bao Z, "Mutational Landscape of Secondary Glioblastoma Guides MET-Targeted Trial in Brain Tumor," Cell, 175, 1665-1678 (2018). https://doi.org/10.1016/j.cell.2018.09.038
[4] Jian A, Jang K, Manuguerra M, Liu S, Magnussen J, Di Ieva A, "Machine Learning for the Prediction of Molecular Markers in Glioma on Magnetic Resonance Imaging: A Systematic Review and Meta-Analysis," Neurosurgery, 89, 31-44 (2021). https://doi.org/10.1093/neuros/nyab103
[5] Acharya S, Liu J-F, Tatevossian RG, "Risk stratification in pediatric low-grade glioma and glioneuronal tumor treated with radiation therapy: an integrated clinicopathologic and molecular analysis," Neuro-Oncol, 22, 1203-1213 (2020). https://doi.org/10.1093/neuonc/noaa031
[6] Tian Q, Yan L-F, Zhang X, "Radiomics strategy for glioma grading using texture features from multiparametric MRI: Radiomics Approach for Glioma Grading," J Magn Reson Imaging, 48, 1518-1528 (2018). https://doi.org/10.1002/jmri.v48.6
[7] Lambin P, Rios-Velazquez E, Leijenaar R, "Radiomics: Extracting more information from medical images using advanced feature analysis," Eur J Cancer, 48, 441-446 (2012). https://doi.org/10.1016/j.ejca.2011.11.036
[8] Aerts HJWL, Velazquez ER, Leijenaar RTH, "Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach," Nat Commun, 5, 4006 (2014). https://doi.org/10.1038/ncomms5006
[9] Gillies RJ, Kinahan PE, Hricak H, "Radiomics: Images Are More than Pictures, They Are Data," Radiology, 278, 563-577 (2016). https://doi.org/10.1148/radiol.2015151169
[10] Choi YS, Bae S, Chang JH, "Fully automated hybrid approach to predict the IDH mutation status of gliomas via deep learning and radiomics," Neuro-Oncol, 23, 304-313 (2021). https://doi.org/10.1093/neuonc/noaa177
[11] Li Y, Liu Y, Liang Y, "Radiomics can differentiate high-grade glioma from brain metastasis: a systematic review and meta-analysis," Eur Radiol, 32, 8039-8051 (2022). https://doi.org/10.1007/s00330-022-08828-x
[12] Cheng J, Liu J, Yue H, Bai H, Pan Y, Wang J, "Prediction of Glioma Grade using Intratumoral and Peritumoral Radiomic Features from Multiparametric MRI Images," IEEE/ACM Trans Comput Biol Bioinform, 1-1 (2020).
[13] Ma L, Xiao Z, Li K, Li S, Li J, Yi X, "Game theoretic interpretability for learning based preoperative gliomas grading," Future Gener Comput Syst, 112, 1-10 (2020). https://doi.org/10.1016/j.future.2020.04.038
[14] Ding J, Zhao R, Qiu Q, "Developing and validating a deep learning and radiomic model for glioma grading using multiplanar reconstructed magnetic resonance contrast-enhanced T1-weighted imaging: a robust, multi-institutional study," Quant Imaging Med Surg, 12, 1517-1528 (2022). https://doi.org/10.21037/qims
[15] Amini M, Nazari M, Shiri I, "Multi-level multi-modality (PET and CT) fusion radiomics: prognostic modeling for non-small cell lung carcinoma," Phys Med Biol, 66, 205017 (2021). https://doi.org/10.1088/1361-6560/ac287d
[16] Haghighat M, Abdel-Mottaleb M, Alhalabi W, "Discriminant correlation analysis for feature level fusion with application to multimodal biometrics," in 2016 IEEE Int Conf Acoust Speech Signal Process (ICASSP), 1866-1870, IEEE, Shanghai (2016).
[17] Menze BH, Jakab A, Bauer S, "The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)," IEEE Trans Med Imaging, 34, 1993-2024 (2015). https://doi.org/10.1109/TMI.2014.2377694
[18] Bakas S, Akbari H, Sotiras A, "Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features," Sci Data, 4, 170117 (2017). https://doi.org/10.1038/sdata.2017.117
[19] Evans AC, Janke AL, Collins DL, Baillet S, "Brain templates and atlases," NeuroImage, 62, 911-922 (2012). https://doi.org/10.1016/j.neuroimage.2012.01.024
[20] Johnson WE, Li C, Rabinovic A, "Adjusting batch effects in microarray expression data using empirical Bayes methods," Biostatistics, 8, 118-127 (2007). https://doi.org/10.1093/biostatistics/kxj037
[21] Lundberg SM, Lee S-I, "A unified approach to interpreting model predictions," Adv Neural Inf Process Syst, 30 (2017).
[22] Peng H, Long F, Ding C, "Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy," IEEE Trans Pattern Anal Mach Intell, 27, 1226-1238 (2005). https://doi.org/10.1109/TPAMI.2005.159
[23] Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP, "SMOTE: Synthetic Minority Over-sampling Technique," J Artif Intell Res, 16, 321-357 (2002). https://doi.org/10.1613/jair.953
[24] Lecun Y, "Gradient-Based Learning Applied to Document Recognition," Proc IEEE, 86 (1998). https://doi.org/10.1109/5.726791
[25] Krizhevsky A, Sutskever I, Hinton GE, "ImageNet classification with deep convolutional neural networks," Commun ACM, 60, 84-90 (2017). https://doi.org/10.1145/3065386
[26] Haghighat M, Abdel-Mottaleb M, Alhalabi W, "Discriminant Correlation Analysis: Real-Time Feature Level Fusion for Multimodal Biometric Recognition," IEEE Trans Inf Forensics Secur, 11, 1984-1996 (2016). https://doi.org/10.1109/TIFS.2016.2569061
[27] Cheng J, Gao M, Liu J, "Multimodal disentangled variational autoencoder with game theoretic interpretability for glioma grading," IEEE J Biomed Health Inform, 26(2), 673-684 (2021). https://doi.org/10.1109/JBHI.2021.3095476
[28] Liang X, Qian Y, Guo Q, Cheng H, Liang J, "AF: An association-based fusion method for multi-modal classification," IEEE Trans Pattern Anal Mach Intell, 44(12), 9236-9254 (2021). https://doi.org/10.1109/TPAMI.2021.3125995
[29] Kaplan Berkaya S, Uysal AK, Sora Gunal E, Ergin S, Gunal S, Gulmezoglu MB, "A survey on ECG analysis," Biomed Signal Process Control, 43, 216-235 (2018). https://doi.org/10.1016/j.bspc.2018.03.003
[30] Ghassemian H, "A review of remote sensing image fusion methods," Inf Fusion, 32, 75-89 (2016). https://doi.org/10.1016/j.inffus.2016.03.003
KEYWORDS: Feature fusion, Magnetic resonance imaging, Data modeling, Radiomics