Shunchao Guo1,2, Lihui Wang1, Qijian Chen1, Li Wang1, Jian Zhang1, Yuemin Zhu3.
Abstract
Purpose: Glioma is the most common primary brain tumor, with varying degrees of aggressiveness and prognosis. Accurate glioma classification is therefore critical for treatment planning and prognosis prediction. The main purpose of this study is to design a novel, effective algorithm that further improves the performance of glioma subtype classification using multimodal MRI images. Method: MRI images of four modalities (T1, T2, T1ce, and fluid-attenuated inversion recovery (FLAIR)) were collected for 221 glioma patients from the Computational Precision Medicine: Radiology-Pathology 2020 challenge, to classify astrocytoma, oligodendroglioma, and glioblastoma. We propose a multimodal MRI image decision fusion-based network (MMIDFNet) for improving glioma classification accuracy. First, the MRI images of each modality were fed into a pre-trained tumor segmentation model to delineate the tumor lesion regions. The whole tumor regions were then centrally clipped from the original MRI images, followed by max-min normalization. Subsequently, a deep learning network was designed based on a unified DenseNet structure, which extracts features through a series of dense blocks; two fully connected layers then map the features onto the three glioma subtypes. During the training stage, the tumor-segmented images of each modality were used to train the network until it reached its best accuracy on our testing set. During the inference stage, a linear weighted module based on a decision fusion strategy was applied to assemble the predicted probabilities of the models pre-trained in the training stage. Finally, the performance of our method was evaluated in terms of accuracy, area under the curve (AUC), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), among other metrics.
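The decision fusion step described in the abstract, a linear weighted combination of the per-modality predicted probabilities, can be sketched as follows. This is a minimal illustration; the weights and probability values below are hypothetical placeholders, not the authors' learned values.

```python
import numpy as np

def decision_fusion(prob_list, weights):
    """Linearly combine per-modality predicted probabilities.

    prob_list: list of (n_samples, n_classes) softmax outputs,
               one array per MRI modality (T1, T2, T1ce, FLAIR).
    weights:   one non-negative weight per modality, summing to 1.
    Returns the fused (n_samples, n_classes) probabilities.
    """
    weights = np.asarray(weights, dtype=float)
    return sum(w * p for w, p in zip(weights, prob_list))

# Hypothetical softmax outputs of four unimodal models for two patients,
# over three classes (astrocytoma, oligodendroglioma, glioblastoma).
probs = [np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]]),
         np.array([[0.5, 0.4, 0.1], [0.1, 0.6, 0.3]]),
         np.array([[0.7, 0.2, 0.1], [0.2, 0.4, 0.4]]),
         np.array([[0.4, 0.4, 0.2], [0.3, 0.3, 0.4]])]
fused = decision_fusion(probs, weights=[0.2, 0.2, 0.4, 0.2])
pred = fused.argmax(axis=1)  # fused class predictions
```

Because the weights sum to 1 and each model outputs a valid probability distribution, the fused rows also sum to 1.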
Keywords: decision fusion; deep learning; glioma classification; multimodal MRI images; tumor segmentation
Year: 2022 PMID: 35280828 PMCID: PMC8907622 DOI: 10.3389/fonc.2022.819673
Source DB: PubMed Journal: Front Oncol ISSN: 2234-943X Impact factor: 6.244
Figure 1 The structure of our proposed MMIDFNet.
Figure 2 An example of a glioma patient on multimodal MRI images (patient ID: CPM19_CBICA_AAB_1, glioblastoma). (A) Original images. (B) Tumor segmentation on panel (A). (C) Ground truth on panel (A). (D) Images centrally clipped from panel (A) based on panel (B), followed by normalization.
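The preprocessing in panel (D), clipping the whole tumor region from the original image and applying max-min normalization, might look like this in outline. This is a generic sketch; the `margin` parameter and the exact cropping rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def crop_and_normalize(image, mask, margin=8):
    """Clip the whole-tumor region from an MRI array and apply
    max-min normalization to [0, 1].

    image: MRI slice or volume; mask: binary tumor segmentation
    of the same shape (as produced by the segmentation model).
    """
    coords = np.argwhere(mask > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, mask.shape)
    patch = image[tuple(slice(a, b) for a, b in zip(lo, hi))].astype(float)
    rng = patch.max() - patch.min()  # max-min normalization
    return (patch - patch.min()) / rng if rng > 0 else np.zeros_like(patch)

# Toy example: a 20x20 "image" with a 5x5 tumor mask.
img = np.arange(400, dtype=float).reshape(20, 20)
msk = np.zeros((20, 20))
msk[5:10, 5:10] = 1
patch = crop_and_normalize(img, msk, margin=2)
```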
Figure 3 The receiver operating characteristic (ROC) curves of unimodal prediction models on three validation folds using our MMIDFNet method. (A) T1 modality. (B) T2 modality. (C) T1ce modality. (D) FLAIR modality.
Three-fold cross-validation performance of unimodal prediction models using radiomics and our proposed MMIDFNet.
| Methods | Modality | Fold | ACC | AUC | SEN | SPE | PPV | NPV |
|---|---|---|---|---|---|---|---|---|
| Radiomics | T1 | 1 | 0.730 | 0.734 | 0.546 | 0.792 | 0.807 | 0.865 |
| | | 2 | 0.689 | 0.755 | 0.522 | 0.765 | 0.748 | 0.844 |
| | | 3 | 0.699 | 0.737 | 0.610 | 0.792 | 0.682 | 0.812 |
| | | Mean | **0.706** | **0.742** | **0.559** | **0.783** | **0.746** | **0.840** |
| | | 95% CI | [0.672, 0.740] | [0.724, 0.760] | [0.487, 0.632] | [0.758, 0.808] | [0.646, 0.846] | [0.798, 0.883] |
| | T2 | 1 | 0.703 | 0.775 | 0.554 | 0.787 | 0.657 | 0.824 |
| | | 2 | 0.743 | 0.827 | 0.596 | 0.816 | 0.722 | 0.875 |
| | | 3 | 0.712 | 0.712 | 0.560 | 0.820 | 0.615 | 0.843 |
| | | Mean | **0.719** | **0.771** | **0.570** | **0.808** | **0.665** | **0.847** |
| | | 95% CI | [0.686, 0.753] | [0.679, 0.863] | [0.534, 0.606] | [0.779, 0.836] | [0.578, 0.751] | [0.806, 0.889] |
| | T1ce | 1 | 0.838 | 0.908 | 0.706 | 0.890 | 0.867 | 0.928 |
| | | 2 | 0.784 | 0.841 | 0.650 | 0.854 | 0.770 | 0.905 |
| | | 3 | 0.822 | 0.856 | 0.756 | 0.903 | 0.752 | 0.902 |
| | | Mean | **0.815** | **0.868** | **0.704** | **0.882** | **0.796** | **0.912** |
| | | 95% CI | [0.770, 0.859] | [0.812, 0.925] | [0.619, 0.789] | [0.842, 0.923] | [0.697, 0.895] | [0.889, 0.934] |
| | FLAIR | 1 | 0.730 | 0.788 | 0.570 | 0.792 | 0.763 | 0.881 |
| | | 2 | 0.685 | 0.718 | 0.557 | 0.787 | 0.625 | 0.813 |
| | | 3 | 0.743 | 0.740 | 0.585 | 0.810 | 0.756 | 0.888 |
| | | Mean | **0.719** | **0.749** | **0.571** | **0.796** | **0.715** | **0.861** |
| | | 95% CI | [0.671, 0.768] | [0.691, 0.806] | [0.548, 0.593] | [0.777, 0.816] | [0.590, 0.839] | [0.794, 0.927] |
| MMIDFNet | T1 | 1 | 0.757 | 0.724 | 0.572 | 0.821 | 0.813 | 0.894 |
| | | 2 | 0.689 | 0.696 | 0.509 | 0.767 | 0.663 | 0.890 |
| | | 3 | 0.712 | 0.780 | 0.516 | 0.777 | 0.701 | 0.873 |
| | | Mean | **0.719** | **0.733** | **0.532** | **0.788** | **0.726** | **0.886** |
| | | 95% CI | [0.664, 0.775] | [0.665, 0.802] | [0.477, 0.588] | [0.742, 0.834] | [0.601, 0.850] | [0.868, 0.904] |
| | T2 | 1 | 0.743 | 0.835 | 0.542 | 0.794 | 0.742 | 0.907 |
| | | 2 | 0.730 | 0.749 | 0.560 | 0.820 | 0.687 | 0.854 |
| | | 3 | 0.726 | 0.788 | 0.591 | 0.822 | 0.642 | 0.853 |
| | | Mean | **0.733** | **0.791** | **0.564** | **0.812** | **0.690** | **0.871** |
| | | 95% CI | [0.719, 0.747] | [0.722, 0.860] | [0.525, 0.604] | [0.787, 0.837] | [0.610, 0.770] | [0.822, 0.921] |
| | T1ce | 1 | 0.838 | 0.907 | 0.764 | 0.885 | 0.842 | 0.909 |
| | | 2 | 0.824 | 0.885 | 0.667 | 0.897 | 0.749 | 0.934 |
| | | 3 | 0.836 | 0.883 | 0.694 | 0.900 | 0.859 | 0.929 |
| | | Mean | **0.833** | **0.892** | **0.708** | **0.894** | **0.817** | **0.924** |
| | | 95% CI | [0.821, 0.845] | [0.870, 0.913] | [0.628, 0.788] | [0.881, 0.907] | [0.722, 0.911] | [0.903, 0.945] |
| | FLAIR | 1 | 0.770 | 0.782 | 0.669 | 0.855 | 0.755 | 0.866 |
| | | 2 | 0.703 | 0.750 | 0.537 | 0.767 | 0.813 | 0.896 |
| | | 3 | 0.753 | 0.752 | 0.640 | 0.852 | 0.673 | 0.869 |
| | | Mean | **0.742** | **0.761** | **0.615** | **0.825** | **0.747** | **0.877** |
| | | 95% CI | [0.686, 0.798] | [0.733, 0.790] | [0.504, 0.726] | [0.745, 0.905] | [0.634, 0.860] | [0.851, 0.903] |
ACC, accuracy; SEN, sensitivity; SPE, specificity; PPV, positive predictive value; NPV, negative predictive value; CI, confidence interval; Bold Value, average value of 3 folds.
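For a three-class problem such as this one, metrics like SEN, SPE, PPV, and NPV are typically computed one-vs-rest for each class and then macro-averaged. A generic sketch (not the authors' evaluation code) under that assumption:

```python
import numpy as np

def ovr_metrics(cm):
    """Macro-averaged one-vs-rest metrics from a confusion matrix.

    cm[i, j] = number of samples of true class i predicted as class j.
    Returns (ACC, SEN, SPE, PPV, NPV), the latter four averaged
    over classes.
    """
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    tp = np.diag(cm)                 # correct predictions per class
    fn = cm.sum(axis=1) - tp         # missed samples of each class
    fp = cm.sum(axis=0) - tp         # wrong predictions into each class
    tn = total - tp - fn - fp
    acc = tp.sum() / total           # overall accuracy
    sen = (tp / (tp + fn)).mean()    # sensitivity (recall)
    spe = (tn / (tn + fp)).mean()    # specificity
    ppv = (tp / (tp + fp)).mean()    # positive predictive value
    npv = (tn / (tn + fn)).mean()    # negative predictive value
    return acc, sen, spe, ppv, npv

# Toy confusion matrix for three classes.
cm = np.array([[8, 2, 0], [1, 7, 2], [0, 1, 9]])
acc, sen, spe, ppv, npv = ovr_metrics(cm)
```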
Figure 4 The receiver operating characteristic (ROC) curves of multimodal prediction models on three validation folds in our study. (A) Radiomics model. (B) Data fusion model. (C) Decision fusion model.
Three-fold cross-validation performance of multimodal prediction models using radiomics, data fusion strategy, and our proposed MMIDFNet methods.
| Models | Fold | ACC | AUC | SEN | SPE | PPV | NPV | Kappa |
|---|---|---|---|---|---|---|---|---|
| Radiomics | 1 | 0.851 | 0.874 | 0.702 | 0.885 | 0.905 | 0.945 | 0.699 |
| | 2 | 0.824 | 0.875 | 0.705 | 0.897 | 0.793 | 0.922 | 0.672 |
| | 3 | 0.836 | 0.862 | 0.706 | 0.914 | 0.724 | 0.927 | 0.695 |
| | Mean | **0.837** | **0.870** | **0.704** | **0.899** | **0.807** | **0.931** | **0.689** |
| | 95% CI | [0.815, 0.859] | [0.859, 0.882] | [0.701, ] | [0.875, 0.922] | [0.661, 0.954] | [0.912, 0.951] | [0.665, 0.712] |
| Data fusion | 1 | 0.865 | 0.898 | 0.732 | 0.913 | 0.890 | 0.943 | 0.740 |
| | 2 | 0.838 | 0.879 | 0.744 | 0.926 | 0.741 | 0.922 | 0.713 |
| | 3 | 0.836 | 0.871 | 0.717 | 0.908 | 0.745 | 0.921 | 0.695 |
| | Mean | **0.846** | **0.883** | **0.731** | **0.916** | **0.792** | **0.929** | **0.716** |
| | 95% CI | [0.820, 0.872] | [0.860, 0.905] | [0.709, ] | [0.901, 0.931] | [0.656, 0.928] | [0.909, 0.949] | [0.680, 0.752] |
| Decision fusion | 1 | 0.892 | 0.902 | 0.781 | 0.919 | 0.924 | 0.959 | 0.789 |
| | 2 | 0.865 | 0.909 | 0.741 | 0.926 | 0.821 | 0.949 | 0.749 |
| | 3 | 0.877 | 0.896 | 0.795 | 0.946 | 0.842 | 0.939 | 0.780 |
| | Mean | **0.878** | **0.902** | **0.772** | **0.930** | **0.862** | **0.949** | **0.773** |
| | 95% CI | [0.856, 0.900] | [0.892, 0.913] | [0.727, ] | [0.908, 0.953] | [0.775, 0.949] | [0.933, 0.965] | [0.739, 0.806] |
ACC, accuracy; SEN, sensitivity; SPE, specificity; PPV, positive predictive value; NPV, negative predictive value; CI, confidence interval; Bold Value, average value of 3 folds.
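The Kappa column reports Cohen's kappa, which corrects accuracy for chance agreement: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement (accuracy) and p_e is the agreement expected from the row and column marginals. A minimal sketch:

```python
import numpy as np

def cohens_kappa(cm):
    """Cohen's kappa from a confusion matrix cm (true x predicted)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed agreement
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Toy binary example: 80% accuracy with balanced marginals.
k = cohens_kappa(np.array([[8, 2], [2, 8]]))  # -> 0.6
```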
Figure 5 Comparison of three-fold cross-validation performance of the three multimodal prediction models.
Performance comparison of other state-of-the-art studies with ours.
| Metrics | Pei et al. | Xue et al. | Pei et al. | Yin et al. | Radiomics | Data fusion | Decision fusion |
|---|---|---|---|---|---|---|---|
| F1_score | 0.829 | 0.771 | 0.771 | 0.857 | 0.837 | 0.846 | |
| Balanced_Acc | 0.749 | NA | 0.698 | 0.820 | 0.704 | 0.731 | 0.772 |
| Kappa | 0.715 | NA | 0.627 | 0.767 | 0.689 | 0.716 | 0.773 |
CI, confidence interval; NA, not available; Bold Value, best value of the metric.