| Literature DB >> 35591941 |
Peiying Guo, Longfei Li, Cheng Li, Weijian Huang, Guohua Zhao, Shanshan Wang, Meiyun Wang, Yusong Lin.
Abstract
Accurate preoperative glioma grading is essential for clinical decision-making and prognostic evaluation. Multiparametric magnetic resonance imaging (mpMRI) is an important diagnostic tool for glioma patients because of its superior ability to noninvasively describe contextual information in tumor tissues. Previous studies achieved promising glioma grading results on mpMRI data with convolutional neural network (CNN)-based methods. However, these studies have not fully exploited and effectively fused the rich tumor contextual information provided by the magnetic resonance (MR) images acquired with different imaging parameters. In this paper, a novel graph convolutional network (GCN)-based mpMRI information fusion module (named MMIF-GCN) is proposed to comprehensively fuse the grading-relevant information in mpMRI. Specifically, a graph is constructed according to the characteristics of mpMRI data: the vertices are the glioma grading features of different slices extracted by the CNN, and the edges reflect the distances between the slices in the 3D volume. The proposed method updates the information in each vertex by considering the interaction between adjacent vertices, and the final glioma grade is obtained by combining the fused information of all vertices. The MMIF-GCN module introduces an additional nonlinear representation learning step into the mpMRI information fusion process while maintaining the positional relationship between adjacent slices. Experiments were conducted on two datasets: a public one (BraTS2020) and a private one (GliomaHPPH2018). The results indicate that the proposed method can effectively fuse the grading information provided in mpMRI data for better glioma grading performance.
Year: 2022 PMID: 35591941 PMCID: PMC9113909 DOI: 10.1155/2022/7315665
Source DB: PubMed Journal: J Healthc Eng ISSN: 2040-2295 Impact factor: 3.822
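The abstract defines the graph over MRI slices: vertices are per-slice CNN features, and edge weights reflect the distances between slices in the 3D volume, so neighboring slices interact most strongly. A minimal sketch of one plausible reading of that construction is below; the Gaussian weighting, `sigma` parameter, and function name are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def build_slice_graph(num_slices, sigma=1.0):
    """Build a weighted adjacency matrix over MRI slices.

    Vertices are slices of a 3D volume; edge weights decay with the
    distance between slice indices, so adjacent slices interact most.
    Illustrative sketch only; the paper's exact edge definition may differ.
    """
    idx = np.arange(num_slices)
    dist = np.abs(idx[:, None] - idx[None, :])   # |i - j| slice distance
    adj = np.exp(-dist**2 / (2 * sigma**2))      # Gaussian similarity weight
    np.fill_diagonal(adj, 0.0)                   # self-loops added later
    return adj

adj = build_slice_graph(5)
print(adj.shape)   # (5, 5)
```

Adjacent slices (distance 1) get a larger edge weight than slices two apart, which is what lets the GCN preserve the positional relationship between slices during fusion.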
Figure 1. Overview of the proposed framework.
Figure 2. The process of the graph convolution operation.
Figure 3. Example images of the four MRI sequences from the BraTS2020 and GliomaHPPH2018 datasets.
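The graph convolution operation of Figure 2 updates each vertex from its neighbors. A minimal sketch using the standard normalized propagation rule H' = ReLU(D^-1/2 (A + I) D^-1/2 H W) is below; this common GCN formulation is an assumption on my part, and the paper's normalization details may differ.

```python
import numpy as np

def gcn_layer(features, adj, weight):
    """One graph-convolution step over slice features.

    features: (num_slices, feat_dim) per-slice CNN features (vertices).
    adj:      (num_slices, num_slices) slice-distance adjacency.
    weight:   (feat_dim, out_dim) learnable projection.
    Standard symmetric-normalized propagation; a sketch, not the
    authors' exact module.
    """
    a_hat = adj + np.eye(adj.shape[0])               # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))    # D^-1/2
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ features @ weight, 0.0)  # ReLU

rng = np.random.default_rng(0)
h = rng.standard_normal((5, 16))    # 5 slices, 16-dim CNN features
w = rng.standard_normal((16, 8))
out = gcn_layer(h, np.ones((5, 5)) - np.eye(5), w)
print(out.shape)   # (5, 8)
```

Stacking such layers (the GCN iterations varied in Figure 6) lets information propagate beyond immediate slice neighbors before the fused vertex features are combined for the final grade.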
Demonstration that the graph convolution module can fuse contextual information in MRI acquired with different imaging parameters to improve the performance of the CNN models.
| Dataset | Method | GoogLeNet-Inceptionv3 (AUC / ACC) | EfficientNet-b3 (AUC / ACC) | ResNet-34 (AUC / ACC) | AlexNet (AUC / ACC) | VGGNet-16 (AUC / ACC) |
|---|---|---|---|---|---|---|
| BraTS2020 | T1CE | 0.911 / 0.893 | 0.921 / 0.893 | 0.932 / 0.907 | 0.910 / 0.880 | 0.915 / 0.867 |
| | T1 | 0.851 / 0.813 | 0.884 / 0.867 | 0.896 / 0.840 | 0.804 / 0.800 | 0.854 / 0.827 |
| | T2 | 0.893 / 0.867 | 0.892 / 0.853 | 0.849 / 0.840 | 0.867 / 0.853 | 0.876 / 0.840 |
| | FLAIR | 0.897 / 0.827 | 0.885 / 0.853 | 0.894 / 0.853 | 0.802 / 0.827 | 0.889 / 0.840 |
| GliomaHPPH2018 | T1CE | 0.810 / 0.787 | 0.952 / 0.851 | 0.842 / 0.787 | 0.802 / 0.809 | 0.867 / 0.787 |
| | T1 | 0.898 / 0.872 | 0.923 / 0.872 | 0.921 / 0.894 | 0.729 / 0.766 | 0.933 / 0.872 |
| | T2 | 0.692 / 0.766 | 0.798 / 0.745 | 0.729 / 0.745 | 0.629 / 0.660 | 0.856 / 0.766 |
| | FLAIR | 0.873 / 0.809 | 0.967 / 0.872 | 0.931 / 0.872 | 0.838 / 0.766 | 0.879 / 0.851 |
Note. "G" denotes fusing context information with the GCN. The results of our study are shown in bold.
Comparison of the context-information fusion methods based on graph convolution and 3D convolution.
| Dataset | Method | T1CE (AUC / ACC) | T1 (AUC / ACC) | T2 (AUC / ACC) | FLAIR (AUC / ACC) |
|---|---|---|---|---|---|
| BraTS2020 | 2D-ResNet | 0.932 / 0.907 | 0.896 / 0.840 | 0.849 / 0.840 | 0.894 / 0.853 |
| | 3D-ResNet | 0.939 / 0.920 | 0.905 / 0.853 | 0.860 / 0.880 | 0.853 / 0.867 |
| GliomaHPPH2018 | 2D-ResNet | 0.842 / 0.787 | 0.921 / 0.894 | 0.729 / 0.745 | 0.931 / 0.872 |
| | 3D-ResNet | 0.844 / 0.809 | 0.960 / 0.894 | 0.750 / 0.766 | 0.923 / 0.872 |
Comparison of three methods for fusing mpMRI context and multiparameter information on the two datasets.
| Dataset | Method (Base-ResNet) | AUC | ACC |
|---|---|---|---|
| BraTS2020 | N(1) | 0.949 | 0.920 |
| | N(2) | 0.970 | 0.920 |
| GliomaHPPH2018 | N(1) | 0.965 | 0.894 |
| | N(2) | 0.969 | 0.936 |
The results of our study are shown in bold.
Figure 4. Methodology for realizing mpMRI context and multiparameter information fusion simultaneously. (a) N(1). (b) N(2).
Figure 5. Classification accuracy on the validation set under different CNN models and GCN vertex feature dimensionalities. (a) BraTS2020 dataset. (b) GliomaHPPH2018 dataset.
Figure 6. Classification accuracy on the validation set under different CNN models and numbers of GCN iterations. (a) BraTS2020 dataset. (b) GliomaHPPH2018 dataset.