| Literature DB >> 35571867 |
Qianqian Wang, Long Li, Lishan Qiao, Mingxia Liu.
Abstract
Major depressive disorder (MDD) is one of the most common mental health disorders and can affect people's sleep, mood, appetite, and behavior. Multimodal neuroimaging data, such as functional and structural magnetic resonance imaging (MRI) scans, have been widely used in computer-aided detection of MDD. However, previous studies usually treat these two modalities separately, without considering their potentially complementary information. Even though a few studies propose integrating the two modalities, they usually suffer from significant inter-modality data heterogeneity. In this paper, we propose an adaptive multimodal neuroimage integration (AMNI) framework for automated MDD detection based on functional and structural MRIs. The AMNI framework consists of four major components: (1) a graph convolutional network to learn feature representations of functional connectivity networks derived from functional MRIs, (2) a convolutional neural network to learn features of T1-weighted structural MRIs, (3) a feature adaptation module to alleviate inter-modality differences, and (4) a feature fusion module to integrate the feature representations extracted from the two modalities for classification. To the best of our knowledge, this is among the first attempts to adaptively integrate functional and structural MRIs for neuroimaging-based MDD analysis by explicitly alleviating inter-modality heterogeneity. Extensive evaluations are performed on 533 subjects with resting-state functional MRI and T1-weighted MRI, with results suggesting the efficacy of the proposed method.
Keywords: feature adaptation; major depressive disorder; multimodal data fusion; resting-state functional MRI; structural MRI
Year: 2022 PMID: 35571867 PMCID: PMC9100686 DOI: 10.3389/fninf.2022.856175
Source DB: PubMed Journal: Front Neuroinform ISSN: 1662-5196 Impact factor: 3.739
Figure 1. Illustration of the proposed adaptive multimodal neuroimage integration (AMNI) framework, including (1) a graph convolutional network (GCN) for extracting features of functional connectivity networks derived from resting-state functional MRI (rs-fMRI) data, (2) a convolutional neural network (CNN) for extracting features of T1-weighted structural MRI (sMRI) data, (3) a feature adaptation module for alleviating inter-modality difference by minimizing a cross-modal maximum mean discrepancy (MMD) loss, and (4) a feature fusion module for integrating sMRI and fMRI features for classification. MDD, major depressive disorder; HC, healthy control.
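To make the interplay of the four components concrete, below is a minimal PyTorch-style sketch of one possible AMNI forward pass. It is an illustration assembled from the caption above, not the authors' released code: the class names, feature dimensions, ROI count (`n_rois`), the mean readout over nodes, and the linear-kernel MMD estimate are all assumptions.

```python
# Hypothetical sketch of the AMNI forward pass; shapes and layer choices are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphConv(nn.Module):
    """One graph convolution: H' = ReLU(A_hat H W), with A_hat a normalized
    adjacency matrix built from the functional connectivity network."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):                     # a_hat: (B,N,N), h: (B,N,d)
        return F.relu(self.lin(torch.bmm(a_hat, h)))

class VolumeEncoder(nn.Module):
    """Minimal stand-in for the 3D CNN branch (full layout is in the
    architecture table below)."""
    def __init__(self, out_dim):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1))
        self.fc = nn.Linear(32, out_dim)

    def forward(self, x):                            # x: (B,1,D,H,W)
        return self.fc(self.body(x).flatten(1))

def mmd_linear(x, y):
    """Biased linear-kernel MMD estimate between two feature batches; the
    paper minimizes a cross-modal MMD loss, the kernel choice here is an
    assumption."""
    return (x.mean(0) - y.mean(0)).pow(2).sum()

class AMNI(nn.Module):
    def __init__(self, n_rois=116, dim=64, n_classes=2):
        super().__init__()
        self.gc1 = GraphConv(n_rois, dim)            # node features: FC rows
        self.gc2 = GraphConv(dim, dim)
        self.cnn = VolumeEncoder(dim)                # sMRI branch
        self.cls = nn.Linear(2 * dim, n_classes)     # feature fusion + classifier

    def forward(self, a_hat, node_feat, t1_vol):
        zf = self.gc2(a_hat, self.gc1(a_hat, node_feat)).mean(1)  # graph readout
        zs = self.cnn(t1_vol)
        logits = self.cls(torch.cat([zf, zs], dim=1))
        return logits, mmd_linear(zf, zs)            # adaptation term to minimize
```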
Demographic and clinical information of subjects from Southwest University [a part of the REST-meta-MDD consortium (Yan et al., 2019)].
| Group | Gender | Age (years) | Education (years) | First episode | On medication | Illness duration (months) |
|---|---|---|---|---|---|---|
| MDD | 99 M / 183 F | 38.7 ± 13.6 | 10.8 ± 3.6 | 209 (Y) / 49 (N) / 24 (D) | 124 (Y) / 125 (N) / 33 (D) | 50.0 ± 65.9 / 35 (D) |
| HC | 87 M / 164 F | 39.6 ± 15.8 | 13.0 ± 3.9 | − | − | − |

Values are reported as mean ± standard deviation. M, male; F, female; Y, yes; N, no; D, lack of record; −, not applicable.
Architecture of the CNN module in the proposed AMNI framework.
| Layer | Kernel size | Stride | Output size | Feature maps |
|---|---|---|---|---|
| Input | – | – | 121 × 145 × 121 | 1 |
| C1 | 3 × 3 × 3 | 1 | 121 × 145 × 121 | 16 |
| M1 | 2 × 2 × 2 | 2 | 60 × 72 × 60 | 16 |
| C2 | 3 × 3 × 3 | 1 | 60 × 72 × 60 | 32 |
| M2 | 2 × 2 × 2 | 2 | 30 × 36 × 30 | 32 |
| C3 | 3 × 3 × 3 | 1 | 30 × 36 × 30 | 64 |
| M3 | 2 × 2 × 2 | 2 | 15 × 18 × 15 | 64 |
| C4 | 3 × 3 × 3 | 1 | 15 × 18 × 15 | 128 |
| M4 | 2 × 2 × 2 | 2 | 7 × 9 × 7 | 128 |
| GAP | – | – | 1 × 1 × 1 | 128 |
| FC | – | – | 1 × 1 × 1 | 64 |

Cn, the n-th convolutional layer; Mn, the n-th max pooling layer; GAP, global average pooling; FC, fully-connected layer.
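The table reads directly as a stack of conv/pool blocks: the listed output sizes imply 3 × 3 × 3 convolutions with padding 1 (spatial size preserved) and 2 × 2 × 2 max pooling with stride 2 (spatial size floored by half). A hedged PyTorch sketch follows; the ReLU activations are an assumption, since the table lists no nonlinearity.

```python
import torch
import torch.nn as nn

class CNNBranch(nn.Module):
    """3D CNN matching the architecture table: C1..C4 with padding-1 convs
    that keep spatial size, M1..M4 halving it, then GAP and a 64-unit FC."""
    def __init__(self, out_dim=64):
        super().__init__()
        layers, chans = [], [1, 16, 32, 64, 128]         # input + C1..C4 maps
        for cin, cout in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv3d(cin, cout, kernel_size=3, stride=1, padding=1),
                       nn.ReLU(inplace=True),            # assumed activation
                       nn.MaxPool3d(kernel_size=2, stride=2)]
        self.features = nn.Sequential(*layers)           # ends at 7 x 9 x 7, 128 maps
        self.gap = nn.AdaptiveAvgPool3d(1)               # GAP: 1 x 1 x 1, 128 maps
        self.fc = nn.Linear(128, out_dim)                # FC: 64 features

    def forward(self, x):                                # x: (B, 1, 121, 145, 121)
        return self.fc(self.gap(self.features(x)).flatten(1))

# Shape check against the table's input size:
print(CNNBranch()(torch.zeros(1, 1, 121, 145, 121)).shape)  # torch.Size([1, 64])
```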
Classification results in terms of “mean (standard deviation)” achieved by ten methods in MDD vs. HC classification, with best results shown in bold.
| Method | Modality | ACC | SEN | SPE | BAC | PPV | NPV | F1 | AUC |
|---|---|---|---|---|---|---|---|---|---|
| PCA+SVM-s | S | 0.566 (0.011) | 0.669 (0.021) | 0.456 (0.007) | 0.563 (0.010) | 0.580 (0.006) | 0.553 (0.017) | 0.618 (0.013) | 0.591 (0.008) |
| EC+SVM | F | 0.560 (0.014) | 0.651 (0.009) | 0.462 (0.029) | 0.557 (0.015) | 0.577 (0.013) | 0.539 (0.018) | 0.609 (0.009) | 0.586 (0.019) |
| CC+SVM | F | 0.574 (0.007) | 0.674 (0.018) | 0.470 (0.014) | 0.572 (0.006) | 0.589 (0.005) | 0.562 (0.011) | 0.625 (0.009) | 0.597 (0.014) |
| DC+SVM | F | 0.578 (0.014) | 0.676 (0.019) | 0.477 (0.016) | 0.577 (0.017) | 0.593 (0.015) | 0.568 (0.021) | 0.627 (0.014) | 0.605 (0.015) |
| PCA+SVM-f | F | 0.570 (0.011) | 0.653 (0.014) | 0.483 (0.019) | 0.568 (0.012) | 0.588 (0.010) | 0.554 (0.016) | 0.614 (0.009) | 0.602 (0.013) |
| PP+SVM | SF | 0.593 (0.026) | 0.675 (0.022) | 0.502 (0.036) | 0.588 (0.027) | 0.605 (0.026) | 0.578 (0.030) | 0.636 (0.022) | 0.631 (0.027) |
| 2DCNN | F | 0.613 (0.013) | 0.670 (0.022) | 0.551 (0.024) | 0.611 (0.013) | 0.628 (0.013) | 0.599 (0.016) | 0.643 (0.014) | 0.645 (0.013) |
| STGCN | F | 0.583 (0.022) | 0.616 (0.027) | 0.544 (0.026) | 0.580 (0.022) | 0.612 (0.015) | 0.548 (0.037) | 0.614 (0.018) | 0.591 (0.008) |
| 3DCNN+2DCNN | SF | 0.632 (0.028) | 0.667 (0.022) | 0.593 (0.043) | 0.630 (0.029) | − | 0.617 (0.041) | 0.656 (0.026) | 0.655 (0.013) |
| AMNI (Ours) | SF | **0.650 (0.016)** | **0.694 (0.068)** | **0.609 (0.056)** | **0.651 (0.016)** | 0.640 (0.031) | **0.667 (0.055)** | **0.663 (0.021)** | **0.665 (0.017)** |

S, sMRI; F, fMRI; SF, sMRI+fMRI. ACC, accuracy; SEN, sensitivity; SPE, specificity; BAC, balanced accuracy; PPV, positive predictive value; NPV, negative predictive value; AUC, area under the ROC curve; −, not available.
Figure 2. ROC curves and related AUC values achieved by different methods in MDD vs. HC classification. (A) AMNI vs. six conventional methods. (B) AMNI vs. three deep learning methods. (C) AMNI vs. its three variants.
Results of statistical significance analysis between the proposed AMNI and eight competing methods.
| Comparison | p-value | Significant? |
|---|---|---|
| AMNI vs. PCA+SVM-s | 3.40 × 10⁻⁴ | Yes |
| AMNI vs. EC+SVM | 3.93 × 10⁻⁴ | Yes |
| AMNI vs. CC+SVM | 3.16 × 10⁻⁴ | Yes |
| AMNI vs. DC+SVM | 2.43 × 10⁻⁴ | Yes |
| AMNI vs. PCA+SVM-f | 1.01 × 10⁻⁵ | Yes |
| AMNI vs. PP+SVM | 2.71 × 10⁻⁵ | Yes |
| AMNI vs. 2DCNN | 9.48 × 10⁻³ | Yes |
| AMNI vs. 3DCNN+2DCNN | 1.07 × 10⁻³ | Yes |
Figure 3. Visualization of feature distributions from the PP+SVM and the proposed AMNI models via t-SNE (Van der Maaten and Hinton, 2008). The horizontal and vertical axes denote two dimensions after feature mapping. (A) Distribution of features derived from PP+SVM. (B) Distribution of features derived from AMNI.
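For reference, a projection like Figure 3 can be reproduced for any learned feature matrix with scikit-learn's t-SNE; the `features` array (subjects × dimensions) and the MDD/HC `labels` below are synthetic placeholders for whatever a trained model produces.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
features = rng.normal(size=(533, 64))    # placeholder for learned features
labels = rng.integers(0, 2, size=533)    # placeholder MDD (1) / HC (0) labels

# Map features to two t-SNE dimensions and color points by class.
emb = TSNE(n_components=2, random_state=0).fit_transform(features)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=8)
plt.show()
```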
Figure 4. Performance of our AMNI and its three variants in the task of MDD vs. HC classification, with best results shown in bold.
Figure 5. Accuracy achieved by the proposed AMNI method with different values of λ in Equation (11) in the task of MDD vs. HC classification.
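Equation (11) itself is not reproduced in this record; from the framework description, the objective being tuned is presumably the classification loss plus a λ-weighted cross-modal MMD term, along these lines (the exact form is an assumption):

```python
import torch.nn.functional as F

def amni_objective(logits, labels, z_fmri, z_smri, lam):
    """Assumed form of the overall loss: cross-entropy for MDD vs. HC plus the
    feature-adaptation (MMD) term scaled by lambda, as swept in Figure 5."""
    mmd = (z_fmri.mean(0) - z_smri.mean(0)).pow(2).sum()  # linear-kernel MMD
    return F.cross_entropy(logits, labels) + lam * mmd
```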
Figure 6. Results of the proposed AMNI based on three different graph construction methods (i.e., fully-connected graph, threshold graph, and KNN graph) in the task of MDD vs. HC classification, with best results shown in bold.
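The three graph construction variants differ only in how a functional connectivity (FC) matrix is sparsified into an adjacency matrix. A sketch of the three options follows; the threshold `tau` and neighbor count `k` are illustrative values, not the paper's settings.

```python
import numpy as np

def build_graph(fc, mode="knn", tau=0.3, k=10):
    """Turn an FC matrix into an adjacency matrix, mirroring the variants
    compared in Figure 6 (parameter values here are illustrative)."""
    a = np.abs(fc.copy())
    np.fill_diagonal(a, 0.0)
    if mode == "full":                      # fully-connected graph: keep all edges
        return a
    if mode == "threshold":                 # threshold graph: drop weak edges
        return np.where(a >= tau, a, 0.0)
    if mode == "knn":                       # KNN graph: keep k strongest neighbors
        adj = np.zeros_like(a)
        for i in range(a.shape[0]):
            nbrs = np.argsort(a[i])[-k:]    # indices of the k largest weights
            adj[i, nbrs] = a[i, nbrs]
        return np.maximum(adj, adj.T)       # symmetrize
    raise ValueError(mode)
```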
Classification results of our AMNI in MDD vs. HC classification with different network depths, with best results shown in bold.
| Model | ACC | SEN | SPE | BAC | PPV | NPV | F1 | AUC |
|---|---|---|---|---|---|---|---|---|
| AMNI-G1 | 0.634 (0.014) | 0.677 (0.065) | 0.587 (0.054) | 0.632 (0.019) | − | 0.598 (0.077) | − | 0.627 (0.032) |
| AMNI-G2 | 0.650 (0.016) | 0.694 (0.068) | 0.609 (0.056) | 0.651 (0.016) | 0.640 (0.031) | 0.667 (0.055) | 0.663 (0.021) | 0.665 (0.017) |
| AMNI-G3 | 0.595 (0.008) | 0.629 (0.034) | 0.559 (0.041) | 0.594 (0.010) | 0.600 (0.010) | 0.590 (0.019) | 0.614 (0.016) | 0.605 (0.009) |
| AMNI-G4 | 0.587 (0.011) | 0.618 (0.023) | 0.554 (0.025) | 0.586 (0.011) | 0.610 (0.042) | 0.561 (0.036) | 0.613 (0.022) | 0.599 (0.022) |
| AMNI-C3 | 0.628 (0.005) | 0.692 (0.045) | 0.551 (0.057) | 0.622 (0.007) | 0.647 (0.014) | 0.603 (0.012) | 0.668 (0.013) | 0.622 (0.007) |
| AMNI-C4 | 0.650 (0.016) | 0.694 (0.068) | 0.609 (0.056) | 0.651 (0.016) | 0.640 (0.031) | 0.667 (0.055) | 0.663 (0.021) | 0.665 (0.017) |
| AMNI-C5 | − | − | 0.565 (0.049) | − | − | 0.657 (0.040) | − | 0.653 (0.023) |
| AMNI-C6 | 0.642 (0.014) | 0.701 (0.046) | 0.580 (0.041) | 0.641 (0.017) | 0.651 (0.029) | 0.634 (0.053) | 0.673 (0.008) | 0.628 (0.018) |
Note that AMNI-Gn contains n graph convolutional layers in the GCN module of AMNI, and AMNI-Cn contains n convolutional layers in the CNN module of AMNI. −, not available.
Classification results of our AMNI in MDD vs. HC classification with different network widths, with best results shown in bold.
| Model | ACC | SEN | SPE | BAC | PPV | NPV | F1 | AUC |
|---|---|---|---|---|---|---|---|---|
| AMNI-g40 | 0.620 (0.035) | 0.626 (0.089) | 0.614 (0.097) | 0.620 (0.035) | 0.652 (0.039) | 0.593 (0.040) | 0.635 (0.049) | 0.650 (0.036) |
| AMNI-g64 | 0.650 (0.016) | 0.694 (0.068) | 0.609 (0.056) | 0.651 (0.016) | 0.640 (0.031) | 0.667 (0.055) | 0.663 (0.021) | 0.665 (0.017) |
| AMNI-g88 | 0.626 (0.015) | − | 0.542 (0.052) | 0.620 (0.015) | 0.644 (0.016) | 0.604 (0.023) | − | − |
| AMNI-g112 | 0.631 (0.016) | 0.647 (0.053) | − | 0.629 (0.015) | − | 0.602 (0.024) | 0.651 (0.029) | 0.637 (0.037) |
| AMNI-c1 | 0.598 (0.017) | 0.643 (0.046) | 0.535 (0.081) | 0.589 (0.028) | − | 0.535 (0.073) | 0.642 (0.026) | 0.607 (0.0148) |
| AMNI-c2 | 0.630 (0.020) | 0.693 (0.080) | 0.575 (0.096) | 0.634 (0.016) | 0.593 (0.033) | − | 0.635 (0.023) | 0.667 (0.004) |
| AMNI-c3 | 0.650 (0.016) | 0.694 (0.068) | 0.609 (0.056) | 0.651 (0.016) | 0.640 (0.031) | 0.667 (0.055) | 0.663 (0.021) | 0.665 (0.017) |
| AMNI-c4 | 0.641 (0.015) | 0.654 (0.051) | − | 0.642 (0.015) | 0.628 (0.030) | 0.658 (0.044) | 0.638 (0.028) | − |
Note that AMNI-gn contains n neurons in each graph convolutional layer of the GCN module, and the filter sequences in the CNN module of AMNI-c1, AMNI-c2, AMNI-c3, and AMNI-c4 are [4, 8, 16, 32], [8, 16, 32, 64], [16, 32, 64, 128], and [32, 64, 128, 256], respectively. −, not available.
Figure 7. Illustration of the adaptive multimodal neuroimage integration (AMNI) framework based on a decision-level fusion strategy.
Figure 8. Experimental results of the late fusion method and our AMNI method in MDD vs. HC classification. Note that AMNI_lf1, AMNI_lf2, and AMNI_lf3 denote variants with three different weight ratios between the fMRI and sMRI branches.
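For contrast with AMNI's feature-level fusion, the decision-level (late) fusion of Figure 7 can be sketched as follows; the branch weights `w_f` and `w_s` are placeholders, since the exact ratios used for AMNI_lf1, AMNI_lf2, and AMNI_lf3 are not legible here.

```python
import torch

def late_fusion(fmri_logits, smri_logits, w_f=0.5, w_s=0.5):
    """Decision-level fusion: each branch classifies independently and the
    class posteriors are combined with fixed branch weights."""
    probs = w_f * fmri_logits.softmax(dim=1) + w_s * smri_logits.softmax(dim=1)
    return probs.argmax(dim=1)                      # fused MDD vs. HC decision
```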