Yueying Chen1,2, Aiping Liu1,2, Xueyang Fu1,2, Jie Wen3, Xun Chen1,2.
Abstract
Autism Spectrum Disorder (ASD) is a common developmental disorder with great variation in symptoms and severity, which makes its diagnosis a challenging task. Existing deep learning models that use brain connectivity features to classify ASD still suffer from degraded performance on multi-center data due to limited feature-representation ability and insufficient interpretability. Given that the Graph Convolutional Network (GCN) has demonstrated superiority in learning discriminative representations of brain connectivity networks, in this paper we propose an invertible dynamic GCN model to identify ASD and investigate the alterations of connectivity patterns associated with the disease. To select explainable features from the model, invertible blocks are introduced throughout the network, so that the input dynamic features can be reconstructed from the network's output. A pre-screening of connectivity features is adopted to reduce the redundancy of the input information, and a fully connected layer is added to perform classification. Experimental results on 867 subjects show that our proposed method achieves superior disease classification performance. It provides an interpretable deep learning model for brain connectivity analysis and has great potential for studying brain-related disorders.
Keywords: autism spectrum disorder; brain connectivity networks; disease classification; fMRI; graph convolutional networks; invertible networks
Year: 2022 PMID: 35185454 PMCID: PMC8854990 DOI: 10.3389/fnins.2021.828512
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
Figure 1. Structure of the invertible block.
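The invertibility sketched in Figure 1 can be illustrated with an additive coupling block, the standard construction for invertible networks: the input channels are split in half, and one half is shifted by a function of the other, so the shift can be subtracted exactly on the way back. This is a generic NumPy sketch under assumptions, not the paper's exact block — the coupling function `coupling_fn`, the 50/50 channel split, and all dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def coupling_fn(x, W):
    # Arbitrary (even non-invertible) transform; the block's
    # invertibility does not depend on this function's form.
    return np.tanh(x @ W)

def invertible_forward(x, W):
    # Additive coupling: split channels, shift one half by a
    # function of the other half.
    x1, x2 = np.split(x, 2, axis=-1)
    y1 = x1
    y2 = x2 + coupling_fn(x1, W)
    return np.concatenate([y1, y2], axis=-1)

def invertible_inverse(y, W):
    # Exact inversion: subtract the same coupling term.
    y1, y2 = np.split(y, 2, axis=-1)
    x1 = y1
    x2 = y2 - coupling_fn(y1, W)
    return np.concatenate([x1, x2], axis=-1)
```

Because the inverse is exact (not learned), stacking such blocks lets the input features be reconstructed from any intermediate output, which is what enables tracing important output features back to connectivity patterns.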
Figure 2. The proposed ID-GCN architecture. The selected features are trained in three invertible blocks. A fully connected (FC) layer is then used to obtain the output scores for ASD classification. The whole network is invertible up to the FC layer, meaning that the informative disease-related brain connectivity patterns can be reconstructed by selecting important output features of the network.
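For reference, the graph-convolution operation that the ID-GCN blocks build on can be sketched with the standard symmetrically normalized propagation rule, ReLU(D^{-1/2}(A + I)D^{-1/2} X W). This is the textbook GCN layer in NumPy, not the paper's implementation; the function name and dimensions are assumptions.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: ReLU(D^{-1/2} (A + I) D^{-1/2} X W).

    A: (N, N) adjacency of the brain connectivity graph.
    X: (N, F) node features.  W: (F, F_out) trainable weights.
    """
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # degrees >= 1, safe
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)         # ReLU activation
```

Each layer aggregates every ROI's features with those of its connected ROIs, weighted so that high-degree nodes do not dominate.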
Figure 3. Overview of the proposed framework. Brain connectivity features inferred from the fMRI time series and a brain parcellation are fed to the model. After training the ID-GCN model, we obtain predictions for ASD classification, and important brain connectivity features are selected accordingly.
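The front end of the framework in Figure 3 — connectivity features computed from parcellated fMRI time series, followed by a pre-screening step — can be sketched as below. Pearson correlation between ROI time series is the standard choice for connectivity edges; the mean-difference screening criterion in `prescreen` is only a hypothetical stand-in, since this record does not state the paper's actual criterion.

```python
import numpy as np

def connectivity_features(ts):
    """ts: (T, R) fMRI time series for R ROIs.

    Returns the R*(R-1)/2 upper-triangular entries of the Pearson
    correlation matrix as a feature vector.
    """
    C = np.corrcoef(ts, rowvar=False)        # (R, R) correlation matrix
    iu = np.triu_indices_from(C, k=1)        # upper triangle, no diagonal
    return C[iu]

def prescreen(features_per_subject, labels, n_keep):
    # Hypothetical pre-screening: rank edges by the absolute difference
    # of group means and keep the top n_keep (illustrative only).
    F = np.asarray(features_per_subject)
    y = np.asarray(labels)
    score = np.abs(F[y == 1].mean(axis=0) - F[y == 0].mean(axis=0))
    return np.argsort(score)[::-1][:n_keep]
```

Pre-screening of this kind reduces the input from tens of thousands of edges to a tractable subset before the invertible blocks are trained.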
Phenotypic information summary of the ABIDE data.

| Center | ASD | Control | Male/Female | Total | Age (mean ± SD, years) |
|---|---|---|---|---|---|
| PITT | 30 | 27 | 49/8 | 57 | 18.9 ± 6.8 |
| TRINITY | 24 | 25 | 49/0 | 49 | 17.2 ± 3.6 |
| UM_1 | 55 | 55 | 84/26 | 110 | 13.4 ± 2.9 |
| UM_2 | 13 | 22 | 33/2 | 35 | 16 ± 3.3 |
| USM | 58 | 43 | 101/0 | 101 | 22.1 ± 7.6 |
| YALE | 28 | 28 | 40/16 | 56 | 12.7 ± 2.9 |
| LEUVEN_1 | 14 | 15 | 29/0 | 29 | 22.6 ± 3.5 |
| LEUVEN_2 | 15 | 20 | 27/8 | 35 | 14.2 ± 1.4 |
| KKI | 22 | 33 | 42/13 | 55 | 10.1 ± 1.3 |
| NYU | 79 | 105 | 147/37 | 184 | 15.3 ± 6.6 |
| UCLA_1 | 41 | 32 | 63/10 | 73 | 13.2 ± 2.4 |
| UCLA_2 | 13 | 13 | 24/2 | 26 | 12.5 ± 1.5 |
| MAX_MUN | 24 | 33 | 50/7 | 57 | 26.2 ± 11.9 |
| TOTAL | 416 | 451 | 738/129 | 867 | 16.4 ± 7.1 |
Figure 4. Comparison with traditional and GCN-based models, including Siamese GCN (Ktena et al., 2018), Random Forest, SVM, and GCN.
Comparisons of different methods.
| Method | Accuracy | | | | |
|---|---|---|---|---|---|
| SVM | 66.0 ± 3.7% | 65.9 ± 3.7% | 65.9 ± 3.9% | 65.9 ± 3.7% | 65.9 ± 3.9% |
| Random forest | 65.3 ± 2.4% | 65.1 ± 2.4% | 65.7 ± 2.4% | 65.1 ± 2.4% | 65.0 ± 2.5% |
| GCN | 73.2 ± 2.7% | 71.7 ± 6.5% | 73.4 ± 3.2% | | |
| Siamese GCN | 59.4 ± 1.7% | 58.6 ± 1.9% | 60.7 ± 1.2% | 62.3 ± 12.0% | 61.3 ± 6.9% |
| ID-GCN (ours) | 77.5 ± 4.9% | 75.0 ± 5.9% | | | |
Comparison with other SOTA methods.
| Method | Subjects | Accuracy |
|---|---|---|
| DNN (Li et al.) | 95 | 85.3% |
| Combined MCNNEs (Aghdam et al.) | 459 | 70.45% |
| CNN-EW (Xing et al.) | 1096 | 66.88% |
| ASD-DiagNet (Eslami et al.) | 1035 | 70.1% |
| cGCN (Wang et al.) | 1057 | 70.7% |
| 3D CNN (Thomas et al.) | 1162 | 64% |
| ID-GCN (ours) | 867 | |
Model performance in each single center.
| Center | Subjects | Accuracy |
|---|---|---|
| PITT | 57 | 71.7 ± 6.7% |
| TRINITY | 49 | 72.0 ± 11.7% |
| UM_1 | 110 | 75.5 ± 3.6% |
| UM_2 | 35 | 82.9 ± 10.7% |
| USM | 101 | 84.8 ± 7.0% |
| YALE | 56 | 80.0 ± 6.7% |
| LEUVEN_1 | 29 | 73.3 ± 8.3% |
| LEUVEN_2 | 35 | 74.3 ± 10.7% |
| KKI | 55 | 74.5 ± 8.9% |
| NYU | 184 | 76.2 ± 5.2% |
| UCLA_1 | 73 | 74.7 ± 8.8% |
| UCLA_2 | 26 | 83.3 ± 18.2% |
| MAX_MUN | 57 | 66.6 ± 11.9% |
| TOTAL | 867 | 76.3 ± 3.7% |
Ablation study on the effects of different components.
| Model | Accuracy |
|---|---|
| GCN | 73.2% |
| GCN with added spatial information | 74.5% |
| ID-GCN with PCA | 74.2% |
| ID-GCN without dynamic features | 76.1% |
| ID-GCN (ours) | |
Figure 5. Selected key connectivity features for ASD classification.
Important connectivity edges selected by feature reconstruction.
| ROI 1 | ROI 2 |
|---|---|
| Right Pallidum | Right Inferior Frontal Gyrus |
| Left Frontal Orbital Cortex | Left Central Opercular Cortex |
| Left Temporal Fusiform Cortex (posterior division) | Left Heschl's Gyrus (includes H1 and H2) |
| Left Supramarginal Gyrus (anterior division) | Right Temporal Occipital Fusiform Cortex |
| Left Supramarginal Gyrus (posterior division) | Left Frontal Orbital Cortex |
| Right Inferior Temporal Gyrus (anterior division) | Left Supramarginal Gyrus (anterior division) |
| Right Inferior Temporal Gyrus (anterior division) | Left Lateral Occipital Cortex (inferior division) |
Figure 6. Selected key ROIs for ASD classification, including Right Pallidum (red), Right Inferior Frontal Gyrus (triangle part) (orange), Right Inferior Temporal Gyrus (anterior division) (yellow), Left Frontal Orbital Cortex (green), Left Temporal Fusiform Cortex (posterior division) (cyan), and Right Temporal Occipital Fusiform Cortex (blue).
Classification accuracy with different k.

| k | | | | | | | | |
|---|---|---|---|---|---|---|---|---|
| Accuracy | 73.7% | 75.1% | 76.0% | 75.0% | 75.0% | 75.8% | 74.7% | 75.3% |
Classification accuracy with different M.

| M | | | | | | |
|---|---|---|---|---|---|---|
| Accuracy | 72.1% | 73.6% | 76.0% | 75.7% | 74.6% | 73.9% |