Wenjing Jiang, Shuaiqi Liu, Hong Zhang, Xiuming Sun, Shui-Hua Wang, Jie Zhao, Jingwen Yan.
Abstract
As a neurodevelopmental disorder, autism spectrum disorder (ASD) severely affects the living conditions of patients and their families. Early diagnosis of ASD enables effective intervention at an early stage of development. In this paper, we present an ASD classification network, termed CNNG, that combines a convolutional neural network (CNN) with a gated recurrent unit (GRU). First, CNNG extracts the 3D spatial features of functional magnetic resonance imaging (fMRI) data using the convolutional layers of a 3D CNN. Second, CNNG extracts the temporal features using the GRU and finally performs classification with a sigmoid function. The performance of CNNG was validated on the publicly available Autism Brain Imaging Data Exchange (ABIDE) dataset. According to the experiments, CNNG is highly effective at extracting the spatio-temporal features of fMRI and achieves a classification accuracy of 72.46%.
Keywords: ABIDE; ASD classification; CNN; CNNG; spatio-temporal features
Year: 2022 PMID: 35865746 PMCID: PMC9294312 DOI: 10.3389/fnagi.2022.948704
Source DB: PubMed Journal: Front Aging Neurosci ISSN: 1663-4365 Impact factor: 5.702
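The abstract describes a two-stage pipeline: a per-frame 3D CNN for spatial features, a GRU over the resulting sequence for temporal features, and a sigmoid output. As a minimal sketch of the dimension bookkeeping only (the frame count of 16, GRU width of 32, and the helper name `cnng_shape_trace` are illustrative assumptions, not the paper's code), the data flow can be traced as:

```python
def cnng_shape_trace(n_frames=16, vol=28, feat=64, final=2, hidden=32):
    """Trace tensor shapes through a CNNG-style pipeline (bookkeeping only).

    Each fMRI frame (vol^3 voxels) passes through the 3D CNN, producing a
    feature map; the flattened per-frame features form a sequence consumed
    by the GRU; a sigmoid maps the final hidden state to an ASD probability.
    """
    frames = [(vol, vol, vol)] * n_frames               # input: one 3D volume per time point
    cnn_out = [(final, final, final, feat)] * n_frames  # 3D CNN output per frame: 2x2x2x64
    seq = [final ** 3 * feat] * n_frames                # flattened per-frame feature vectors
    gru_out = hidden                                    # GRU final hidden state
    prob_dim = 1                                        # sigmoid -> one probability
    return frames[0], cnn_out[0], seq[0], gru_out, prob_dim

print(cnng_shape_trace())  # ((28, 28, 28), (2, 2, 2, 64), 512, 32, 1)
```

The 2 × 2 × 2 × 64 final CNN output and the 28-voxel input side come from the architecture table below; the flattened per-frame vector is 2³ × 64 = 512 values.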
FIGURE 1. The structure of the CNNG model.
FIGURE 2. The structure of the single-frame convolutional neural network (CNN).
FIGURE 3. The structure of the gated recurrent unit (GRU).
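Figure 3 depicts the GRU's gating structure. As a hedged illustration, a single step of the standard GRU formulation (Cho et al., 2014) can be written out directly; the paper may use a slightly different parameterization, and the weights and names below are random placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, W_z, U_z, W_r, U_r, W_h, U_h):
    """One step of a standard GRU (Cho et al., 2014).

    z: update gate, r: reset gate, h_tilde: candidate hidden state.
    The new state interpolates between the old state and the candidate.
    """
    z = sigmoid(W_z @ x + U_z @ h)
    r = sigmoid(W_r @ x + U_r @ h)
    h_tilde = np.tanh(W_h @ x + U_h @ (r * h))
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
# Alternating input-to-hidden and hidden-to-hidden weight matrices
weights = [rng.standard_normal((d_h, d_in)) if i % 2 == 0
           else rng.standard_normal((d_h, d_h)) for i in range(6)]
h = np.zeros(d_h)
for t in range(5):                       # run a short random sequence
    h = gru_cell(rng.standard_normal(d_in), h, *weights)
print(h.shape)  # (3,)
```

Because tanh bounds the candidate state and the update gate convexly mixes old and new states, the hidden state stays in (-1, 1), which keeps long sequences numerically stable.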
Structure of the single-frame 3D convolutional neural network (CNN) model.
| Layer | Type | Output size | Filters | Kernel size |
| 1 | Conv3D | 28 × 28 × 28 | 8 | 3 × 3 × 3 |
| 2 | Conv3D | 28 × 28 × 28 | 8 | 3 × 3 × 3 |
| 3 | Conv3D | 28 × 28 × 28 | 8 | 3 × 3 × 3 |
| 4 | MaxPooling3D | 14 × 14 × 14 | 8 | 2 × 2 × 2 |
| 5 | Conv3D | 14 × 14 × 14 | 16 | 3 × 3 × 3 |
| 6 | MaxPooling3D | 7 × 7 × 7 | 16 | 2 × 2 × 2 |
| 7 | Conv3D | 7 × 7 × 7 | 32 | 3 × 3 × 3 |
| 8 | MaxPooling3D | 4 × 4 × 4 | 32 | 2 × 2 × 2 |
| 9 | Conv3D | 4 × 4 × 4 | 64 | 3 × 3 × 3 |
| 10 | MaxPooling3D | 2 × 2 × 2 | 64 | 2 × 2 × 2 |
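The output sizes in the table are consistent with "same"-padded convolutions (spatial size unchanged) and stride-2 pooling with ceiling rounding, which explains the 7 → 4 step. A small sketch under those assumptions (the helper name is hypothetical) reproduces the progression:

```python
import math

def trace_output_sizes(input_size, layers):
    """Trace the spatial side length through a stack of layer types.

    'conv' assumes 'same' padding (size unchanged); 'pool' assumes a
    2x2x2 max pool with stride 2 and ceiling rounding, matching the
    7 -> 4 step in the table.
    """
    sizes = [input_size]
    for layer in layers:
        if layer == "conv":
            sizes.append(sizes[-1])              # same padding keeps size
        elif layer == "pool":
            sizes.append(math.ceil(sizes[-1] / 2))
    return sizes

# Layer order from the table: 3 convs, then alternating pool/conv pairs
stack = ["conv", "conv", "conv", "pool", "conv",
         "pool", "conv", "pool", "conv", "pool"]
print(trace_output_sizes(28, stack))
# [28, 28, 28, 28, 14, 14, 7, 7, 4, 4, 2]
```

The final 2 × 2 × 2 map with 64 filters gives the per-frame feature volume that is flattened and fed to the GRU.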
Names of the 17 sites and their sample sizes.
| Serial number | Site | ASD | TC (typical controls) | Total subjects |
| 1 | Caltech | 19 | 19 | 38 |
| 2 | CMU | 14 | 13 | 27 |
| 3 | KKI | 22 | 33 | 55 |
| 4 | Leuven | 29 | 35 | 64 |
| 5 | MaxMun | 24 | 33 | 57 |
| 6 | NYU | 79 | 105 | 184 |
| 7 | OHSU | 13 | 15 | 28 |
| 8 | Olin | 20 | 16 | 36 |
| 9 | Pitt | 30 | 27 | 57 |
| 10 | SBL | 15 | 15 | 30 |
| 11 | SDSU | 14 | 22 | 36 |
| 12 | Stanford | 20 | 20 | 40 |
| 13 | Trinity | 24 | 25 | 49 |
| 14 | UCLA | 62 | 47 | 109 |
| 15 | UM | 68 | 77 | 145 |
| 16 | USM | 58 | 43 | 101 |
| 17 | Yale | 28 | 28 | 56 |
Classification performance of different convolutional kernel sizes.
| Kernel size (number of layers) | Accuracy | Sensitivity | Specificity |
| 7 × 7 × 7 (1) | 70.53% | 64.15% | 77.23% |
| 5 × 5 × 5 (1) | 67.63% | 62.37% | 72.64% |
| 3 × 3 × 3 (3) | — | — | — |
| 3 × 3 × 3 (2) | 70.04% | 65.09% | 75.25% |
| 3 × 3 × 3 (1) | 62.32% | 59.80% | 66.04% |
The bold values in this table represent the optimal values of accuracy, specificity and sensitivity.
Performance of different temporal feature extraction modules.
| Temporal feature extraction module | Accuracy | Sensitivity | Specificity |
| LSTM | 68.60% | 56.60% | — |
| GRU | — | — | 79.25% |
The bold values in this table represent the optimal values of accuracy, specificity and sensitivity.
Classification performance of different time interceptions.
| Time dimension | Accuracy | Sensitivity | Specificity |
| 8 | 63.74% | 59.33% | 67.32% |
| 16 | 69.08% | 63.46% | 72.12% |
| 32 | — | — | — |
| 48 | 69.57% | 62.37% | 76.42% |
The bold values in this table represent the optimal values of accuracy, specificity and sensitivity.
Classification performance with different numbers of gated recurrent unit (GRU) units.
| Number of GRU units | Accuracy | Sensitivity | Specificity |
| 16 | 71.50% | 64.15% | 77.28% |
| 32 | — | — | — |
| 48 | 71.01% | 62.38% | 74.26% |
The bold values in this table represent the optimal values of accuracy, specificity and sensitivity.
Classification performance of traditional machine learning algorithms and CNNG.
| Classification | Accuracy | Sensitivity | Specificity |
| RBF-SVC | 66.70% | 62.35% | 72.35% |
| RCE-SVM | 67.30% | 64.50% | 70.10% |
| HFR | 71.10% | 67.00% | 75.00% |
| C-SVC | 67.00% | 53.20% | 78.30% |
| FCR | 71.98% | 70.89% | 71.53% |
| CNNG | 72.46% | — | — |
The bold values in this table represent the optimal values of accuracy, specificity and sensitivity.
Classification performance of deep learning algorithms and CNNG.
| Classification | Accuracy | Sensitivity | Specificity |
| CNN-MLP | 70.22% | 62.35% | 72.35% |
| SVC | 71.10% | 67.00% | 75.00% |
| DiagNet | 70.30% | 68.03% | 72.20% |
| HI-GCN | 67.20% | 65.90% | 68.40% |
| GAT | 68.02% | 74.06% | 62.26% |
| CNNG | 72.46% | — | — |
The bold values in this table represent the optimal values of accuracy, specificity and sensitivity.
FIGURE 4. Receiver operating characteristic (ROC) curve of the CNNG model.
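Figure 4's ROC curve is built by sweeping a decision threshold over the sigmoid outputs and plotting the false-positive rate against the true-positive rate. A minimal sketch of that construction (the scores and labels below are toy values, not the paper's predictions):

```python
def roc_points(scores, labels):
    """Compute (FPR, TPR) pairs by sweeping a threshold over the scores.

    scores: predicted probabilities; labels: 1 = ASD, 0 = TC (illustrative).
    """
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        points.append((fp / neg, tp / pos))
    return [(0.0, 0.0)] + points          # anchor the curve at the origin

# Toy scores and labels, NOT the paper's predictions
scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]
print(roc_points(scores, labels))
```

The area under this curve (AUC) summarizes the trade-off between sensitivity and specificity reported in the tables above.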