Tao Zhang1,2, Cunbo Li2, Peiyang Li3, Yueheng Peng2, Xiaodong Kang4, Chenyang Jiang2, Fali Li2, Xuyang Zhu2, Dezhong Yao2, Bharat Biswal2, Peng Xu2.
Abstract
The accurate identification of attention deficit hyperactivity disorder (ADHD) subjects has remained a challenge for both neuroscience research and clinical diagnosis. Unfortunately, traditional approaches to classification modeling and feature extraction usually depend on single-channel models and static measurements (i.e., functional connectivity, FC) in small, homogeneous single-site datasets, which is limiting and may lose intrinsic information in functional MRI (fMRI). In this study, we proposed a new two-stage network structure that combines a separated-channel convolutional neural network (SC-CNN) with an attention-based network (SC-CNN-attention) to discriminate ADHD subjects from healthy controls on a large-scale multi-site database (5 sites, n = 1019). To exploit both the intrinsic temporal features and the temporal dependencies among regions in whole-brain resting-state fMRI, the first stage of the proposed structure uses an SC-CNN to learn the temporal features of each brain region, and the second stage adopts an attention network to capture temporally dependent features among regions and extract fused features. Using a "leave-one-site-out" cross-validation framework, the proposed method obtained a mean classification accuracy of 68.6% across the five sites, which is higher than those reported in previous studies. The classification results demonstrate that the proposed network is robust to data variation and replicates across sites. The combination of the SC-CNN and the attention network is powerful for capturing intrinsic fMRI information to discriminate ADHD across multi-site resting-state fMRI data.
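The two-stage idea in the abstract can be illustrated with a minimal NumPy sketch: a small temporal convolution applied independently to each region's time series (the "separated channel" stage), followed by scaled dot-product self-attention across regions to capture inter-regional dependencies. This is an assumption-laden toy, not the authors' implementation: the kernel size, single shared kernel, region count, and run length are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    # valid 1-D convolution of one time series with kernel w, then ReLU
    k = len(w)
    out = np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)]) + b
    return np.maximum(out, 0.0)

def sc_cnn(X, w, b):
    # separated-channel CNN: the same small temporal convolution is applied
    # independently to each region's time series (no cross-region mixing)
    return np.stack([conv1d(X[r], w, b) for r in range(X.shape[0])])

def attention(F):
    # scaled dot-product self-attention across regions: each region's
    # feature vector attends to every region's features (itself included)
    d = F.shape[1]
    scores = F @ F.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)            # row-wise softmax
    return A @ F                                 # fused, dependency-aware features

# toy input: 90 brain regions x 172 time points (sizes are illustrative)
X = rng.standard_normal((90, 172))
w, b = rng.standard_normal(5) * 0.1, 0.0  # one shared temporal kernel (assumed)

feats = sc_cnn(X, w, b)   # (90, 168) per-region temporal features
fused = attention(feats)  # (90, 168) features enriched with region interactions
print(feats.shape, fused.shape)
```

A classifier head (e.g., pooling plus a dense layer) would then map the fused features to an ADHD/HC label; that part is omitted here.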
Keywords: ADHD; CNN; attention; deep learning
Year: 2020 PMID: 33286662 PMCID: PMC7517519 DOI: 10.3390/e22080893
Source DB: PubMed Journal: Entropy (Basel) ISSN: 1099-4300 Impact factor: 2.524
The demographic information for different sites.
| Site | ADHD Age | ADHD Count (F/M) | ADHD Total | HC Age | HC Count (F/M) | HC Total | Volumes |
|---|---|---|---|---|---|---|---|
| KKI | 8–13 | 10/15(F/M) | 35 | 8–13 | 28/41(F/M) | 69 | 152/119 |
| NI | 11–21 | 5/31(F/M) | 36 | 12–26 | 25/12(F/M) | 37 | 257 |
| NYU | 7–18 | 34/117(F/M) | 151 | 7–18 | 55/56(F/M) | 111 | 176/172 |
| OHSU | 7–12 | 13/30(F/M) | 43 | 7–12 | 40/30(F/M) | 70 | 78/50/73 |
| Peking | 8–17 | 10/92(F/M) | 102 | 8–15 | 59/84(F/M) | 143 | 236/231 |
| Total | - | - | 422 | - | - | 597 | - |
Figure 1 Architecture of the proposed SC-CNN-attention model for diagnosing attention deficit hyperactivity disorder (ADHD).
Figure 2 The leave-one-site-out cross-validation scheme.
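The leave-one-site-out scheme can be sketched in a few lines: each of the five sites serves once as the held-out test set while the remaining four form the training data. The helper name `leave_one_site_out` is ours, not from the paper.

```python
# the five ADHD-200 sites used in the study
sites = ["KKI", "NI", "NYU", "OHSU", "Peking"]

def leave_one_site_out(sites):
    # yield one (train_sites, test_site) fold per site
    for test_site in sites:
        train_sites = [s for s in sites if s != test_site]
        yield train_sites, test_site

for train, test in leave_one_site_out(sites):
    print(f"train on {train}, test on {test}")
```

Because folds split by acquisition site rather than by subject, the reported accuracy reflects generalization to an entirely unseen scanner and population.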
Figure 3The accuracies and areas under the curves (AUCs) for each test site based on “SC-CNN+XX” models.
Comparison of the proposed models with the average results of the ADHD-200 competition teams, FCNet, 3D-CNN, and DeepFMRI on multi-site data.
| Method | NYU | Peking | OHSU | KKI | NI | Overall Accuracy |
|---|---|---|---|---|---|---|
| Previous methods | ||||||
| ADHD-200 competition | 35.2% | 51.1% | 65.4% | 61.9% | 57.0% | 54.1% |
| FCNet | 58.5% | 62.7% | - | - | 60.0% | 60.4% |
| 3D-CNN | - | 62.9% | - | 72.8% | - | 67.8% |
| DeepFMRI | 73.1% | 62.7% | - | - | 67.9% | 67.9% |
| Our models | ||||||
| SC-CNN-Dense | 55.4% | 60.3% | 59.8% | 69.2% | 63.0% | 61.3% |
| SC-CNN | 52.4% | 60.2% | 61.6% | 68.1% | 64.4% | 61.5% |
| SC-CNN-LSTM | 56.3% | 61.5% | 61.7% | 75.3% | 63.0% | 63.6% |
| SC-CNN-Attention | | | | | | 68.6% |