Weiming Lin1,2,3, Tong Tong3,4, Qinquan Gao1,3,4, Di Guo5, Xiaofeng Du5, Yonggui Yang6, Gang Guo6, Min Xiao2, Min Du1,7, Xiaobo Qu8.
Abstract
Mild cognitive impairment (MCI) is the prodromal stage of Alzheimer's disease (AD). Identifying MCI subjects who are at high risk of converting to AD is crucial for effective treatment. In this study, a deep learning approach based on convolutional neural networks (CNN) is designed to accurately predict MCI-to-AD conversion from magnetic resonance imaging (MRI) data. First, MRI images are prepared with age correction and other preprocessing. Second, local patches, which are assembled into 2.5 dimensions, are extracted from these images. Then, the patches from AD and normal control (NC) subjects are used to train a CNN to identify deep learning features of MCI subjects. After that, structural brain image features are mined with FreeSurfer to assist the CNN. Finally, both types of features are fed into an extreme learning machine classifier to predict AD conversion. The proposed approach is validated on the standardized MRI datasets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) project. It achieves an accuracy of 79.9% and an area under the receiver operating characteristic curve (AUC) of 86.1% in leave-one-out cross-validation. Compared with other state-of-the-art methods, the proposed approach achieves higher accuracy and AUC while keeping a good balance between sensitivity and specificity. The results demonstrate the great potential of the proposed CNN-based approach for predicting MCI-to-AD conversion from MRI data alone. Age correction and the assisting structural brain image features boost the prediction performance of the CNN.
Keywords: Alzheimer’s disease; convolutional neural networks; deep learning; magnetic resonance imaging; mild cognitive impairment
Year: 2018 PMID: 30455622 PMCID: PMC6231297 DOI: 10.3389/fnins.2018.00777
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
FIGURE 1 Framework of the proposed approach. The dashed arrow indicates that the CNN was trained with 2.5D patches of NC and AD subjects. The dashed box indicates that leave-one-out cross-validation was performed by repeating LASSO and extreme learning machine training 308 times; in each repetition, a different MCI subject was left out for testing, and the remaining subjects with their labels were used to train LASSO and the extreme learning machine.
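The leave-one-out protocol described in the caption can be sketched as a simple loop: in each iteration one subject is held out and the model is refit on the rest. A minimal sketch with scikit-learn follows; `LogisticRegression` is only a stand-in for the paper's LASSO + extreme learning machine pipeline, and the toy data (30 subjects, 8 features) replaces the 308 real MCI subjects.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.linear_model import LogisticRegression

# Toy stand-in data; the paper uses 308 MCI subjects with CNN and
# FreeSurfer features, replaced here by a small random matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))
y = rng.integers(0, 2, size=30)

preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    # Refit on all subjects except the held-out one, then predict it.
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    preds[test_idx] = clf.predict(X[test_idx])

accuracy = np.mean(preds == y)  # one prediction per subject
```

Because every subject is scored exactly once, the predictions can be pooled into a single accuracy/AUC estimate, which is how the paper reports its leave-one-out results.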
The demographic information of the dataset used in this work.
| AD | NC | MCIc | MCInc | MCIun | |
|---|---|---|---|---|---|
| Subjects’ number | 188 | 229 | 169 | 139 | 93 |
| Age range | 55–91 | 60–90 | 55–88 | 55–88 | 55–89 |
| Males/Females | 99/89 | 119/110 | 102/67 | 96/43 | 60/33 |
FIGURE 2 Demonstration of 2.5D patch extraction from the hippocampus region. (A–C) 2D patches extracted from the transverse (red box), coronal (green box), and sagittal (blue box) planes; (D) the 2.5D patch with the three patches at their spatial locations; the red dot is the center of the 2.5D patch; (E) the three patches are combined into an RGB patch with the red (red box patch), green (green box patch), and blue (blue box patch) channels.
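The extraction illustrated above (three orthogonal 2D patches around one voxel, stacked as RGB channels) can be sketched in a few lines of NumPy. The patch size of 32 is an assumption for illustration; the paper's exact size is not restated here.

```python
import numpy as np

def extract_25d_patch(volume, center, size=32):
    """Extract a 2.5D patch: three orthogonal 2D patches centered on the
    same voxel, stacked as the channels of one RGB-like array.

    `volume` is a 3D MRI array; `center` is the (x, y, z) center voxel;
    `size` is the side length of each 2D patch (assumed value)."""
    x, y, z = center
    h = size // 2
    axial    = volume[x - h:x + h, y - h:y + h, z]  # transverse plane
    coronal  = volume[x - h:x + h, y, z - h:z + h]  # coronal plane
    sagittal = volume[x, y - h:y + h, z - h:z + h]  # sagittal plane
    # Stack the three planes as the R, G, B channels of a single patch.
    return np.stack([axial, coronal, sagittal], axis=-1)

# Usage on a synthetic volume: one patch centered in the middle.
vol = np.arange(96 ** 3, dtype=float).reshape(96, 96, 96)
patch = extract_25d_patch(vol, (48, 48, 48))  # shape (32, 32, 3)
```

Stacking the planes as channels lets a standard 2D CNN with RGB input consume partial 3D context without the cost of full 3D convolutions.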
FIGURE 3 (A) Four randomly chosen 2.5D patches of one subject (a 76.3-year-old female normal control), indicating that these patches contain different information about the hippocampus; (B) comparison of corresponding 2.5D patches of four subjects, one from each group; different levels of hippocampal atrophy can be observed.
FIGURE 4The overall architecture of the CNN used in this work.
FIGURE 5 The workflow of extracting CNN-based features. The CNN was trained with all AD/NC patches and used to extract deep features from all 151 patches of each MCI subject. The feature number of each patch is reduced from 1024 to P (P = 29) by PCA. Finally, LASSO selects L (L = 35) features from the P × 151 features of each MCI subject.
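The PCA-then-LASSO reduction in the caption can be sketched with scikit-learn. The dimensions (151 patches, 1024 CNN features, P = 29) come from the caption; the subject count, random features, and LASSO regularization strength are placeholder assumptions, and the paper uses LASSO for feature selection rather than the final classification.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

n_subjects, n_patches, n_cnn, P = 40, 151, 1024, 29  # 40 subjects assumed

rng = np.random.default_rng(0)
feats = rng.normal(size=(n_subjects * n_patches, n_cnn))  # CNN features per patch
labels = rng.integers(0, 2, size=n_subjects)              # converter = 1, non-converter = 0

# Step 1: PCA reduces each patch's 1024 CNN features to P = 29.
reduced = PCA(n_components=P).fit_transform(feats)

# Step 2: concatenate the 151 reduced patches into one vector per subject.
X = reduced.reshape(n_subjects, n_patches * P)            # shape (subjects, 151 * 29)

# Step 3: LASSO keeps only features with nonzero coefficients.
lasso = Lasso(alpha=0.1).fit(X, labels)                   # alpha is an assumption
selected = np.flatnonzero(lasso.coef_)                    # indices of selected features
```

On the real data this two-stage reduction shrinks 151 × 1024 raw CNN outputs per subject down to L = 35 features before the extreme learning machine classifier.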
The performance of the 2.5D CNN.
| | Classifying: AD/NC, trained with: AD/NC | Classifying: MCIc/MCInc, trained with: MCIc/MCInc | Classifying: MCIc/MCInc, trained with: AD/NC | Different patch sampling |
|---|---|---|---|---|
| Accuracy | 88.79% | 68.68% | 73.04% | 72.75% |
| Standard deviation | 0.61% | 1.63% | 1.31% | 1.20% |
| Confidence interval | [0.8862, 0.8897] | [0.6821, 0.6914] | [0.7265, 0.7343] | [0.7252, 0.7299] |
Performance with different feature sets, and without age correction.
| Method | Accuracy | Sensitivity | Specificity | AUC |
|---|---|---|---|---|
| Proposed method (both features) | 79.9% | | | 86.1% |
| Only CNN-based features | 76.9% | 81.7% | 71.2% | 82.9% |
| Only FreeSurfer-based features | 76.9% | 82.2% | 70.5% | 82.8% |
| Without age correction | 75.3% | 79.9% | 69.8% | 82.6% |
FIGURE 6 ROC curves for classifying converters/non-converters when different feature sets are used or age correction is omitted.
Comparison of the extreme learning machine with two other classifiers.
| Method | Accuracy | Sensitivity | Specificity | AUC |
|---|---|---|---|---|
| SVM | 83.43% | 83.85% | ||
| Random forest | 75.0% | 82.84% | 65.47% | 81.99% |
| Extreme learning machine | 74.82% | |||
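For readers unfamiliar with the extreme learning machine compared above: it is a single-hidden-layer network whose input weights are random and fixed, so only the output weights are solved, in closed form by least squares. A minimal NumPy sketch follows; the hidden-layer size, activation, and toy data are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def train_elm(X, y, n_hidden=100, seed=0):
    """Minimal extreme learning machine: random fixed hidden layer,
    output weights solved by least squares (pseudo-inverse)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random biases (never trained)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # closed-form output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Usage on toy data with +1 / -1 labels; predictions are thresholded by sign.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))
y = np.sign(X[:, 0])
W, b, beta = train_elm(X, y)
acc = np.mean(np.sign(predict_elm(X, W, b, beta)) == y)
```

Because training reduces to one pseudo-inverse, an ELM fits in milliseconds, which makes the 308 refits of the leave-one-out protocol cheap.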
Comparison with other methods on the same dataset in 10-fold cross-validation.
| Method | Accuracy | Sensitivity | Specificity | AUC |
|---|---|---|---|---|
| MRI biomarker in | 74.7% | 51.6% | 76.6% | |
| Global grading biomarker in | 78.9% | 76.0% | 81.3% | |
| Proposed method | 86.1% | 68.8% | ||
Comparison with other methods on the same dataset in leave-one-out cross-validation.
| Method | Accuracy | Sensitivity | Specificity | AUC |
|---|---|---|---|---|
| MRI biomarker in | – | – | – | – |
| Global grading biomarker in | 78.8% | 76.2% | 81.2% | |
| Proposed method | 68% | |||
The 15 most informative FreeSurfer-based features for predicting MCI-to-AD conversion.
| Number | FreeSurfer-based feature |
|---|---|
| 1 | Cortical Thickness Average of Left FrontalPole |
| 2 | Volume (Cortical Parcellation) of Left Precentral |
| 3 | Volume (Cortical Parcellation) of Right Postcentral |
| 4 | Volume (WM Parcellation) of Left AccumbensArea |
| 5 | Cortical Thickness Average of Right CaudalMiddleFrontal |
| 6 | Cortical Thickness Average of Right FrontalPole |
| 7 | Volume (Cortical Parcellation) of Left Bankssts |
| 8 | Volume (Cortical Parcellation) of Left PosteriorCingulate |
| 9 | Volume (Cortical Parcellation) of Left Insula |
| 10 | Cortical Thickness Average of Left SuperiorTemporal |
| 11 | Cortical Thickness Standard Deviation of Left PosteriorCingulate |
| 12 | Volume (Cortical Parcellation) of Left Precuneus |
| 13 | Volume (WM Parcellation) of CorpusCallosumMidPosterior |
| 14 | Volume (Cortical Parcellation) of Left Lingual |
| 15 | Cortical Thickness Standard Deviation of Right Postcentral |
Results of previous deep learning-based approaches for predicting MCI-to-AD conversion.
| Study | Number of MCIc/MCInc | Data | Conversion time | Accuracy | AUC |
|---|---|---|---|---|---|
| 99/56 | MRI + PET | 18 months | 57.4% | – | |
| 158/178 | PET | – | 72.47% | – | |
| 39/64 | MRI + PET | 24 months | 78% | 82% | |
| 76/128 | MRI + PET | – | 75.92% | 74.66% | |
| 99/56 | MRI + PET | 18 months | 78.88% | 80.1% | |
| 217/409 | MRI + PET | 36 months | – | ||
| 217/409 | MRI | 36 months | 75.44% | – | |
| 112/409 | PET | – | 82.51% | – | |
| This study | 164/100 | MRI | 36 months | 81.4% | |