Changxing Qu, Yinxi Zou, Yingqiao Ma, Qin Chen, Jiawei Luo, Huiyong Fan, Zhiyun Jia, Qiyong Gong, Taolin Chen.
Abstract
Alzheimer's disease (AD) is the most common form of dementia. Currently, only symptomatic management is available, and early diagnosis and intervention are crucial for AD treatment. As a recent deep learning strategy, generative adversarial networks (GANs) are expected to benefit AD diagnosis, but their performance remains to be verified. This study provided a systematic review on the application of the GAN-based deep learning method in the diagnosis of AD and conducted a meta-analysis to evaluate its diagnostic performance. A search of the following electronic databases was performed by two researchers independently in August 2021: MEDLINE (PubMed), Cochrane Library, EMBASE, and Web of Science. The Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool was applied to assess the quality of the included studies. The accuracy of the model applied in the diagnosis of AD was determined by calculating odds ratios (ORs) with 95% confidence intervals (CIs). A bivariate random-effects model was used to calculate the pooled sensitivity and specificity with their 95% CIs. Fourteen studies were included, 11 of which were included in the meta-analysis. The overall quality of the included studies was high according to the QUADAS-2 assessment. For the AD vs. cognitively normal (CN) classification, the GAN-based deep learning method exhibited better performance than the non-GAN method, with significantly higher accuracy (OR 1.425, 95% CI: 1.150-1.766, P = 0.001), pooled sensitivity (0.88 vs. 0.83), pooled specificity (0.93 vs. 0.89), and area under the curve (AUC) of the summary receiver operating characteristic curve (SROC) (0.96 vs. 0.93). For the progressing MCI (pMCI) vs. stable MCI (sMCI) classification, the GAN method exhibited no significant increase in the accuracy (OR 1.149, 95% CI: 0.878-1.505, P = 0.310) or the pooled sensitivity (0.66 vs. 0.66). 
The pooled specificity and AUC of the SROC in the GAN group were slightly higher than those in the non-GAN group (0.81 vs. 0.78 and 0.81 vs. 0.80, respectively). The present results suggested that the GAN-based deep learning method performed well in the task of AD vs. CN classification. However, the diagnostic performance of GAN in the task of pMCI vs. sMCI classification needs to be improved. Systematic Review Registration: [PROSPERO], Identifier: [CRD42021275294].
Keywords: Alzheimer’s disease; diagnosis; generative adversarial networks (GANs); meta-analysis; mild cognitive impairment (MCI); psychoradiology; systematic review
Year: 2022 PMID: 35527734 PMCID: PMC9068970 DOI: 10.3389/fnagi.2022.841696
Source DB: PubMed Journal: Front Aging Neurosci ISSN: 1663-4365 Impact factor: 5.750
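The abstract reports model accuracy as odds ratios (ORs) with 95% confidence intervals. As a minimal sketch of that calculation (with hypothetical cell counts, not data from the review), an OR and its Wald-type 95% CI can be computed from a 2×2 table of correct vs. incorrect classifications:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR with 95% Wald CI from a 2x2 table: log(OR) +/- z * SE(log OR)."""
    or_ = (a * d) / (b * c)
    # SE of log(OR) is the square root of the summed reciprocal cell counts
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 90 correct / 10 wrong with GAN, 83 / 17 without
or_, lo, hi = odds_ratio_ci(90, 10, 83, 17)
print(f"OR = {or_:.3f} (95% CI: {lo:.3f}-{hi:.3f})")
```

Here the CI crosses 1, so this hypothetical OR would not be significant; in the review's AD vs. CN pooling the interval (1.150–1.766) excludes 1.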
FIGURE 1 Flowchart of the study selection process (PRISMA flow chart).
TABLE 1 Characteristics of the included studies.
| Authors | Year | Country | Data source | Data modality | AD | MCI | pMCI | sMCI | CN | Structure of the model | Type of GAN | Function of GAN | Classification task | Accuracy | Sensitivity | Specificity | F1-score | Recall | Precision | AUC |
| | 2018 | China | ADNI | MRI+PET | 358 | − | 205 | 465 | 429 | Two-stage: GAN+LM3IL | cycleGAN | Modality conversion | AD vs. CN; pMCI vs. sMCI | 0.92; 0.79 | 0.90; 0.55 | 0.94; 0.83 | 0.91; 0.41 | − | − | 0.96; 0.76 |
| | 2020 | United States | ADNI | PET | 98 | − | − | − | 105 | Two-stage: GAN+CNN | DCGAN | Data augmentation | AD vs. CN | 0.71 | − | − | − | − | − | − |
| | 2020 | Korea | ADNI; clinical | PET | 139 | − | − | − | 347 | Two-stage: GAN+SVM | BEGAN | Feature extraction | AD vs. CN | 0.94 | 0.92 | 0.97 | − | − | − | 0.98 |
| | 2019 | Switzerland | ADNI; clinical | MRI | − | − | 89 | 116 | − | Two-stage: GAN+CNN | WGAN | Aging simulation | pMCI vs. sMCI | 0.73 | − | − | 0.71 | 0.75 | 0.68 | − |
| | 2018 | United Kingdom | ADNI | MRI+PET | − | − | 58 | 50 | − | Two-stage: GAN+ResNet | cGAN | Modality conversion | pMCI vs. sMCI | 0.82 | − | − | − | − | − | 0.81 |
| | 2021 | Korea | ADNI | PET | 25 | − | − | − | 148 | GAN only | GAN | Anomaly detection | AD vs. CN | − | − | − | − | − | − | 0.75 |
| | 2021 | China | ADNI | MRI+PET | 352 | − | 234 | 342 | 427 | Two-stage: GAN+DCN | TPA-GAN | Modality conversion | AD vs. CN; pMCI vs. sMCI | 0.93; 0.75 | 0.92; 0.71 | 0.94; 0.78 | 0.92; 0.70 | − | − | 0.96; 0.78 |
| | 2021 | Japan | OASIS | MRI | 96 | 152 | − | − | 576 | GAN only | SAGAN | Anomaly detection | AD vs. CN | − | − | − | − | − | − | 0.89 |
| | 2021 | China | ADNI | MRI | 187 | − | 138 | 181 | 229 | Ensemble learning: discriminator of GAN+VGG16+ResNet50 | DCGAN | Transfer learning | AD vs. CN; pMCI vs. sMCI | 0.90; 0.63 | 0.94; 0.58 | 0.84; 0.64 | − | − | − | 0.90; 0.62 |
| | 2021 | China | ADNI | MRI+PET | 362 | − | 183 | 233 | 308 | Two-stage: GAN+CNN | revGAN | Modality conversion | AD vs. CN; pMCI vs. sMCI | 0.89; 0.71 | 0.90; 0.74 | 0.88; 0.68 | − | − | − | 0.88; 0.74 |
| | 2021 | Pakistan | ADNI | PET | 30 | − | − | − | 42 | Two-stage: GAN+VGG16 | DCGAN | Data augmentation | AD vs. CN | 0.83 | − | − | 0.88 | 0.86 | 0.91 | − |
| | 2021 | United States | ADNI; AIBL | MRI | 411 | − | − | − | 678 | Two-stage: GAN+FCN | GAN | Quality improvement | AD vs. CN | 0.82 | 0.74 | 0.89 | 0.79 | − | − | − |
| | 2020 | Korea | ADNI | MRI+PET | 162 | 675 | − | − | 428 | GAN only | cGAN | Modality conversion; classification | AD vs. CN | 0.85 | − | − | − | 0.84 | 0.84 | − |
| | 2021 | China | ADNI; OASIS | MRI+other information | 151 | 341 | − | − | 113 | Two-stage: GAN+DenseNet | mi-GAN | Aging simulation | pMCI vs. sMCI | 0.78 | − | − | 0.74 | 0.71 | 0.78 | − |
FIGURE 2 Characteristics of the included studies: (A) Publication year, (B) data source, (C) modality of data, (D) classification task, (E) type of GAN, and (F) quality assessment.
FIGURE 3 The structure of the GAN and some improvements reported in the included studies.
TABLE 2 Results of the meta-analyses of the diagnosis of AD.
| Task | Method | OR (95% CI) | SEN (95% CI) | SPE (95% CI) | AUC (95% CI) | Spearman correlation coefficient |
| AD vs. CN | w/ GAN | 1.425 (1.150–1.766)* | 0.88 (0.82–0.93) | 0.93 (0.90–0.95) | 0.96 (0.94–0.97) | −0.029 |
| | w/o GAN | | 0.83 (0.76–0.88) | 0.89 (0.86–0.92) | 0.93 (0.90–0.95) | 0.257 |
| pMCI vs. sMCI | w/ GAN | 1.149 (0.878–1.505) | 0.66 (0.57–0.75) | 0.81 (0.76–0.85) | 0.81 (0.72–0.89) | 1.000 |
| | w/o GAN | | 0.66 (0.57–0.75) | 0.78 (0.74–0.82) | 0.80 (0.74–0.87) | 1.000 |
*Statistically significant, p ≤ 0.05.
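The pooled sensitivities and specificities above were obtained from a bivariate random-effects model. As a much simpler illustration of pooling proportions across studies (hypothetical study counts; fixed-effect inverse-variance weighting on the logit scale, not the bivariate model used in the review):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def expit(x):
    return 1 / (1 + math.exp(-x))

def pool_proportions(events, totals):
    """Inverse-variance pooling of proportions on the logit scale:
    weight each study by the reciprocal variance of its logit(p)."""
    num = den = 0.0
    for e, n in zip(events, totals):
        var = 1 / e + 1 / (n - e)  # variance of logit(e/n)
        w = 1 / var
        num += w * logit(e / n)
        den += w
    return expit(num / den)

# Hypothetical per-study true-positive counts and AD case totals
sens = pool_proportions([45, 88, 130], [50, 100, 150])
print(f"pooled sensitivity = {sens:.2f}")
```

Larger, more precise studies dominate the weighted average, which is why a pooled estimate can sit closer to one study's value than to the unweighted mean.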
FIGURE 4 Forest plot of the accuracy in the task of AD vs. CN classification.
FIGURE 5 Forest plots showing the pooled sensitivity and specificity in the task of AD vs. CN classification. (A) The pooled sensitivity and specificity in the GAN group; (B) the pooled sensitivity and specificity in the non-GAN group.
FIGURE 6 SROC curve for the task of AD vs. CN classification: (A) SROC curve for the GAN group and (B) SROC curve for the non-GAN group.
FIGURE 7 Forest plot of the accuracy in the pMCI vs. sMCI classification task.
FIGURE 8 Forest plots showing the pooled sensitivity and specificity in the task of pMCI vs. sMCI classification. (A) The pooled sensitivity in the GAN group and (B) the non-GAN group. (C) The pooled specificity in the GAN group and (D) the non-GAN group.
FIGURE 9 SROC curves for the task of pMCI vs. sMCI classification: (A) SROC curve for the GAN group and (B) SROC curve for the non-GAN group.
FIGURE 10 Schematic diagram of the function of image processing using GAN.