Jade Xiaoqing Wang¹, Yimei Li¹, Xintong Li², Zhao-Hua Lu¹.
Abstract
The application of deep learning techniques to the detection and automated classification of Alzheimer's disease (AD) has recently gained considerable attention. The rapid progress in neuroimaging and sequencing techniques has enabled the generation of large-scale imaging genetic data for AD research. In this study, we developed a deep learning approach, IGnet, for automated AD classification using both magnetic resonance imaging (MRI) data and genetic sequencing data. The proposed approach integrates computer vision (CV) and natural language processing (NLP) techniques, with a deep three-dimensional convolutional network (3D CNN) being used to handle the three-dimensional MRI input and a Transformer encoder being used to manage the genetic sequence input. The proposed approach has been applied to the Alzheimer's Disease Neuroimaging Initiative (ADNI) data set. Using baseline MRI scans and selected single-nucleotide polymorphisms on chromosome 19, it achieved a classification accuracy of 83.78% and an area under the receiver operating characteristic curve (AUC-ROC) of 0.924 with the test set. The results demonstrate the great potential of using multi-disciplinary AI approaches to integrate imaging genetic data for the automated classification of AD.
Keywords: Alzheimer's disease diagnosis; CNN; classification; deep learning; imaging genetics; transformer
Year: 2022 PMID: 35310099 PMCID: PMC8927016 DOI: 10.3389/fnins.2022.846638
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
Figure 1. Architecture of IGnet. Upper left: the imaging channel with a 3D CNN. Upper right: the genetic channel with a Transformer encoder. Bottom: the MLP with two fully connected layers followed by softmax.
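Figure 2 shows feature maps after five convolution layers in the imaging channel. This record does not state the kernel sizes, strides, or input dimensions of the 3D CNN, so the following sketch only illustrates, under assumed settings (five blocks of "same" 3×3×3 convolution followed by 2×2×2 max pooling; all sizes hypothetical), how one spatial axis of the MRI volume would shrink across those layers:

```python
def conv3d_out(size, kernel=3, stride=1, padding=1):
    """Output spatial size along one axis of a 3-D convolution."""
    return (size + 2 * padding - kernel) // stride + 1

def pool3d_out(size, kernel=2, stride=2):
    """Output spatial size along one axis of a 3-D max pooling."""
    return (size - kernel) // stride + 1

def cnn_channel_shape(input_size, n_blocks=5):
    """Trace one spatial axis through n_blocks of conv ('same') + 2x2x2 pool."""
    size = input_size
    for _ in range(n_blocks):
        size = conv3d_out(size)   # 'same' convolution keeps the size
        size = pool3d_out(size)   # pooling halves it
    return size

# A hypothetical 96-voxel axis shrinks 96 -> 48 -> 24 -> 12 -> 6 -> 3.
print(cnn_channel_shape(96))  # -> 3
```

With "same" convolutions, each block simply halves every spatial axis, so five blocks reduce the volume by a factor of 32 per axis before the features are flattened for the MLP head.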
Figure 2. (A) Upper panel: selected 2D slices of the input 3D brain images of four randomly picked AD patients; lower panel: the corresponding feature maps after five convolution layers (two selected filters). (B) Upper panel: selected 2D slices of the input 3D brain images of four randomly picked normal controls; lower panel: the corresponding feature maps after five convolution layers (two selected filters).
Figure 3. Training and validation losses of IGnet over time.
Comparison of the performance of IGnet on the ADNI data set. AUCs are not available for the IG-vote, SVM-linear, and SVM-radial methods and are therefore not presented.

| Method | Accuracy | Precision | Recall | F1 | AUC-ROC | AUC-PRC |
|---|---|---|---|---|---|---|
| **Imaging and genetic inputs** | | | | | | |
| IGnet | 83.78% | 87.50% | 77.78% | 0.824 | 0.924 | 0.935 |
| IG-avg | 81.08% | 82.35% | 77.78% | 0.800 | 0.886 | 0.893 |
| IG-vote | 70.27% | 88.89% | 44.44% | 0.593 | – | – |
| **Imaging input only** | | | | | | |
| IGnet-I | 67.57% | 68.75% | 61.11% | 0.647 | 0.784 | 0.737 |
| SVM-linear | 64.86% | 50.00% | 38.46% | 0.435 | – | – |
| SVM-radial | 62.16% | 46.67% | 53.85% | 0.500 | – | – |
| FPCA | 62.16% | 52.94% | 60.00% | 0.563 | 0.676 | 0.655 |
| **Genetic input only** | | | | | | |
| IGnet-G | 78.38% | 77.78% | 77.78% | 0.778 | 0.822 | 0.845 |
| RNN | 72.97% | 76.92% | 55.56% | 0.645 | 0.839 | 0.850 |
| Reg-ridge | 70.27% | 62.50% | 66.67% | 0.645 | 0.748 | 0.640 |
| Reg-lasso | 67.57% | 52.63% | 76.92% | 0.625 | 0.744 | 0.609 |
| IGnet-G | 67.57% | 71.43% | 55.56% | 0.625 | 0.827 | 0.823 |
| RNN | 70.27% | 57.14% | 61.54% | 0.593 | 0.837 | 0.749 |
| Reg-ridge | 62.16% | 52.94% | 60.00% | 0.563 | 0.733 | 0.634 |
| Reg-lasso | 64.86% | 50.00% | 84.62% | 0.629 | 0.750 | 0.668 |
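The fourth numeric column in the table is consistent with being the F1 score, i.e., the harmonic mean of the two preceding columns (read here as precision and recall; the column labels are an assumption, but the arithmetic holds across all rows). A minimal check against the combined-input rows:

```python
def f1_score(precision, recall):
    """F1 = harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# (precision, recall, reported F1) taken from the table above.
rows = {
    "IGnet":   (0.8750, 0.7778, 0.824),
    "IG-avg":  (0.8235, 0.7778, 0.800),
    "IG-vote": (0.8889, 0.4444, 0.593),
}
for name, (p, r, f1_reported) in rows.items():
    assert abs(f1_score(p, r) - f1_reported) < 0.001
    print(f"{name}: F1 = {f1_score(p, r):.3f}")
```

The same relation reproduces the F1 values of the imaging-only and genetic-only rows as well (e.g., SVM-linear: 2·0.50·0.3846/0.8846 ≈ 0.435).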
Figure 4. (A) ROC curves of IGnet, using both imaging and genetic inputs; IGnet-I, using imaging input alone; and IGnet-G, using genetic input alone. (B) Precision-recall (PRC) curves of IGnet, IGnet-I, and IGnet-G.
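The AUC-ROC values compared in Figure 4A can be read as the probability that a randomly chosen AD patient receives a higher predicted score than a randomly chosen normal control. A dependency-free sketch of that rank-statistic interpretation, using toy scores (not from the paper):

```python
def auc_roc(scores_pos, scores_neg):
    """AUC-ROC as the probability that a random positive outscores a
    random negative; ties count as 1/2 (equivalent to the Mann-Whitney U)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos
        for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Toy predicted AD probabilities (hypothetical illustration only).
pos = [0.9, 0.8, 0.75, 0.4]   # AD patients
neg = [0.7, 0.3, 0.2, 0.1]    # normal controls
print(auc_roc(pos, neg))  # -> 0.9375 (15 of 16 pairs correctly ordered)
```

Trapezoidal integration of the ROC curve gives the same number; the pairwise form makes explicit why an AUC-ROC of 0.924 indicates strong separation of the two classes regardless of the chosen decision threshold.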