| Literature DB >> 35747447 |
Shui Liu, Chen Jie, Weimin Zheng, Jingjing Cui, Zhiqun Wang.
Abstract
Alzheimer's disease (AD) is the most common form of dementia, causing progressive cognitive decline. Radiomic features obtained from structural magnetic resonance imaging (sMRI) have shown great potential for predicting this disease. However, radiomic features based on whole-brain segmented regions have not yet been explored. In our study, we collected sMRI data from 80 patients with AD and 80 healthy controls (HCs). For each subject, the T1-weighted images (T1WI) were segmented into 106 subregions, and radiomic features were extracted from each subregion. We then analyzed the radiomic features of the specific brain subregions most related to AD. Based on the selected radiomic features from these subregions, we built an integrated model using the best-performing machine learning algorithms and evaluated its diagnostic accuracy. The subregions most relevant to AD included the hippocampus, the inferior parietal lobe, the precuneus, and the lateral occipital gyrus. These subregions exhibited several important radiomic features, including shape, gray level size zone matrix (GLSZM), and gray level dependence matrix (GLDM) features, among others. Comparing the different algorithms, we constructed the best model with the logistic regression (LR) algorithm, which reached an accuracy of 0.962. In conclusion, we constructed an excellent model based on radiomic features from several specific AD-related subregions, which could provide a potential biomarker for predicting AD.
Keywords: Alzheimer’s disease; machine learning; magnetic resonance imaging; radiomics; structural MRI (sMRI)
Year: 2022 PMID: 35747447 PMCID: PMC9211045 DOI: 10.3389/fnagi.2022.872530
Source DB: PubMed Journal: Front Aging Neurosci ISSN: 1663-4365 Impact factor: 5.702
FIGURE 1A research map. This study is mainly divided into four parts. The first part is data collection and preprocessing, the second part is whole-brain structure segmentation, the third part is radiomic analysis, and the fourth part is model construction and evaluation.
FIGURE 2 The construction and evaluation of the machine learning models in this study. Because this study adopts a variety of normalization methods and machine learning models, their combinations are shown in this figure.
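The normalization step described above pairs each scaling scheme with each classifier. As a minimal, self-contained illustration (not the authors' code), two of the normalization schemes named in the results table, z-score and min-max scaling, can be written out directly:

```python
# Hypothetical illustration of two normalization schemes from the study's
# pipeline (z-score and min-max), applied to a toy radiomic feature vector.

def z_score(values):
    """Standardize to zero mean and unit variance."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

def min_max(values):
    """Rescale linearly into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

feature = [2.0, 4.0, 6.0, 8.0]
print(min_max(feature))  # → [0.0, 0.333..., 0.666..., 1.0]
```

In practice each such transform would be fitted on the training split only and then applied to the test split, so that test-set statistics never leak into model fitting.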
Clinical characteristics of AD patients and HC.
| | AD (n = 80) | HC (n = 80) | p |
| Age, median (min–max) | 65 (46–88) | 64.5 (48–83) | |
| Sex, male/female | 42/38 | 40/40 | |
| MMSE, median (min–max) | 15 (0–25) | 28 (12–30) | |
| CDR | 0.5–3 | 0 | / |
*Wilcoxon rank-sum test;
**Fisher's exact test. AD, Alzheimer's disease; CDR, Clinical Dementia Rating; HC, healthy control; MMSE, Mini-Mental State Examination.
FIGURE 3 Labels of the main brain regions in structural MRI (sMRI) after automatic segmentation. Specifically, symmetrical structures are divided into left and right parts with different labels; only the left-side structures are labeled above.
FIGURE 4 The combinations of statistically significant brain regions and radiomic features in the two-step dimensionality reduction process. In the first step (SelectKBest), more than 10 radiomic features are screened out; only the top 10 are listed in this figure.
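The first dimensionality-reduction step above is univariate screening: each feature is scored for how well it separates the AD and HC groups, and the top k features are kept. A minimal sketch of the idea (the feature names, values, and the simple mean-difference/pooled-std score below are all hypothetical stand-ins, not the study's actual statistic):

```python
# SelectKBest-style univariate screening, sketched with made-up data.

def univariate_score(ad_vals, hc_vals):
    """Crude separation score: |mean difference| / pooled standard deviation."""
    def mean(xs):
        return sum(xs) / len(xs)
    def std(xs):
        m = mean(xs)
        return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    pooled = (std(ad_vals) + std(hc_vals)) / 2 or 1.0
    return abs(mean(ad_vals) - mean(hc_vals)) / pooled

features = {
    "hippocampus_shape_Volume":    ([2.1, 2.0, 2.2], [3.0, 3.1, 2.9]),
    "precuneus_glszm_ZoneEntropy": ([5.0, 5.2, 5.1], [5.1, 5.0, 5.2]),
}
scores = {name: univariate_score(ad, hc) for name, (ad, hc) in features.items()}
k_best = sorted(scores, key=scores.get, reverse=True)[:1]
print(k_best)  # → ['hippocampus_shape_Volume']
```

Only features surviving this screen pass to the second, multivariate selection step.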
FIGURE 5The rad score in radiomics feature selection.
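A rad score of the kind shown in Figure 5 is conventionally a linear combination of the selected features weighted by the fitted model coefficients. A hedged sketch, with purely illustrative coefficients and feature values (not taken from the paper):

```python
# Illustrative rad-score computation: intercept + sum of coefficient * feature.
# All numbers below are made up for demonstration.

coefficients = {
    "hippocampus_shape_Volume":        -1.8,  # lower volume (atrophy) in AD
    "parietal_gldm_DependenceEntropy":  0.9,
}
intercept = 0.4

def rad_score(sample):
    return intercept + sum(coefficients[k] * sample[k] for k in coefficients)

patient = {"hippocampus_shape_Volume": 0.2,
           "parietal_gldm_DependenceEntropy": 0.7}
print(rad_score(patient))  # ≈ 0.67
```

A threshold on this single score then separates predicted AD from predicted HC.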
FIGURE 6 The proportions of the different radiomic feature types used to predict AD in the reduced (low-order) feature space.
Performance of different machine learning algorithms.
| Model | Normalization | auc_train | auc_test | f1score_train | f1score_test | recall_train | recall_test | precision_train | precision_test | sensitivity_train | sensitivity_test | specificity_train | specificity_test | accuracy_train | accuracy_test |
| AdaBoost | Quantile transformer | 0.995 | 0.898 | 0.997 | 0.851 | 0.997 | 0.846 | 0.997 | 0.864 | 0.997 | 0.846 | 0.997 | 0.862 | 0.997 | 0.855 |
| BDT | BoxCox transformer | 0.995 | 0.951 | 0.995 | 0.909 | 0.990 | 0.910 | 1.000 | 0.913 | 0.990 | 0.910 | 1.000 | 0.912 | 0.995 | 0.911 |
| GP | Min-max scaler | 0.995 | 0.977 | 0.991 | 0.936 | 0.997 | 0.937 | 0.985 | 0.938 | 0.997 | 0.937 | 0.984 | 0.938 | 0.991 | 0.937 |
| GBDT | BoxCox transformer | 0.995 | 0.942 | 1.000 | 0.876 | 1.000 | 0.872 | 1.000 | 0.883 | 1.000 | 0.872 | 1.000 | 0.888 | 1.000 | 0.880 |
| KNN | Z-score scaler | 0.995 | 0.967 | 1.000 | 0.949 | 1.000 | 0.949 | 1.000 | 0.951 | 1.000 | 0.949 | 1.000 | 0.950 | 1.000 | 0.950 |
| LR | BoxCox transformer | 0.995 | 0.983 | 0.998 | 0.962 | 1.000 | 0.962 | 0.997 | 0.962 | 1.000 | 0.962 | 0.997 | 0.962 | 0.998 | 0.962 |
| PLS-DA | Quantile transformer | 0.988 | 0.964 | 0.744 | 0.743 | 1.000 | 1.000 | 0.593 | 0.593 | 1.000 | 1.000 | 0.322 | 0.312 | 0.659 | 0.653 |
| QDA | BoxCox transformer | 0.974 | 0.969 | 0.934 | 0.935 | 0.924 | 0.923 | 0.945 | 0.949 | 0.924 | 0.923 | 0.947 | 0.950 | 0.936 | 0.937 |
| RF | BoxCox transformer | 0.989 | 0.966 | 0.958 | 0.936 | 0.949 | 0.936 | 0.968 | 0.938 | 0.949 | 0.936 | 0.969 | 0.938 | 0.959 | 0.937 |
| SGD | Max-abs scaler | 0.992 | 0.922 | 0.994 | 0.923 | 1.000 | 0.923 | 0.988 | 0.924 | 1.000 | 0.923 | 0.988 | 0.925 | 0.994 | 0.924 |
| SVM | Z-score scaler | 0.995 | 0.977 | 1.000 | 0.950 | 1.000 | 0.950 | 1.000 | 0.951 | 1.000 | 0.950 | 1.000 | 0.950 | 1.000 | 0.950 |
| XGBoost | L1 normalization | 0.995 | 0.957 | 1.000 | 0.903 | 1.000 | 0.898 | 1.000 | 0.911 | 1.000 | 0.898 | 1.000 | 0.912 | 1.000 | 0.906 |
AdaBoost, Adaptive Boosting; BDT, Bagging Decision Tree; GP, Gaussian Process; GBDT, Gradient Boosting Decision Tree; KNN, K-Nearest Neighbor algorithm; LR, logistic regression; PLS-DA, Partial Least Squares Discriminant Analysis; QDA, Quadratic Discriminant Analysis; RF, random forest; SGD, Stochastic Gradient Descent; SVM, support vector machine; XGBoost, Extreme Gradient Boosting.
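All the per-model columns in the table above (recall, precision, sensitivity, specificity, F1, accuracy) derive from a single confusion matrix. A minimal reference implementation, using made-up counts (the study's actual confusion matrices are not reported here):

```python
# Classification metrics from confusion-matrix counts, treating AD as the
# positive class. The example counts are illustrative, not from the paper.

def metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)          # equals recall for the positive class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, precision, f1, accuracy

# e.g. 77 of 80 AD cases correctly flagged, 77 of 80 HCs correctly cleared
print(metrics(tp=77, fp=3, tn=77, fn=3))
```

With a balanced 80/80 split like this one, accuracy, sensitivity, and specificity coincide when errors are symmetric, which is why the best rows in the table show nearly identical values across columns.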
FIGURE 7 The performance of the different machine learning models used in this study, shown as receiver operating characteristic (ROC) curves.
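The AUC values behind such ROC curves have a direct probabilistic reading: the chance that a randomly chosen AD case receives a higher model score than a randomly chosen HC. A self-contained sketch of that rank-based computation, with illustrative scores:

```python
# AUC as the Mann-Whitney statistic: fraction of (positive, negative) pairs
# where the positive case scores higher (ties count half). Scores are made up.

def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

ad_scores = [0.9, 0.8, 0.7, 0.6]   # model outputs for AD cases
hc_scores = [0.5, 0.4, 0.8, 0.2]   # model outputs for healthy controls
print(auc(ad_scores, hc_scores))   # → 0.84375
```

An AUC of 0.983, as reported for the LR model's test set, means almost every AD/HC pair is ranked correctly by the model's score.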