C Kavitha1, Vinodhini Mani1, S R Srividhya1, Osamah Ibrahim Khalaf2, Carlos Andrés Tavera Romero3.
Abstract
Alzheimer's disease (AD) is the leading cause of dementia in older adults. There is currently considerable interest in applying machine learning to detect diseases such as Alzheimer's and diabetes, which affect large populations worldwide and whose incidence rates are rising at an alarming rate every year. In Alzheimer's disease, the brain is affected by neurodegenerative changes. As the population ages, more individuals, their families, and healthcare systems will experience diseases that affect memory and functioning, with profound social, financial, and economic consequences. In its early stages, Alzheimer's disease is hard to predict, yet treatment given early in AD is more effective and causes less damage than treatment started at a later stage. Several techniques, including Decision Tree, Random Forest, Support Vector Machine, Gradient Boosting, and Voting classifiers, have been employed to identify the best parameters for Alzheimer's disease prediction. Predictions are based on Open Access Series of Imaging Studies (OASIS) data, and the performance of the ML models is measured with Precision, Recall, Accuracy, and F1-score. The proposed classification scheme can be used by clinicians to diagnose these diseases. Early diagnosis with these ML algorithms is highly beneficial for lowering the annual mortality rate of Alzheimer's disease. The proposed work shows better results, with a best validation average accuracy of 83% on the AD test data, a test accuracy score significantly higher than those of existing works.
Keywords: Alzheimer's disease (AD); feature selection; healthcare; machine learning; prediction
Year: 2022 PMID: 35309200 PMCID: PMC8927715 DOI: 10.3389/fpubh.2022.853294
Source DB: PubMed Journal: Front Public Health ISSN: 2296-2565
Summary of recent work related to AD.
| Study | Dataset/modality | Technique | Result |
|---|---|---|---|
| Khan et al. | Image modality | Machine learning and deep learning models | Study of different ML and DL approaches, and of different databases related to brain disease |
| Saratxaga et al. | OASIS dataset | Deep learning and image processing techniques | 88% |
| Sudharsan and Thailambal | ADNI dataset | Machine learning models | 75% |
| Helaly et al. | ADNI dataset | Convolutional neural networks | 93% for multiclass AD stages |
| Shakila Basheer et al. | OASIS dataset | Deep neural networks | 92% |
| Martinez-Murcia et al. | ADNI dataset | Deep learning using convolutional autoencoders | 80% |
| Prajapati et al. | ADNI dataset | Deep neural network binary classifier | 85% |
Dataset description.
| No. | Attribute | Description |
|---|---|---|
| 1 | ID | Identification |
| 2 | M/F | Gender (M if Male, F if Female) |
| 3 | Hand | Handedness |
| 4 | Age | Age in years |
| 5 | EDUC | Years of education |
| 6 | SES | Socioeconomic status |
| 7 | MMSE | Mini Mental State Examination |
| 8 | CDR | Clinical Dementia Rating |
| 9 | eTIV | Estimated Total Intracranial Volume |
| 10 | nWBV | Normalized Whole Brain Volume |
| 11 | ASF | Atlas Scaling Factor |
| 12 | Delay | MRI delay (days since the first visit) |
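The attributes above can be prepared for the classifiers with a few standard preprocessing steps. The sketch below is a minimal, hypothetical illustration: the column names follow the table, but the subject values are invented, and median imputation for SES/MMSE and a CDR-based binary target are assumed rather than taken from the paper.

```python
import pandas as pd

# Hypothetical mini-sample mimicking the OASIS attributes listed above;
# the values are illustrative, not actual OASIS records.
df = pd.DataFrame({
    "M/F": ["M", "F", "F", "M"],
    "Age": [71, 75, 88, 80],
    "EDUC": [12, 16, 14, 8],
    "SES": [2.0, None, 3.0, 4.0],   # SES has missing values in OASIS
    "MMSE": [28, 27, None, 22],     # MMSE can also be missing
    "CDR": [0.0, 0.0, 0.5, 0.5],
    "eTIV": [1450, 1300, 1390, 1510],
    "nWBV": [0.74, 0.72, 0.70, 0.68],
    "ASF": [1.05, 1.18, 1.10, 0.99],
})

# Encode gender numerically and impute missing SES/MMSE with the median,
# a common choice for these two attributes in OASIS-based studies.
df["M/F"] = df["M/F"].map({"M": 1, "F": 0})
df["SES"] = df["SES"].fillna(df["SES"].median())
df["MMSE"] = df["MMSE"].fillna(df["MMSE"].median())

# Binary target: CDR > 0 indicates some degree of dementia.
df["Demented"] = (df["CDR"] > 0).astype(int)
print(df[["M/F", "SES", "MMSE", "Demented"]])
```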
Figure 1. Proposed workflow.
Min, max, and median values of each attribute.
| Attribute | Min | Max | Median |
|---|---|---|---|
| EDUC | 7 | 22 | 14.2 |
| SES | 2 | 6 | 2.3 |
| MMSE | 16 | 30 | 26.2 |
| CDR | 0 | 1 | 0.3 |
| eTIV | 1,120 | 1,990 | 1,450 |
| nWBV | 0.55 | 0.81 | 0.7 |
| ASF | 0.87 | 1.43 | 1.3 |
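Per-attribute summary statistics like those in the table above can be produced with a single pandas aggregation. The sketch below uses a small hypothetical sample, not the actual OASIS data, so its numbers will not match the table.

```python
import pandas as pd

# Hypothetical subject records; attribute names follow the table above,
# but the numbers are illustrative only.
df = pd.DataFrame({
    "EDUC": [12, 16, 14, 8, 18],
    "SES":  [2, 3, 2, 4, 3],
    "MMSE": [28, 30, 26, 22, 29],
    "CDR":  [0.0, 0.0, 0.5, 1.0, 0.0],
})

# min/max/median per attribute, mirroring the table's layout
# (one row per attribute, one column per statistic).
stats = df.agg(["min", "max", "median"]).T
print(stats)
```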
Figure 2. Representation of data splitting.
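The data splitting step can be sketched with scikit-learn's `train_test_split`. The exact split ratio is not stated in this excerpt, so an 80/20 stratified split on synthetic features is assumed for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic feature matrix standing in for the OASIS attributes
# (100 subjects, 8 features, binary demented/non-demented label).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = rng.integers(0, 2, size=100)

# Stratified 80/20 split keeps the class balance similar in both partitions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
print(X_train.shape, X_test.shape)
```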
Figure 3. Analysis of demented and non-demented rates by gender (Female = 0, Male = 1).
Figure 4. Analysis of MMSE scores for the demented and non-demented patient groups.
Figure 5. (A–C) Analysis of ASF, eTIV, and nWBV for the demented and non-demented groups.
Figure 6. Analysis of years of education.
Figure 7. Analysis of demented and non-demented groups by age.
Performance comparison of different ML models.
| Model | Accuracy | Precision | Recall | F1-score |
|---|---|---|---|---|
| Decision tree classifier | 80.46% | 0.80 | 0.79 | 0.78 |
| Random forest classifier | 86.92% | 0.85 | 0.81 | 0.80 |
| Support vector machine | 81.67% | 0.77 | 0.70 | 0.79 |
| XGBoost | 85.92% | 0.85 | 0.83 | 0.85 |
| Voting classifier | 85.12% | 0.83 | 0.83 | 0.85 |
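A comparison like the one in the table can be reproduced with scikit-learn. The sketch below is a minimal stand-in, not the paper's pipeline: the data are synthetic, `GradientBoostingClassifier` is substituted for XGBoost to keep the example dependency-free, and the paper's hyperparameters are not reproduced, so the scores will differ from the table.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score)
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the OASIS feature matrix (binary target).
X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

base = [
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("rf", RandomForestClassifier(random_state=0)),
    ("svm", SVC(random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
]
models = dict(base)
# Hard voting takes the majority class vote of the base classifiers.
models["voting"] = VotingClassifier(estimators=base, voting="hard")

results = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    results[name] = {
        "accuracy": accuracy_score(y_te, pred),
        "precision": precision_score(y_te, pred),
        "recall": recall_score(y_te, pred),
        "f1": f1_score(y_te, pred),
    }
    print(name, results[name])
```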
Figure 8. Confusion matrix for decision tree.
Figure 13. Confusion matrix for hard voting classifier.
Figure 14. Comparison of accuracy.
Figure 17. Comparison of F1 score.