| Literature DB >> 34109317 |
SeyyedPooya HekmatiAthar1, Hilda Goins1, Raymond Samuel2, Grace Byfield3, Mohd Anwar1.
Abstract
The World Health Organization estimates that approximately 10 million people are newly diagnosed with dementia each year and that the global prevalence is nearly 50 million persons with dementia (PwD). The vast majority of PwD live at home and receive most of their care from informal familial caregivers. The quality of life (QOL) of familial caregivers may be significantly impacted by their caregiving responsibilities and the resultant caregiver burden. A major contributor to caregiver burden is the unpredictable occurrence of agitation in PwD and familial caregivers' lack of preparedness to manage these episodes. Caregiver burden may be reduced if impending agitation episodes can be forecast. In this study, we leverage data-driven deep learning models to predict agitation episodes in PwD. We used Long Short-Term Memory (LSTM), a class of deep learning algorithms, to forecast agitation up to 30 min before actual agitation events. In particular, we managed missing data by estimating the missing values and compensated for the class imbalance challenge by down-sampling the majority class. The simulations were based on real-world data collected from the home environments of Alzheimer's disease (AD) caregiver and PwD dyads, including ambient noise level, illumination, room temperature, atmospheric pressure (Pa), and relative humidity. Our results show the efficacy of data-driven deep learning models in predicting agitation episodes in community-dwelling AD dyads, with an accuracy of 98.6% and recall (sensitivity) of 84.8%.
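The abstract notes that class imbalance was compensated for by down-sampling the majority (non-agitation) class. A minimal sketch of random majority-class down-sampling, assuming binary labels with 1 = agitation; the 1:1 default `ratio` is an assumption, as the record does not report the sampling ratio used:

```python
import numpy as np

def downsample_majority(X, y, ratio=1.0, seed=0):
    """Randomly down-sample the majority class (label 0) so that it
    contains `ratio` times as many samples as the minority class
    (label 1 = agitation)."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    keep = rng.choice(majority, size=int(ratio * len(minority)), replace=False)
    idx = np.concatenate([minority, keep])
    rng.shuffle(idx)
    return X[idx], y[idx]

# Toy example: 95 non-agitation windows, 5 agitation windows
X = np.arange(100).reshape(100, 1)
y = np.array([1] * 5 + [0] * 95)
Xb, yb = downsample_majority(X, y)
# yb now holds 5 positives and 5 negatives
```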
Keywords: Agitation; Caregiver burden; Data-driven forecasting; Deep learning models; Long Short-Term Memory (LSTM); Persons with dementia (PwD)
Year: 2021 PMID: 34109317 PMCID: PMC8179095 DOI: 10.1007/s42979-021-00708-3
Source DB: PubMed Journal: SN Comput Sci ISSN: 2661-8907
Fig. 1 Reference points for the estimation of missing values
Fig. 2 Left: before applying Eq. 1, missing values are filled with zeros. Right: after applying Eq. 1
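Eq. 1 itself is not reproduced in this record; per the Fig. 1 and Fig. 2 captions, zero-filled gaps are replaced by estimates anchored at reference points on either side. A plausible sketch using linear interpolation between the nearest valid neighbors (an assumption, not necessarily the paper's exact formula):

```python
import numpy as np

def fill_missing(series):
    """Replace missing readings (NaN) with values linearly interpolated
    between the nearest valid reference points on either side."""
    s = np.asarray(series, dtype=float)
    missing = np.isnan(s)
    s[missing] = np.interp(np.flatnonzero(missing),
                           np.flatnonzero(~missing),
                           s[~missing])
    return s

# e.g. a temperature trace with two dropped samples
fill_missing([20.0, np.nan, np.nan, 23.0, 24.0])
# → [20.0, 21.0, 22.0, 23.0, 24.0]
```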
Summary of trained models (Models 1 and 2: no dimension reduction; Models 3 and 4: PCA dimension reduction)

| | Model 1 | Model 2 | Model 3 | Model 4 |
|---|---|---|---|---|
| Dimension reduction | None | None | PCA | PCA |
| Model topology | MLP | LSTM | MLP | LSTM |
| Inputs | 600 | 20 | 300 (PCA-reduced) | 10 (PCA-reduced) |
| Number of memory blocks | N/A | 32 | N/A | 32 |
| Neurons in hidden layers | 10 (2 layers) | 10 | 10 (2 layers) | 10 |
| Loss function | MSE | MSE | MSE | MSE |
| Loss value after training | 0.07518997 | 0.11668854 | 0.06723165 | 0.06448663 |
| Optimizer | Adam | Adam | Adam | Adam |
| Number of parameters | 6131 | 7125 | 3131 | 5845 |
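The "Number of parameters" row can be reproduced with the standard layer parameter-count formulas, assuming a single-unit output layer after the hidden layers (an assumption not stated in the record, but one under which all four counts match exactly):

```python
def lstm_params(units, inputs):
    """Trainable parameters of a standard LSTM layer: four gates, each
    with (inputs + units) weights and one bias per unit."""
    return 4 * (units * (inputs + units) + units)

def dense_params(n_in, n_out):
    """Fully connected layer: one weight per input-output pair plus biases."""
    return n_in * n_out + n_out

# MLPs: inputs -> Dense(10) -> Dense(10) -> Dense(1)
model1 = dense_params(600, 10) + dense_params(10, 10) + dense_params(10, 1)
model3 = dense_params(300, 10) + dense_params(10, 10) + dense_params(10, 1)
# LSTMs: inputs per step -> LSTM(32) -> Dense(10) -> Dense(1)
model2 = lstm_params(32, 20) + dense_params(32, 10) + dense_params(10, 1)
model4 = lstm_params(32, 10) + dense_params(32, 10) + dense_params(10, 1)
# model1 == 6131, model2 == 7125, model3 == 3131, model4 == 5845
```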
Fig. 3 The employed pipeline for training models
Results of trained models

| | Model 1 (MLP) | Model 2 (LSTM) | Model 3 (MLP, PCA) | Model 4 (LSTM, PCA) |
|---|---|---|---|---|
| Accuracy | 0.984125 | 0.967552 | 0.986319 | 0.98687 |
| Precision | 0.375 | 0.259108 | 0.536946 | 0.51581 |
| Recall | 0.219334 | 0.722177 | 0.088546 | 0.848091 |
| F1-score | 0.276781 | 0.381381 | 0.152022 | 0.641475 |
| False negative | 961 | 342 | 1122 | 187 |
| False positive | 450 | 2542 | 94 | 980 |
| True negative | 87,200 | 85,108 | 87,556 | 86,670 |
| True positive | 270 | 889 | 109 | 1044 |
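The metric rows follow directly from the confusion-matrix counts; for example, the Model 4 column:

```python
def metrics(tp, fp, fn, tn):
    """Standard classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Model 4 counts from the table
acc, prec, rec, f1 = metrics(tp=1044, fp=980, fn=187, tn=86670)
# → acc ≈ 0.98687, prec ≈ 0.51581, rec ≈ 0.848091, f1 ≈ 0.641475
```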
Fig. 4 Comparison of models' performances on different datasets