| Literature DB >> 31509999 |
Nader Naghavi, Aaron Miller, Eric Wade.
Abstract
Freezing of gait (FoG) is a common motor symptom in patients with Parkinson's disease (PD). FoG impairs gait initiation and walking and increases fall risk. Intelligent external cueing systems implementing FoG detection algorithms have been developed to help patients recover gait after freezing. However, predicting FoG before it occurs would enable preemptive cueing and might prevent FoG altogether. Such prediction remains challenging given the relative infrequency of freezing compared to non-freezing events. In this study, we investigated the ability of individual and ensemble classifiers to predict FoG. We also studied the effect of the ADAptive SYNthetic (ADASYN) sampling algorithm and of misclassification cost on classifier performance. Eighteen PD patients performed a series of daily walking tasks wearing accelerometers on their ankles; nine experienced FoG. The ensemble classifier formed by Support Vector Machines, K-Nearest Neighbors, and a Multi-Layer Perceptron using bagging techniques demonstrated the highest performance (F1 = 90.7) when synthetic FoG samples were added to the training set and the FoG class cost was set to twice that of normal gait. The model identified 97.4% of the events, with 66.7% being predicted. This study demonstrates our algorithm's potential for accurate prediction of gait events and the provision of preventive cueing despite the limited frequency of freezing events.
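The class-balancing step described in the abstract rests on the ADASYN idea: synthetic minority (FoG) samples are generated by interpolating between minority points, with more samples placed where minority points are surrounded by majority neighbors. The sketch below is a minimal numpy illustration of that idea; the function name `adasyn_like`, its parameterization, and the toy data are our assumptions, not the authors' implementation.

```python
import numpy as np

def adasyn_like(X_min, X_maj, beta=1.0, k=5, rng=None):
    """Minimal ADASYN-style oversampling sketch (hypothetical helper).

    For each minority sample, the number of synthetic points is
    proportional to the fraction of majority points among its k nearest
    neighbours in the combined set, so harder-to-learn regions get more
    synthetic data. beta scales the total amount: the minority class is
    grown by roughly beta * (len(X_maj) - len(X_min)) points.
    """
    rng = np.random.default_rng(rng)
    X_all = np.vstack([X_min, X_maj])
    is_maj = np.concatenate([np.zeros(len(X_min)), np.ones(len(X_maj))])
    G = int(beta * (len(X_maj) - len(X_min)))   # total synthetic samples
    if G <= 0:
        return np.empty((0, X_min.shape[1]))
    # ratio of majority neighbours around each minority point
    r = []
    for x in X_min:
        d = np.linalg.norm(X_all - x, axis=1)
        nn = np.argsort(d)[1:k + 1]             # skip the point itself
        r.append(is_maj[nn].mean())
    r = np.asarray(r)
    r = r / r.sum() if r.sum() > 0 else np.full(len(X_min), 1 / len(X_min))
    # interpolate toward random minority neighbours
    synth = []
    for i, g_i in enumerate(np.round(r * G).astype(int)):
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]
        for _ in range(g_i):
            j = rng.choice(nn)
            lam = rng.random()
            synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    if not synth:
        return np.empty((0, X_min.shape[1]))
    return np.array(synth)
```

Because every synthetic point is a convex combination of two minority points, the generated samples stay inside the minority class's region of feature space, which is what makes this safer than naive duplication for imbalanced FoG data.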
Keywords: ADASYN; Parkinson’s disease; cost of classification; data synthesis; ensemble classifier; freezing of gait; wearable sensors
Year: 2019 PMID: 31509999 PMCID: PMC6767263 DOI: 10.3390/s19183898
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Experiment layout. The number of obstacles in the object area varied between 0, 1, and 2. The width of the walking path in the object area (w) varied between 100% and 150% of the participants' shoulder width.
Set of features extracted from each window of data.
| Feature | Description |
|---|---|
| Freeze Index | The power in the freeze band (3–8 Hz) divided by the power in the locomotion band (0.5–3 Hz), computed from the FFT of the acceleration signal |
| Sample Entropy | The negative logarithm of the conditional probability that data subsets of length m + 1 match, given that subsets of length m match |
| Power | Total power in the freeze and locomotion bands (0.5–8 Hz) of the signal |
| Standard Deviation | Dispersion of the data points in the window around their mean |
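The band-power features and standard deviation are direct computations, and sample entropy follows its usual definition. Below is a numpy sketch assuming a single-axis acceleration window `acc` sampled at `fs` Hz; the function names, the demeaning step, and the tolerance choice are our assumptions, with band edges taken from the table.

```python
import numpy as np

def window_features(acc, fs):
    """Freeze Index, total power, and standard deviation for one window
    of one sensor axis (a sketch of the paper's feature set)."""
    spec = np.abs(np.fft.rfft(acc - acc.mean())) ** 2   # power spectrum
    freqs = np.fft.rfftfreq(len(acc), d=1.0 / fs)
    loco = spec[(freqs >= 0.5) & (freqs < 3.0)].sum()    # locomotion band
    freeze = spec[(freqs >= 3.0) & (freqs <= 8.0)].sum() # freeze band
    return {
        "freeze_index": freeze / loco if loco > 0 else np.inf,
        "power": freeze + loco,          # total power, 0.5-8 Hz
        "std": acc.std(),
    }

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts subsequence pairs of
    length m within tolerance r * std (Chebyshev distance) and A counts
    the same for length m + 1."""
    x = np.asarray(x, float)
    tol = r * x.std()
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templ)):
            d = np.abs(templ[i + 1:] - templ[i]).max(axis=1)
            c += (d <= tol).sum()
        return c
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```

A regular walking signal concentrates power below 3 Hz, so its Freeze Index is small; power shifting into the 3–8 Hz band during trembling-in-place pushes the index up, which is why it is the classic FoG marker.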
Figure 2. The process of creating samples from acceleration signals. (a) Extracting features from six successive windows at time t = T (left) and at the next time step (right). The red highlighted area shows the FoG-labeled period from the recorded videos; the green boxes show the length of the windows used to extract features from the acceleration signal. (b) Combining arrays of features from different sensor–axis combinations.
Confusion matrix (NG: normal gait, FoG: freezing of gait).
| Actual \ Predicted | NG | FoG |
|---|---|---|
| NG | True Negative (TN) | False Positive (FP) |
| FoG | False Negative (FN) | True Positive (TP) |
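The performance measures reported in the tables below follow directly from this confusion matrix, with FoG treated as the positive class. A short reference sketch (the helper name is ours):

```python
def gait_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity, and F1 from the confusion matrix,
    with FoG as the positive class."""
    sensitivity = tp / (tp + fn)   # fraction of FoG windows caught
    specificity = tn / (tn + fp)   # fraction of normal-gait windows kept
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1
```

For example, catching 90 of 100 true FoG windows while raising 5 false alarms over 100 normal-gait windows gives sensitivity 0.90, specificity 0.95, and F1 ≈ 0.923.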
Performance of classifiers in patient-dependent models using the original imbalanced dataset and equal misclassification costs (results show the average performance across all seven participants). The bold values show the highest performance among all the classifiers.
| Classifier | Sensitivity (%) | Specificity (%) | F1 (%) |
|---|---|---|---|
| SVM | | 92.3 | 84.9 |
| KNN | 82.0 | 95.3 | 86.0 |
| MLP | 82.4 | 94.5 | 85.4 |
| ClsfBagging | 85.2 | | |
| ClsfBoost | 85.1 | 94.2 | 85.8 |
| AdaBoost | 82.0 | 94.6 | 82.8 |
| TreeBagger | 83.9 | 95.4 | 85.8 |
| RandomForest | 82.9 | 95.4 | 85.6 |
Figure 3. Performance measures of classifiers for patient-dependent models using synthetic data and cost of misclassification.
Best-performing classifiers in patient-dependent models (results show the average performance across all seven participants). The bold values show the highest performance among all the classifiers.
| Classifier | NG cost | FoG cost | β | Sensitivity (%) | Specificity (%) | F1 (%) |
|---|---|---|---|---|---|---|
| KNN | 1 | 3 | 1 | | 88.6 | 83.4 |
| KNN | 1 | 1 | 0 | 78.5 | | 83.9 |
| ClsfBagging | 1 | 2 | 0.2 | 90.8 | 95.0 | |
| ClsfBagging | 1 | 1 | 0.2 | 90.5 | 95.5 | |
| ClsfBagging | 1 | 2 | 0.5 | 90.2 | 94.7 | |
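The ClsfBagging rows above combine base learners trained on bootstrap resamples through majority voting. The toy implementation below illustrates the bagging mechanism only; a nearest-centroid rule stands in for the SVM/KNN/MLP base models, and the class name and all details are our simplification, not the paper's code.

```python
import numpy as np

class MajorityVoteBagging:
    """Toy bagging ensemble sketch: each base learner is fit on a
    bootstrap resample of the training data, and predictions are
    combined by majority vote across learners."""

    def __init__(self, n_estimators=15, rng=0):
        self.n_estimators = n_estimators
        self.rng = np.random.default_rng(rng)
        self.centroids_ = []

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        for _ in range(self.n_estimators):
            idx = self.rng.integers(0, len(X), len(X))  # bootstrap sample
            Xb, yb = X[idx], y[idx]
            # one nearest-centroid "base model" per resample
            self.centroids_.append(
                np.array([Xb[yb == c].mean(axis=0) for c in self.classes_]))
        return self

    def predict(self, X):
        votes = np.zeros((len(X), len(self.classes_)))
        for cent in self.centroids_:
            d = np.linalg.norm(X[:, None, :] - cent[None, :, :], axis=2)
            votes[np.arange(len(X)), d.argmin(axis=1)] += 1
        return self.classes_[votes.argmax(axis=1)]
```

Bagging reduces the variance of unstable base learners; cost-sensitive variants like the ones in the table would additionally weight FoG errors more heavily when fitting each base model.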
Performance of classifiers in patient-independent models using the original imbalanced dataset and equal misclassification costs (results show the average performance across all seven participants). The bold values show the highest performance among all the classifiers.
| Classifier | Sensitivity (%) | Specificity (%) | F1 (%) |
|---|---|---|---|
| SVM | 76.9 | 87.5 | 71.3 |
| KNN | 68.2 | 93.3 | 69.6 |
| MLP | 67.9 | 89.8 | 63.8 |
| ClsfBagging | 72.1 | | |
| ClsfBoost | 76.9 | 87.5 | 71.3 |
| AdaBoost | 73.8 | 92.6 | 72.2 |
| TreeBagger | 76.5 | 90.2 | 72.9 |
| RandomForest | | 91.7 | |
Figure 4. Performance measures of classifiers for patient-independent models using synthetic data and cost of misclassification.
Best-performing classifiers in patient-independent models (results show the average performance across all seven participants). The bold values show the highest performance among all the classifiers.
| Classifier | NG cost | FoG cost | β | Sensitivity (%) | Specificity (%) | F1 (%) |
|---|---|---|---|---|---|---|
| KNN | 1 | 3 | 1 | | 82.2 | 67.0 |
| ClsfBagging | 1 | 1 | 0 | 72.1 | | 74.5 |
| ClsfBagging | 1 | 1 | 0.5 | 83.3 | 90.1 | |
| RandomForest | 1 | 2 | 0.2 | 83.8 | 90.2 | |
| TreeBagger | 1 | 2 | 0 | 81.3 | 91.4 | |
Figure 5. Acceleration signal collected from the right ankle sensor and the corresponding labeled and detected events using ClsfBagging.
Figure 6. Average FoG detection latency of ClsfBagging in patient-dependent models. Negative time values represent the duration before FoG onset.