Aimilia Papagiannaki, Evangelia I Zacharaki, Gerasimos Kalouris, Spyridon Kalogiannis, Konstantinos Deltouzos, John Ellul, Vasileios Megalooikonomou.
Abstract
The physiological monitoring of older people using wearable sensors has shown great potential in improving their quality of life and preventing undesired events related to their health status. Nevertheless, creating robust predictive models from data collected unobtrusively in home environments can be challenging, especially for the vulnerable ageing population. Under that premise, we propose an activity recognition scheme for older people that exploits feature extraction and machine learning, along with heuristic computational solutions, to address the challenges of inconsistent measurements in non-standardized environments. In addition, we compare the customized pipeline with deep learning architectures, such as convolutional neural networks, applied to raw sensor data without any pre- or post-processing adjustments. The results demonstrate that the generalizable deep architectures can compensate for inconsistencies during data acquisition, providing a valuable alternative.
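The abstract does not spell out the hand-crafted features, but pipelines of this kind typically slice the tri-axial acceleration stream into fixed-length windows and compute simple statistics per window before feeding a classifier such as an SVM. A minimal sketch, assuming an illustrative feature set (per-axis mean and standard deviation plus signal magnitude area); the `extract_features` helper is hypothetical, not the paper's code:

```python
import statistics

def extract_features(window):
    """Compute simple time-domain features from one window of
    tri-axial accelerometer samples [(ax, ay, az), ...].
    This feature set is illustrative, not the paper's exact set."""
    feats = []
    for axis in range(3):
        vals = [s[axis] for s in window]
        feats.append(statistics.mean(vals))    # per-axis mean
        feats.append(statistics.pstdev(vals))  # per-axis std. deviation
    # signal magnitude area, a common activity-recognition feature
    sma = sum(abs(ax) + abs(ay) + abs(az) for ax, ay, az in window) / len(window)
    feats.append(sma)
    return feats

# toy window: 4 samples of a roughly stationary posture (gravity on z)
window = [(0.0, 0.1, 9.8), (0.1, 0.0, 9.8), (0.0, 0.1, 9.8), (0.1, 0.0, 9.8)]
print(extract_features(window))
```

Each window's feature vector would then be passed to the classifier; the windowing length and overlap are tuning choices not stated in the abstract.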
Keywords: activity recognition; convolutional neural networks; deep learning; physiological monitoring; support vector machine (SVM) classification; wearable devices
Year: 2019 PMID: 30791587 PMCID: PMC6412200 DOI: 10.3390/s19040880
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Pipeline of the activity recognition methodology.
Figure 2. The 3 investigated optimized deep network architectures (CNN1, CNN2, CNN3). Each convolutional layer is followed by a normalization layer and a rectified linear unit (ReLU) activation unit, which are not illustrated in the figure due to space limitations. The numbers before the character “@” indicate the depth dimension, whereas the numbers following “@” indicate the size of the feature maps.
Figure 3. Acceleration signals while performing activities of daily living (ADLs).
Characteristics of recordings and data split for evaluation. Orientation consistency could not be visually determined across devices, and is therefore not marked.
| ID | WWBS | Probably Correct Orientation | Used for Training | Older Adults |
|---|---|---|---|---|
| 3087 | yes | yes | yes | yes |
| 3098 | yes | yes | yes | yes |
| 3104 | yes | yes | yes | yes |
| 3116 | yes | yes | yes | yes |
| 3117 | yes | yes | yes | yes |
| 3593 | yes | yes | yes | yes |
| 3600 | yes | yes | yes | yes |
| 3601 | yes | yes | yes | yes |
| 1117 | yes | yes | no | yes |
| 2101 | yes | yes | no | yes |
| 2113 | yes | yes | no | yes |
| 2615 | yes | yes | no | yes |
| 3084 | yes | yes | no | yes |
| 3091 | yes | yes | no | yes |
| 3112 | yes | yes | no | yes |
| 3118 | yes | yes | no | yes |
| 1507 | yes | no | no | yes |
| 1538 | yes | no | no | yes |
| 2094 | no | — | no | yes |
| 2102 | no | — | no | yes |
| 9000 | no | — | no | no |
| 9001 | no | — | no | no |
WWBS: wearable wireless body area network system.
Mean confusion matrix on the test set using the latest sensor device.
| Classes | Sit/Stand | Laying | Walking | Walking up/down | Transition |
|---|---|---|---|---|---|
| Sit/Stand | 96.08 | 0 | 0.76 | 0 | 3.16 |
| Laying | 0 | 86.75 | 1.65 | 0 | 11.60 |
| Walking | 8.26 | 0 | 74.33 | 1.56 | 15.85 |
| Walking up/down | 0 | 0 | 100 | 0 | 0 |
| Transition | 36.07 | 2.73 | 18.03 | 0 | 43.17 |
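The rows of the matrix above sum to 100, i.e. each row reports the percentage of windows of that true class assigned to each predicted class. A minimal sketch of how such row-normalized percentages are derived from raw counts (the toy counts below are invented for illustration):

```python
def row_normalize(confusion):
    """Convert raw confusion counts to row-normalized percentages,
    so each row sums to 100 as in the matrix above."""
    out = []
    for row in confusion:
        total = sum(row)
        out.append([100.0 * c / total if total else 0.0 for c in row])
    return out

# toy counts for 3 classes; these numbers are invented for illustration
counts = [[49, 0, 1],
          [2, 46, 2],
          [5, 0, 45]]
print(row_normalize(counts))
```

Note that row normalization hides class imbalance: a class with few test windows (such as the transitions here) contributes as visibly as a frequent one.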
Performance of the orientation-sensitive model and the rotation-invariant (surrogate) model on recordings with rotated axes.
| Subject | Orientation-Sensitive Model Accuracy (%) | Surrogate Model Accuracy (%) | Increase (%) |
|---|---|---|---|
| 1 | 19.4 | 60.2 | 40.8 |
| 2 | 19.6 | 60.4 | 40.8 |
| 3 | 23.1 | 72.6 | 49.5 |
| 4 | 29.8 | 78.0 | 48.2 |
| 5 | 25.3 | 64.5 | 39.3 |
| 6 | 7.0 | 69.2 | 62.1 |
| 7 | 7.2 | 87.1 | 79.9 |
| 8 | 25.4 | 78.5 | 53.2 |
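One common way to obtain a rotation-invariant (surrogate) representation, consistent with the large accuracy gains shown above, is to replace per-axis features with quantities that do not depend on device orientation, such as the Euclidean norm of each acceleration sample. A minimal sketch (the `magnitude` helper is illustrative, not necessarily the paper's exact transformation):

```python
import math

def magnitude(ax, ay, az):
    """Euclidean norm of a tri-axial acceleration sample; invariant
    to any rotation of the device axes, unlike per-axis features."""
    return math.sqrt(ax * ax + ay * ay + az * az)

# the same physical sample before and after a 90-degree axis swap
print(magnitude(0.0, 0.3, 9.8))
print(magnitude(9.8, 0.0, 0.3))  # identical magnitude
```

The price of this invariance is a loss of directional information (e.g. lying vs. standing both leave the magnitude near gravity), which is why the surrogate model still trails the orientation-sensitive model on correctly oriented recordings.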
Figure 4. Sample distribution across classes.
Optimized hyper-parameters on the FrailSafe dataset using SigOpt (SGD: stochastic gradient descent).
| Hyper-Parameter | CNN1 (Value) | CNN2 (Value) | CNN3 (Value) | CNN1 (Contrib.) | CNN2 (Contrib.) | CNN3 (Contrib.) |
|---|---|---|---|---|---|---|
| Batch | 100 | 100 | 59 | 2.14% | 1.50% | 1.62% |
| Dense layer size | 583 | 1000 | 773 | 1.17% | 1.82% | 1.92% |
| Dropout prob. | 0.6 | 0.39 | 0.6 | 2.68% | 2.20% | 1.29% |
| Epochs | 100 | 100 | 100 | 3.90% | 2.65% | 3.76% |
| Filter 1 | 65 | 100 | 59 | 2.38% | 1.95% | 2.06% |
| Filter 2 | 100 | 57 | 94 | 2.12% | 2.41% | 1.88% |
| Filter 3 | 45 | 10 | 58 | 1.78% | 1.17% | 1.29% |
| Learning rate | 0.0330 | 0.0480 | 0.1000 | 17.49% | 9.46% | 26.61% |
| Regulariz. rate | 0.0030 | 0.0001 | 0.0001 | 16.83% | 9.32% | 32.63% |
| Optimizer | SGD | SGD | SGD | 48.72% | 67.52% | 26.89% |
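Since the table attributes most of the tuning impact to the optimizer, learning rate, and regularization rate, it may help to recall what a single SGD step with L2 regularization looks like. A minimal sketch using the CNN1 values from the table as defaults (the `sgd_step` helper is illustrative, not the paper's code):

```python
def sgd_step(weights, grads, lr=0.033, reg=0.003):
    """One plain SGD update with L2 regularization (weight decay).
    Defaults are the CNN1 learning and regularization rates above."""
    return [w - lr * (g + reg * w) for w, g in zip(weights, grads)]

w = sgd_step([1.0, -2.0], [0.5, 0.5])
print(w)
```

The regularization term pulls each weight toward zero in proportion to its magnitude, which is why the regularization rate can carry a large share of the tuned performance.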
Average classification accuracy in 4-fold stratified cross-validation.
|  | CNN1 | CNN2 | CNN3 |
|---|---|---|---|
| Test accuracy (%) | 81.91 (±2.45) | 78.49 (±3.66) | 82.47 (±4.24) |
| Train accuracy (%) | 90.64 (±1.34) | 90.86 (±0.83) | 91.84 (±1.17) |
Figure 5. Average confusion matrices of the classification of the test set using the 3 investigated deep network architectures.
Works on physical activity recognition of older people in real-world settings.
| Study | Sensor/Location | Measurement | Method | Cross-Val. | Inter-Subj.* | Accuracy |
|---|---|---|---|---|---|---|
| Current study | IMU at sternum | acceler. | SVM | yes | yes | 81.7% |
| Current study | IMU at sternum | acceler., gyroscope, magnetometer | CNN3 | yes | yes | 82.47% |
| [ ] | Smart watch | acceler., temperature, altitude | NNs, SVM | yes | no | 90.23% |
| [ ] | IMUs at sternum and thigh | orientation, acceler., angular velocity | Rule-based | no | no | 97.2% |
| [ ] | Instrumented shoes | foot loading, orientation, elevation | Decision Tree | no | no | 97.41% |
* Inter-subject analysis means the method is assessed on measurements from subjects not used during construction of the classification model. Cross-val.: cross-validation; Inter-subj.: inter-subject analysis; IMU: inertial measurement unit.