Samanta Rosati, Gabriella Balestra, Marco Knaflitz.
Abstract
Human Activity Recognition (HAR) is an emerging area of interest for medical, military, and security applications. However, the identification of the features to be used for activity classification and recognition is still an open point. The aim of this study was to compare two different feature sets for HAR. Specifically, we compared a set of time, frequency, and time-frequency domain features widely used in the literature (FeatSet_A) with a set of time-domain features derived by considering the physical meaning of the acquired signals (FeatSet_B). The comparison of the two sets was based on the performances obtained using four machine learning classifiers. Sixty-one healthy subjects were asked to perform seven different daily activities while wearing a MIMU-based device. Each signal was segmented using a 5-s window, and 222 and 221 variables were extracted from each window for FeatSet_A and FeatSet_B, respectively. Each set was reduced using a Genetic Algorithm (GA) that simultaneously performed feature selection and classifier optimization. Our results showed that the Support Vector Machine achieved the highest performance with both sets (97.1% and 96.7% for FeatSet_A and FeatSet_B, respectively). However, FeatSet_B allows a better understanding of alterations of the biomechanical behavior in more complex situations, such as when applied to pathological subjects.
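The pipeline described in the abstract (5-s windowing followed by per-window feature extraction) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the 100-Hz sampling rate, the synthetic signal, and the four descriptors (`mean`, `std`, `rms`, `range`) are assumptions standing in for the 222/221 variables of the actual feature sets.

```python
import numpy as np

def segment_windows(signal, fs, win_s=5.0):
    """Split a 1-D signal into non-overlapping windows of win_s seconds.

    Trailing samples that do not fill a whole window are discarded.
    """
    n = int(fs * win_s)
    n_win = len(signal) // n
    return signal[:n_win * n].reshape(n_win, n)

def time_domain_features(window):
    """A few common time-domain descriptors computed on one window."""
    return {
        "mean": float(np.mean(window)),
        "std": float(np.std(window)),
        "rms": float(np.sqrt(np.mean(window ** 2))),
        "range": float(np.ptp(window)),  # peak-to-peak amplitude
    }

# Example: 20 s of a synthetic 100-Hz accelerometer channel
fs = 100
t = np.arange(0, 20, 1 / fs)
acc = np.sin(2 * np.pi * 1.5 * t)       # gait-like 1.5-Hz oscillation
windows = segment_windows(acc, fs)      # shape (4, 500): four 5-s windows
feats = [time_domain_features(w) for w in windows]
```

In the study each window would yield one feature vector per sensor channel; here a single channel and four descriptors keep the sketch compact.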
Keywords: MIMU; classifier optimization; feature selection; genetic algorithm; human activity recognition; machine learning; wearable sensors
Year: 2018 PMID: 30501111 PMCID: PMC6308535 DOI: 10.3390/s18124189
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Example of signals acquired by (a) accelerometer, (b) gyroscope, and (c) magnetometer during 5 s of walking of a healthy subject.
Figure 2. Example of signals acquired by accelerometer (left panels) and gyroscope (right panels) during 5 s of upright standing (panels (a,b)) and walking (panels (c,d)) of a healthy subject.
GA results for each classifier and for the two feature sets.
| Classifier | # of Selected Features (FeatSet_A) | # of Selected Features (FeatSet_B) | Classifier Parameters (FeatSet_A) | Classifier Parameters (FeatSet_B) | Accuracy on Training Set (FeatSet_A) | Accuracy on Training Set (FeatSet_B) | Accuracy on Test Set (FeatSet_A) | Accuracy on Test Set (FeatSet_B) |
|---|---|---|---|---|---|---|---|---|
| K-Nearest Neighbors | 106 | 132 | — | — | 87.7% | 86.6% | 87.7% | 86.1% |
| Feedforward Neural Network | 114 | 138 | — | — | 91.7% | 49.7% | 89.7% | 48.5% |
| Support Vector Machine | 118 | 133 | — | — | 100.0% | 99.9% | 98.5% | 96.4% |
| Decision Tree | 151 | 103 | None | None | 97.7% | 97.1% | 85.9% | 82.7% |
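The GA-based reduction reported in the table can be illustrated with a minimal binary genetic algorithm. This is a sketch under stated assumptions, not the authors' implementation: the chromosome encodes only a feature mask (the study's GA also encodes the classifier parameters), and the toy `fitness` function, which rewards selecting a known "informative" subset and penalizes set size, stands in for the cross-validated classifier accuracy used in a real wrapper.

```python
import random

def evolve(n_features, informative, pop_size=30, gens=40, p_mut=0.02, seed=0):
    """Minimal binary GA for feature selection.

    Each chromosome is a 0/1 mask over features. Fitness is a toy
    wrapper score: fraction of informative features selected minus a
    small penalty per selected feature (a stand-in for estimated
    classifier accuracy with a parsimony pressure).
    """
    rng = random.Random(seed)
    informative = set(informative)

    def fitness(mask):
        hits = sum(mask[i] for i in informative) / len(informative)
        return hits - 0.01 * sum(mask)

    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]          # keep the top half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_features)   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # bit-flip mutation
            children.append(child)
        pop = elite + children
    best = max(pop, key=fitness)
    return best, fitness(best)

# Hypothetical run: 50 candidate features, the first 10 truly informative
mask, score = evolve(n_features=50, informative=range(0, 10))
selected = [i for i, g in enumerate(mask) if g]
```

In the actual study the fitness evaluation would train and validate the chosen classifier on the windowed feature vectors, which is what makes the selected subsets classifier-specific, as in the table above.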
Figure 3. Mean accuracy (bar) and standard error (whisker) across the 61 subjects involved in the study for each dynamic activity (level walking (A3), ascending and descending stairs (A4 and A5), uphill and downhill walking (A6 and A7)), after post-processing. Four classifiers are analyzed: (a) K-Nearest Neighbors; (b) Feedforward Neural Networks; (c) Support Vector Machine; (d) Decision Tree. Asterisks (*) mark significant differences between accuracies reached by FeatSet_A and FeatSet_B (p-value < 0.05).
Figure 4. Mean accuracy (panel (a)) and F1-score (panel (b)) of the four classifiers across the seven activities (both static and dynamic activities), after post-processing, for the two sets of features.
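Figures 3 and 4 report accuracies "after post-processing". The record does not specify the rule; a common choice for smoothing window-level HAR labels is a sliding majority vote, sketched here as a hypothetical stand-in: isolated misclassified windows inside a long activity bout are relabelled to the surrounding activity.

```python
from collections import Counter

def majority_smooth(labels, k=3):
    """Replace each window label with the majority label over a
    k-window neighbourhood (k must be odd). Edges use a truncated
    neighbourhood.
    """
    assert k % 2 == 1
    h = k // 2
    out = []
    for i in range(len(labels)):
        lo, hi = max(0, i - h), min(len(labels), i + h + 1)
        out.append(Counter(labels[lo:hi]).most_common(1)[0][0])
    return out

# A single spurious "stairs" window inside a walking bout is corrected
print(majority_smooth(["walk", "walk", "stairs", "walk", "walk"]))
```

Larger `k` removes longer error bursts at the cost of blurring true activity transitions; the 5-s window length already bounds the temporal resolution of any such smoothing.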