| Literature DB >> 34883795 |
Ge Gao, Zhixin Li, Zhan Huan, Ying Chen, Jiuzhen Liang, Bangwen Zhou, Chenhui Dong.
Abstract
With the rapid development of computing and sensor technology, inertial sensor data have been widely used in human activity recognition. Most relevant studies divide human activities into basic actions and transitional actions: basic actions are classified with a unified feature set, while the category of a transitional action is usually determined from context information. Because no single existing method recognizes both kinds of activity well, this paper proposes a human activity classification and recognition model based on smartphone inertial sensor data. The model fully accounts for the feature differences between actions of different properties: it segments the inertial sensor data with a fixed sliding window, then extracts features from actions of different attributes and recognizes them with different classifiers. The experimental results show that dynamic and transitional actions obtained the best recognition performance on support vector machines, while static actions were classified best by ensemble classifiers. As for feature selection, frequency-domain features gave the highest recognition rate for dynamic actions, up to 99.35%, while time-domain features gave higher recognition rates for static and transitional actions, 98.40% and 91.98%, respectively.
Keywords: classifier selection; frequency-domain characteristics; human activity recognition; sliding window segmentation; wearable sensor
Year: 2021 PMID: 34883795 PMCID: PMC8659462 DOI: 10.3390/s21237791
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
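The fixed sliding-window segmentation described in the abstract can be sketched as follows. The 128-sample window (2.56 s at 50 Hz) and 50% overlap are assumptions typical for smartphone inertial data, not values stated in this record:

```python
import numpy as np

def sliding_windows(signal, window_size=128, overlap=0.5):
    """Split a 1-D sensor stream into fixed-length overlapping windows."""
    step = int(window_size * (1 - overlap))
    n_windows = (len(signal) - window_size) // step + 1
    return np.stack([signal[i * step : i * step + window_size]
                     for i in range(n_windows)])

# 20 s of a 50 Hz stream -> 2.56 s windows with 50% overlap
stream = np.arange(1000.0)
windows = sliding_windows(stream)
print(windows.shape)  # (14, 128)
```

Each window is then passed to feature extraction; trailing samples that do not fill a whole window are dropped in this sketch.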
Figure 1. (a) Traditional HAR recognition model; (b) multi-feature and multi-classifier action recognition model.
Time-domain and frequency-domain features extracted from each window.
| Domain | Features |
|---|---|
| Time | Max, Min, Range, Mean, Standard deviation |
| Frequency | Center of gravity frequency, Mean square frequency, Root mean square frequency |
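The features in the table can be computed per window roughly as below. The 50 Hz sampling rate and the use of the one-sided FFT power spectrum are assumptions; the record does not specify how the spectral features were derived:

```python
import numpy as np

def time_features(w):
    """Time-domain features from the table: max, min, range, mean, std."""
    return {"max": w.max(), "min": w.min(), "range": w.max() - w.min(),
            "mean": w.mean(), "std": w.std()}

def frequency_features(w, fs=50.0):
    """Frequency-domain features from the table, computed on the one-sided
    power spectrum: center of gravity, mean square and RMS frequency."""
    power = np.abs(np.fft.rfft(w)) ** 2
    freqs = np.fft.rfftfreq(len(w), d=1.0 / fs)
    fc = (freqs * power).sum() / power.sum()        # center of gravity frequency
    msf = (freqs ** 2 * power).sum() / power.sum()  # mean square frequency
    return {"fc": fc, "msf": msf, "rmsf": np.sqrt(msf)}

# A pure 5 Hz tone should place the spectral center of gravity at ~5 Hz.
w = np.sin(2 * np.pi * 5 * np.arange(100) / 50.0)
print(frequency_features(w)["fc"])
```

For a pure tone sampled over an integer number of cycles, all three spectral features collapse onto the tone frequency, which makes a convenient sanity check.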
Figure 2. Percentage of different action categories in the dataset.
Figure 3. Feature extraction and classifier training of the model.
Human daily actions.
| Traditional Classification | Actions | Actions Description | Our Classification |
|---|---|---|---|
| Basic action | A1 | Walking | Dynamic action |
| | A2 | Walking Upstairs | |
| | A3 | Walking Downstairs | |
| | A4 | Sitting | Static action |
| | A5 | Standing | |
| | A6 | Laying | |
| Transitional action | A7 | Stand-to-Sit | Transitional action |
| | A8 | Sit-to-Stand | |
| | A9 | Sit-to-Lie | |
| | A10 | Lie-to-Sit | |
| | A11 | Stand-to-Lie | |
| | A12 | Lie-to-Stand | |
Comparison of classification accuracy for basic actions using different methods.
| Reference | Classifier | Accuracy | Activities | Subject | Sensors | Features |
|---|---|---|---|---|---|---|
| Literature [ ] | DCNN | 94.18% | Sit, Stand, Lie, … | 20 | Three-axis accelerometer, … | 248 |
| | FRDCNN | 95.27% | | | | |
| Literature [ ] | DT | 93.44% | Sit, Stand, Lie, … | 30 | Three-axis accelerometer and … | 561 |
| | RF | 96.73% | | | | |
| | KNN | 96.21% | | | | |
| | LR | 98.40% | | | | |
| | SVM | 93.86% | | | | |
| | ECLF | 97.60% | | | | |
| Method 3 | DT | 93% | Sit, Stand, Lie, … | 30 | Three-axis accelerometer and … | |
| | RF | 96.13% | | | | |
| | KNN | 90% | | | | |
| | LR | 82% | | | | |
| | ECLF | 97.18% | | | | |
Performance evaluation of each activity based on SVM classifier.
| Action | Precision | Recall | F1-Score |
|---|---|---|---|
| A1 | 99.70% | 98.24% | 98.96% |
| A2 | 98.79% | 99.69% | 99.24% |
| A3 | 98.34% | 98.67% | 98.50% |
| A4 | 91.81% | 91.04% | 91.42% |
| A5 | 91.04% | 92.07% | 91.55% |
| A6 | 100% | 100% | 100% |
Figure 4. Basic action confusion matrix.
Figure 5. (a) Basic action segmentation sketch. (b) Traditional action segmentation sketch.
Figure 6. Accuracy comparison of the same features on different classifiers.
Figure 7. Action classification results under different classifiers: (a) DT; (b) LDA; (c) SVM; (d) KNN; (e) EL.
Figure 8. Comparison of recall rates under different features for (a) dynamic actions, (b) static actions, and (c) transitional actions.
Comparison of precision rate between the proposed method and other methods.
| Precision (%) | A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 | A10 | A11 | A12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AW-TD [ | 99.7 | × | × | 98.1 | 97.4 | × | 68.5 | 58.7 | 90.6 | 86.6 | × | × |
| Literature [ | 99.0 | 100 | 96.6 | 98.6 | 98.8 | 99.3 | 100 | 100 | 89.6 | 100 | 77.9 | 100 |
| Our method | 100 | 98.1 | 98.6 | 96 | 100 | 92.6 | 96.2 | | | | | |
Comparison of recall rate between the proposed method and other methods.
| Recall (%) | A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 | A10 | A11 | A12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AW-TD [ | 96.3 | × | × | 90.6 | 99.2 | × | 89.2 | 86.0 | 92.9 | 89.2 | × | × |
| Literature [ | 100 | 95.6 | 99.7 | 99.8 | 98.7 | 98.1 | 94.7 | 79.1 | 100 | 87.1 | 100 | 93.1 |
| Our method | 99.4 | 99.6 | 98.4 | 98.2 | 88.2 | 95.7 | 89.3 | | | | | |
Summary of this work.
| Action | Best Features | Best Classifier |
|---|---|---|
| Dynamic action | Frequency-domain | SVM |
| Static action | Time-domain | EL |
| Transitional action | Time-domain | SVM |
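The summary table pairs each action type with its best classifier. A minimal sketch of that routing with scikit-learn is shown below; the synthetic feature vectors stand in for the real per-window features, and `RandomForestClassifier` stands in for the paper's ensemble classifier (EL), whose exact composition is not given in this record:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic 8-dimensional feature vectors for three hypothetical action
# classes, offset by their class index so they are separable.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8)) + np.repeat(np.arange(3), 100)[:, None]
y = np.repeat(np.arange(3), 100)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

# Route each action type to its best classifier from the summary table.
svm = SVC(kernel="rbf").fit(Xtr, ytr)                            # dynamic / transitional
ensemble = RandomForestClassifier(random_state=0).fit(Xtr, ytr)  # static
print(svm.score(Xte, yte), ensemble.score(Xte, yte))
```

In the paper's pipeline, each segmented window would first be labeled by attribute (dynamic, static, or transitional), then scored by the classifier assigned to that attribute.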