Yiming Tian, Xitai Wang, Lingling Chen, Zuojun Liu.
Abstract
Sensor-based human activity recognition can benefit a variety of applications such as health care, fitness, smart homes, and rehabilitation training. In this paper, we propose a novel two-layer diversity-enhanced multiclassifier recognition method for human activity recognition from a single wearable accelerometer, which comprises data-based and classifier-based diversity enhancement. Firstly, we introduce the kernel Fisher discriminant analysis (KFDA) technique to spatially transform the training samples and enhance the discrimination between activities. In addition, bootstrap resampling is utilized to increase the diversity of the datasets used to train the base classifiers in the multiclassifier system. Secondly, a combined diversity measure is proposed for selecting base classifiers with both excellent performance and large diversity, optimizing the performance of the multiclassifier system. Lastly, majority voting is utilized to combine the preferred base classifiers. Experiments showed that the data-based diversity enhancement can improve the separability of samples from different activities and promote the generation of base classifiers with different structures and performances. Compared with random selection and traditional ensemble methods, including Bagging and AdaBoost, the proposed method achieved 92.3% accuracy and 90.7% recall, which demonstrates better performance in activity recognition.
Keywords: activity recognition; classifier ensembles; kernel Fisher discriminant analysis; multiclassifier design and evaluation; wearable sensor
Year: 2019 PMID: 31052314 PMCID: PMC6539368 DOI: 10.3390/s19092039
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
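The abstract outlines a two-layer pipeline: bootstrap resampling diversifies the training data for a pool of base classifiers, and majority voting combines the (selected) base classifiers. The sketch below illustrates that pipeline with a nearest-centroid base learner as a stand-in for the paper's ELM bases; all names and the toy setup are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_nearest_centroid(X, y):
    """Fit a nearest-centroid base classifier (stand-in for the paper's ELM bases)."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict_nearest_centroid(model, X):
    classes, centroids = model
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

def train_ensemble(X, y, n_classifiers=20):
    """Layer 1: bootstrap resampling gives each base classifier a different training set."""
    models = []
    for _ in range(n_classifiers):
        idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
        models.append(fit_nearest_centroid(X[idx], y[idx]))
    return models

def majority_vote(models, X):
    """Layer 2 output: combine the base classifiers by majority voting."""
    votes = np.stack([predict_nearest_centroid(m, X) for m in models])  # (n_clf, n_samples)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

On well-separated toy data, `majority_vote(train_ensemble(X, y), X)` recovers the labels; the paper additionally prunes the pool with a diversity measure before voting.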
Summary of notable activity recognition studies using wearable sensors, their multiclassifier schemes, and contributions.
| Author | Year | Activities (Number Studied) | Classifier and Accuracy | Contribution |
|---|---|---|---|---|
| Catal et al. | 2015 | Walking, upstairs, downstairs, sitting, jogging, and standing (6) | Ensemble of J48 decision tree, multilayer perceptron (MLP), and logistic regression (72.73%–98.7%) | Examining the power of an ensemble of classifiers for activity recognition |
| Lee et al. | 2014 | Still, walk, and run (3) | Mixture-of-experts (ME) model (92.56% ± 1.05%) | A global–local co-training algorithm was used to train the ME model |
| Yuan et al. | 2014 | Walking, running, standing, ascending and descending stairs (5) | Average-combining extreme learning machine (ELM) (95.02%) | A novel ensemble learning algorithm was proposed |
| Cao et al. | 2018 | Daily and sports activities dataset (18) | ELM-based ensemble pruning for sports activities dataset (0.7848 ± 0.0077) | Optimizing multisensor deployment by ensemble pruning |
| Bayat et al. | 2014 | Slow walk, fast walk, aerobic dancing, stairs up, stairs down (5) | MLP, LogitBoost, and SVM classifiers (91.15%) | Investigating different fusion methods to obtain an optimal set of classifiers |
| Ronao et al. | 2016 | Stand, walk, stairs up, stairs down, run, and lying (6) | Deep convolutional neural network; 94.79% accuracy with raw sensor data | Exploiting the inherent characteristics of activities via smartphone sensors |
| Khan et al. | 2010 | Three activity states covering activities such as walking, standing, etc. (15) | Artificial neural networks (97.9%) | Linear discriminant analysis and a hierarchical approach |
| Hassan et al. | 2018 | Activities including standing, sitting, walking, lying down, stand-to-sit, etc. (12) | Deep belief network (DBN) (97.5%) | Kernel principal component analysis and linear discriminant analysis were performed to obtain more robust features |
| Chen et al. | 2012 | Daily activities including staying still, walking, running, going upstairs, and going downstairs (5) | ELM (79.68%) | Principal component analysis and ELM were utilized to realize location-adaptive activity recognition |
| Wang et al. | 2016 | Walking, upstairs, downstairs, sitting, standing, and lying (6) | k-nearest neighbor (KNN) (87.8%) | Hybrid feature selection method for smartphone-based activity recognition |
| Tao et al. | 2016 | Jumping, running, walking, step walking, walking quickly, down stairs, up stairs (7) | A new ensemble classifier termed multicolumn bidirectional long short-term memory (BLSTM); average error rate: 10.6% | Two-directional features for BLSTM-based activity recognition |
| Wang et al. | 2016 | Standing, walking, jumping, bicycling, etc. (9) | KNN with 21 features (76.42%) | Game-theory-based feature selection was used for selecting distinguishing features |
Figure 1Workflow of the proposed activity recognition approach.
Figure 2Block diagram for feature extraction in our activity recognition approach.
Figure 3Features without and with kernel Fisher discriminant analysis (KFDA) operations. (a) 3D feature space representation on original features (x1 is the mean value of the y-axis, x2 is the standard deviation of the x-axis, x3 is the standard deviation of the y-axis); (b) 3D space representation of the first three KFDA-based features.
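Figure 3 contrasts raw accelerometer features with their KFDA projection. As a simplified illustration of the discriminant step, the sketch below implements plain (linear) Fisher discriminant analysis in NumPy; the kernelized version used in the paper additionally maps samples through a kernel before computing the scatter matrices. The regularizer and `n_components` default are illustrative choices.

```python
import numpy as np

def fda_transform(X, y, n_components=3):
    """Linear Fisher discriminant analysis: a simplified stand-in for KFDA.
    Projects features onto directions maximizing between-class scatter
    relative to within-class scatter."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Solve the generalized eigenproblem Sb w = lambda Sw w
    # (small ridge term keeps Sw invertible)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(evals.real)[::-1]
    W = evecs[:, order[:n_components]].real
    return X @ W
```

After the projection, samples from different activities become more separable, which is what makes the subsequently trained base classifiers both stronger and more varied.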
Figure 4Dataset after being processed by bootstrap resampling.
Figure 5The structure of a single-hidden-layer feed-forward neural network.
Figure 6Comparison of combination methods with and without classifier selection: (a) direct combination method and (b) selection-based combination method.
Figure 7Flowchart of the proposed classifier selection method.
Figure 8The experimental platform and sensor placement: (a) signal acquisition device, (b) the placements of the collection node, and (c) the process of data collection.
The statistics of subjects for the experiments.
| | Age | Height (cm) | Weight (kg) |
|---|---|---|---|
| Range | 20–38 | 160–178 | 45–85 |
| Mean | 29.6 | 166 | 65.6 |
| Std | 6.7 | 5.6 | 13.5 |
Activities performed in the experiments.
| Activity (Abbreviation) | Total Duration (s) | Activity (Abbreviation) | Total Duration (s) |
|---|---|---|---|
| 1 walk (W) | 1342 | 5 go up stairs (GU) | 1123 |
| 2 stand (S) | 1253 | 6 sit on a chair (SC) | 879 |
| 3 jump (J) | 976 | 7 run forward (R) | 1143 |
| 4 go down stairs (GD) | 1034 | 8 lie (L) | 769 |
Figure 9The triaxial accelerometer data of “go up stairs” from three subjects with large individual differences: (a) Male, 177 cm, 83 kg; (b) Female, 162 cm, 45 kg; (c) Male, 172 cm, 60 kg.
Figure 10Performance comparison: (a) accuracy comparison of the different feature selection methods and (b) recall comparison of the different feature selection methods.
Confusion matrix for human activity recognition (HAR) using 20 base classifiers based on principal component analysis (PCA) features.
| | W | S | J | GD | GU | SC | R | L |
|---|---|---|---|---|---|---|---|---|
| W | 458 | 6 | 6 | 28 | 24 | 17 | 19 | 6 |
| S | 5 | 449 | 4 | 6 | 6 | 1 | 10 | 2 |
| J | 9 | 6 | 371 | 22 | 34 | 12 | 12 | 7 |
| GD | 31 | 6 | 17 | 399 | 5 | 17 | 11 | 3 |
| GU | 21 | 6 | 26 | 4 | 395 | 4 | 11 | 5 |
| SC | 13 | 3 | 9 | 10 | 2 | 432 | 15 | 3 |
| R | 15 | 9 | 13 | 9 | 14 | 16 | 441 | 5 |
| L | 2 | 1 | 0 | 0 | 1 | 3 | 2 | 350 |
Confusion matrix for HAR using 20 base classifiers based on Fisher discriminant analysis (FDA) features.
| | W | S | J | GD | GU | SC | R | L |
|---|---|---|---|---|---|---|---|---|
| W | 528 | 2 | 3 | 13 | 8 | 5 | 2 | 3 |
| S | 3 | 458 | 2 | 3 | 4 | 3 | 8 | 2 |
| J | 2 | 1 | 447 | 6 | 8 | 5 | 2 | 2 |
| GD | 11 | 2 | 5 | 452 | 3 | 10 | 5 | 1 |
| GU | 14 | 7 | 9 | 2 | 432 | 1 | 6 | 1 |
| SC | 9 | 4 | 4 | 12 | 2 | 442 | 9 | 5 |
| R | 6 | 8 | 2 | 4 | 7 | 14 | 476 | 5 |
| L | 1 | 1 | 1 | 0 | 1 | 1 | 2 | 352 |
Confusion matrix for HAR using 20 base classifiers based on KFDA features.
| | W | S | J | GD | GU | SC | R | L |
|---|---|---|---|---|---|---|---|---|
| W | 531 | 2 | 3 | 10 | 8 | 5 | 2 | 3 |
| S | 3 | 461 | 2 | 3 | 4 | 2 | 6 | 2 |
| J | 2 | 1 | 450 | 5 | 6 | 5 | 2 | 2 |
| GD | 10 | 2 | 4 | 456 | 3 | 9 | 4 | 1 |
| GU | 12 | 5 | 8 | 2 | 439 | 1 | 5 | 0 |
| SC | 8 | 3 | 3 | 12 | 1 | 446 | 9 | 5 |
| R | 5 | 9 | 3 | 2 | 6 | 12 | 480 | 5 |
| L | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 353 |
Figure 11The number of hidden neurons in the base classifiers.
Figure 12The accuracy and recall for each base classifier: (a) the accuracy of each base classifier in training data, (b) the recall of each base classifier in training data, (c) the accuracy of each base classifier in testing data, and (d) the recall of each base classifier in testing data.
Diversity values for base classifiers in the multiclassifier system.
| Classifier | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Diversity | 0.634 | 0.275 | 0.876 | 0.403 | 0.852 | 0.605 | 0.247 | 0 | 0.284 | 0.786 |
| Ranking | 8 | 19 | 4 | 13 | 5 | 9 | 20 | 1 | 18 | 6 |

| Classifier | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 |
|---|---|---|---|---|---|---|---|---|---|---|
| Diversity | 0.389 | 0.685 | 0.372 | 0.968 | 0.417 | 0.322 | 0.587 | 0.914 | 0.462 | 0.303 |
| Ranking | 14 | 7 | 15 | 2 | 12 | 16 | 10 | 3 | 11 | 17 |
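The diversity values above quantify how differently each base classifier behaves from the rest of the pool. The paper proposes a combined diversity measure whose exact formula is not reproduced here; as a common, simple proxy, the sketch below computes pairwise disagreement (the fraction of samples on which two classifiers predict different labels) and averages it over the pool.

```python
import numpy as np

def disagreement(pred_a, pred_b):
    """Pairwise disagreement: fraction of samples where two classifiers differ.
    A standard diversity measure; not the paper's combined measure."""
    return np.mean(pred_a != pred_b)

def mean_diversity(all_preds):
    """Average each classifier's disagreement against every other classifier."""
    n = len(all_preds)
    div = np.zeros(n)
    for i in range(n):
        div[i] = np.mean([disagreement(all_preds[i], all_preds[j])
                          for j in range(n) if j != i])
    return div
```

A classifier identical to the rest of the pool scores 0, while one that always disagrees scores 1, matching the [0, 1] range of the tabulated values.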
Performance of fusing all 20 classifiers versus selecting 15, 10, and 5 classifiers.
| Combination Rule | No. of Classifiers | Accuracy (%) | Recall (%) |
|---|---|---|---|
| Fusion | 20 | 93.15 | 92.35 |
| Selection | 15 | 93.08 | 92.78 |
| Selection | 10 | 93.37 | 93.17 |
| Selection | 5 | 89.68 | 88.68 |
| Random | 15 | 84.68 | 84.32 |
| Random | 10 | 85.56 | 84.47 |
| Random | 5 | 82.43 | 81.67 |
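The table above shows that picking base classifiers deliberately beats random selection at every ensemble size. A common way to realize such a selection, sketched below under the assumption of a simple weighted criterion (the paper's exact combined measure is not reproduced), is to score each classifier by a trade-off between its validation accuracy and its diversity and keep the top k; the weight `alpha` is an illustrative parameter.

```python
import numpy as np

def select_classifiers(accuracies, diversities, k, alpha=0.5):
    """Rank base classifiers by a weighted sum of accuracy and diversity,
    then keep the indices of the top k. `alpha` trades accuracy against
    diversity and is an illustrative choice, not the paper's."""
    acc = np.asarray(accuracies, dtype=float)
    div = np.asarray(diversities, dtype=float)

    def norm(v):
        # normalize each criterion to [0, 1] so the weighting is scale-free
        spread = v.max() - v.min()
        return (v - v.min()) / spread if spread > 0 else np.zeros_like(v)

    score = alpha * norm(acc) + (1 - alpha) * norm(div)
    return np.argsort(score)[::-1][:k]
```

The selected subset is then combined by majority voting, as in the "Selection" rows of the table.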
Figure 13Performance of the multiclassifier system under different numbers of base classifiers: (a) accuracy of the multiclassifier system and (b) recall of the multiclassifier system.
Recognition performance comparison of different methods.
| Method | Best Base ELM | SVM | Bagging | Adaboost | Proposed Method |
|---|---|---|---|---|---|
| Number of classifiers | 1 | 1 | 11 | 11 | 11 |
| Accuracy % | 81.85 | 83.42 | 85.38 | 88.63 | 94.28 |
| Recall % | 80.18 | 83.29 | 84.72 | 87.69 | 93.89 |