| Literature DB >> 35591091 |
Paola Patricia Ariza-Colpas1,2, Enrico Vicario3, Ana Isabel Oviedo-Carrascal2, Shariq Butt Aziz4, Marlon Alberto Piñeres-Melo5, Alejandra Quintero-Linero6, Fulvio Patara3.
Abstract
The Ambient Assisted Living (AAL) research area focuses on generating innovative technologies, products, and services that provide assistance, medical care, and rehabilitation to older adults, extending the time these people can live independently, whether they suffer from neurodegenerative diseases or some disability. This area is responsible for the development of Activity Recognition Systems (ARS), a valuable tool for identifying the type of activity an older adult is carrying out in order to provide the assistance that allows them to perform their daily activities with complete normality. This article reviews the literature on, and the evolution of, the different techniques for processing this type of data (supervised, unsupervised, ensemble, deep, reinforcement, and transfer learning, as well as metaheuristic approaches) applied to this sector of health science, presenting the metrics of recent experiments for researchers in this area of knowledge. As a result of this article, it can be identified that models based on reinforcement or transfer learning constitute a good line of work for the processing and analysis of human activity recognition.
Keywords: activities of daily living—ADL; activity recognition systems—ARS; ambient assisted living—AAL; clustering; deep learning; ensemble learning; human activity recognition—HAR; reinforcement learning; supervised learning; unsupervised activity recognition; unsupervised learning
Year: 2022 PMID: 35591091 PMCID: PMC9103712 DOI: 10.3390/s22093401
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. HAR approach concept maps.
Clustering methods and applications.
| Method | Strategy | Sub-Strategy | Applications |
|---|---|---|---|
| Hierarchical | Agglomerative | - | Nearest Neighbor [ |
| | Divisive | - | - |
| Non-Hierarchical | Reassignment | Centroids | K-means [ |
| | | Medoids | k-medoids [ |
| | | Density | Dynamic clouds [ |
| | Typological approximation | - | Modal Analysis [ |
| | Probabilistic approximation | - | Wolf Methods [ |
| | Direct | - | Block Clustering [ |
| | Reductive | - | Type Q Factor Analysis [ |
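Of the methods in the table, K-means is the one most frequently applied to sensor-derived features. As a minimal pure-Python sketch (an illustration with toy data, not code from any reviewed work), the algorithm alternates an assignment step and a centroid-update step:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        for i, c in enumerate(clusters):
            if c:  # keep the old centroid if a cluster empties out
                centroids[i] = tuple(sum(col) / len(c) for col in zip(*c))
    return centroids, clusters

# Two well-separated 2-D groups, e.g. features from two different activities
pts = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
cents, clus = kmeans(pts, k=2)
```

On separable data like this, the two recovered clusters coincide with the two groups regardless of the random initialization.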
Evolution of association rule algorithms.
| Based on | Algorithms |
|---|---|
| Frequent Itemsets Mining | Apriori [ |
| Apriori-TID [ | |
| ECLAT TID-list [ | |
| FP-Growth [ | |
| Big Data Algorithms | R-Apriori [ |
| YAFIM [ | |
| ParEclat [ | |
| Par-FP (Parallel FP-Growth with Sampling) [ | |
| HPA (Hash Partitioned Apriori) [ | |
| Distributed algorithms | PEAR (Parallel Efficient Association Rules) [ |
| Distributed algorithms for fuzzy association rule mining | Count Distribution algorithm [ |
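The frequent-itemset core that all the Apriori variants above build on can be sketched in a few lines of pure Python (an illustration only; the listed algorithms add TID-lists, hashing, or parallelism on top of this level-wise search):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori: level-wise frequent-itemset mining. Candidates of
    size k are joined from frequent (k-1)-itemsets and pruned by support."""
    transactions = [frozenset(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(itemset <= t for t in transactions) / len(transactions)

    frequent = {}
    level = [frozenset([i]) for i in items]
    while level:
        level = [s for s in level if support(s) >= min_support]
        for s in level:
            frequent[s] = support(s)
        # join step: unions of pairs that differ in exactly one item
        level = list({a | b for a, b in combinations(level, 2)
                      if len(a | b) == len(a) + 1})
    return frequent

txns = [{"milk", "bread"}, {"milk", "bread", "eggs"},
        {"bread", "eggs"}, {"milk", "eggs"}]
freq = apriori(txns, min_support=0.5)
```

With this toy basket data, all three single items and all three pairs clear the 0.5 support threshold, while the full triple does not.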
Figure 2. Sensors for human activity recognition.
Figure 3. Relationship between concepts for the literature review.
Figure 4. Years of publication of the articles.
Figure 5. Years of publication of the articles.
Figure 6. (a) Distribution of publications by typology; (b) distribution of publications by quartile.
Results of supervised techniques.
| Dataset | Technique | Accuracy | Precision | Recall | F-Measure | References |
|---|---|---|---|---|---|---|
| UCI Machine Learning | Nearest Neighbor | 75.7 | - | - | - | [ |
| Decision Tree | 76.3 | - | - | - | ||
| Random Forest | 75.9 | - | - | - | ||
| Naive Bayes | 76.9 | - | - | - | ||
| Aras (House A) | MSA (Margin Setting Algorithm) | 68.85 | - | - | - | [ |
| SVM | 66.90 | - | - | - | ||
| ANN | 67.32 | - | - | - | ||
| Aras (House B) | MSA (Margin Setting Algorithm) | 96.24 | - | - | - | |
| SVM | 94.81 | - | - | - | ||
| ANN | 95.42 | - | - | - | ||
| CASAS Tulum | MSA (Margin Setting Algorithm) | 68.00 | - | - | - | |
| SVM | 66.6 | - | - | - | ||
| ANN | 67.37 | - | - | - | ||
| Mhealth | K-NN | 99.64 | - | - | 99.7 | [ |
| ANN | 99.55 | - | - | 99.6 | ||
| SVM | 99.89 | - | - | 100 | ||
| C4.5 | 99.32 | - | - | 99.3 | ||
| CART | 99.13 | - | - | 99.7 | ||
| Random Forest | 99.89 | - | - | 99.89 | ||
| Rotation Forest | 99.79 | - | - | 99.79 | ||
| WISDM, SCUT_NA-A | Sliding window with variable size, S transform, and regularization based robust subspace (SRRS) for selection and SVM for Classification | 96.1 | - | - | - | [ |
| SCUT NA-A | Sliding window with fixed samples, SVM like a classifier, cross-validation | 91.21 | - | - | - | |
| PAMPA2, Mhealth | Sliding windows with fixed 2s, SVM, and Cross-validation | 84.10 | - | - | - | |
| SBHAR | Sliding windows with fixed 4s, SVM, and Cross-validation | 93.4 | - | - | - | |
| WISDM | MLP based on voting techniques with nb-Tree are used | 96.35 | - | - | - | |
| UTD-MHAD | Feature level fusion approach & collaborative representation classifier | 79.1 | - | - | - | ||
| Groupware | Mark Hall’s feature selection and Decision Tree | 99.4 | - | - | - | |
| Free-living | k-NN and Decision Tree | 95 | - | - | - | |
| WISDM, Skoda | Hybrid Localizing learning (k-NN-LSS-VM) | 81 | - | - | - | |
| UniMiB SHAR | LSTM and Deep Q-Learning | 95 | - | - | - | |
| Groupware | Sliding windows Gaussian Linear Filter and NB classifier | 89.5 | - | - | - | |
| Groupware | Sliding windows Gaussian Linear Filter and Decision Tree classifier | 99.99 | - | - | - | |
| CSI-data | SVM | 96 | - | - | - | [ |
| LSTM | 89 | - | - | - | ||
| Built by the authors | IBK | 95 | - | - | - | [ |
| Classifier based ensemble | 98 | - | - | - | ||
| Bayesian network | 63 | - | - | - | ||
| Built by the authors | Decision Tree | 91.08 | - | - | 89.75 | [ |
| Random Forest | 91.25 | - | - | 90.02 | ||
| Gradient Boosting | 97.59 | - | - | 97.4 | ||
| KNN | 93.76 | - | - | 93.21 | ||
| Naive Bayes | 88.57 | - | - | 88.07 | ||
| SVM | 92.7 | - | - | 91.53 | ||
| XGBoost | 96.93 | - | - | 96.63 | ||
| UK-DALE | FFNN | 95.28 | - | - | - | [ |
| SVM | 93.84 | - | - | - | ||
| LSTM | 83.07 | - | - | - | ||
| UCI Machine Learning | KNN | 90.74 | 91.15 | 90.28 | 90.45 | [ |
| SVM | 96.27 | 96.43 | 96.14 | 96.23 | ||
| HMM+SVM | 96.57 | 96.74 | 96.49 | 96.56 | ||
| SVM+KNN | 96.71 | 96.75 | 96.69 | 96.71 | ||
| Naive Bayes | 77.03 | 79.25 | 76.91 | 76.72 | ||
| Logistic Reg | 95.93 | 96.13 | 95.84 | 95.92 | ||
| Decision Tree | 87.34 | 87.39 | 86.95 | 86.99 | ||
| Random Forest | 92.3 | 92.4 | 92.03 | 92.14 | ||
| MLP | 95.25 | 95.49 | 95.13 | 95.25 | ||
| DNN | 96.81 | 96.95 | 96.77 | 96.83 | ||
| LSTM | 91.08 | 91.38 | 91.24 | 91.13 | ||
| CNN+LSTM | 93.08 | 93.17 | 93.10 | 93.07 | ||
| CNN+BiLSTM | 95.42 | 96.58 | 95.26 | 95.36 | ||
| Inception+ResNet | 95.76 | 96.06 | 95.63 | 95.75 | ||
| UCI Machine Learning | NB-NB | 73.68 | - | - | 46.9 | [ |
| NB-KNN | 85.58 | - | - | 61.08 | ||
| NB-DT | 89.93 | - | - | 69.75 | ||
| NB-SVM | 79.97 | - | - | 53.69 | ||
| KNN-NB | 74.93 | - | - | 45 | ||
| KNN-KNN | 79.3 | - | - | 49.82 | ||
| KNN-DT | 87.01 | - | - | 60.98 | ||
| KNN-SVM | 82.24 | - | - | 53.1 | ||
| DT-NB | 84.72 | - | - | 60.05 | ||
| DT-KNN | 91.55 | - | - | 73.11 | ||
| DT-DT | 92.73 | - | - | 75.97 | ||
| DT-SVM | 93.23 | - | - | 77.35 | ||
| SVM-NB | 30.40 | - | - | - | ||
| SVM-KNN | 25.23 | - | - | - | ||
| SVM-DT | 92.43 | - | - | 75.31 | ||
| SVM-SVM | 43.32 | - | - | - | ||
| CASAS Tulum | Back-Propagation | 88.75 | - | - | - | [ |
| SVM | 87.42 | - | - | - | ||
| DBM | 90.23 | - | - | - | ||
| CASAS Twor | Back-Propagation | 76.9 | - | - | - | |
| SVM | 73.52 | - | - | - | ||
| DBM | 78.49 | - | - | - | ||
| WISDM | KNN | 69 | 78 | - | 78 | [ |
| LDA | 40 | 34 | - | 34 | ||
| QDA | 65 | 58 | - | 58 | ||
| RF | 90 | 91 | - | 91 | ||
| DT | 77 | 77 | - | 77 | ||
| CNN | 66 | 62 | - | 60 | ||
| DAPHNET | KNN | 90 | 87 | - | 88 | |
| LDA | 91 | 83 | - | 83 | ||
| QDA | 91 | 82 | - | 82 | ||
| RF | 91 | 91 | - | 91 | ||
| DT | 91 | 83 | - | 83 | ||
| CNN | 90 | 87 | - | 87 | ||
| PAPAM | KNN | 65 | 66 | - | 66 | |
| LDA | 45 | 45 | - | 45 | ||
| QDA | 15 | 19 | - | 19 | ||
| RF | 80 | 83 | - | 83 | ||
| DT | 60 | 60 | - | 60 | ||
| CNN | 73 | 76 | - | 73 | ||
| HHAR(Phone) | KNN | 83 | 85 | - | 85 | |
| LDA | 43 | 45 | - | 45 | ||
| QDA | 40 | 50 | - | 50 | ||
| RF | 88 | 89 | - | 89 | ||
| DT | 67 | 66 | - | 66 | ||
| CNN | 84 | 84 | - | 84 | ||
| HHAR(watch) | KNN | 78 | 82 | - | 82 | |
| LDA | 54 | 52 | - | 52 | ||
| QDA | 26 | 27 | - | 27 | ||
| RF | 85 | 85 | - | 85 | ||
| DT | 69 | 69 | - | 69 | ||
| CNN | 83 | 83 | - | 83 | ||
| Mhealth | KNN | 76 | 81 | - | 81 | |
| LDA | 38 | 59 | - | 59 | ||
| QDA | 91 | 82 | - | 82 | ||
| RF | 85 | 85 | - | 85 | ||
| DT | 77 | 77 | - | 77 | ||
| CNN | 80 | 80 | - | 80 | ||
| RSSI | KNN | 91 | 91 | - | 91 | |
| LDA | 91 | 91 | - | 91 | ||
| QDA | 91 | 91 | - | 91 | ||
| RF | 91 | 91 | - | 91 | ||
| DT | 91 | 91 | - | 91 | ||
| CNN | 91 | 90 | - | 91 | ||
| CSI | KNN | 93 | 93 | - | 93 | |
| LDA | 93 | 93 | - | 93 | ||
| QDA | 92 | 92 | - | 92 | ||
| RF | 93 | 93 | - | 93 | ||
| DT | 93 | 93 | - | 93 | ||
| CNN | 92 | 92 | - | 92 | ||
| Casas Aruba | DT | 96.3 | 93.8 | 92.3 | 93 | [ |
| SVM | 88.2 | 88.3 | 87.8 | 88.1 | ||
| KNN | 89.2 | 87.8 | 85.9 | 86.8 | ||
| AdaBoost | 98 | 96 | 95.9 | 95.9 | ||
| DCNN | 95.6 | 93.9 | 95.3 | 94.6 | ||
| SisFall | SVM | 97.77 | 76.17 | 75.6 | - | [ |
| Random Forest | 96.82 | 79.99 | 79.95 | - | |
| KNN | 96.71 | 93.99 | 68.36 | - | |
| CASAS Milan | Naive Bayes | 76.65 | - | - | - | [ |
| HMM+SVM | 77.44 | - | - | - | |
| CRF | 61.01 | - | - | - | |
| LSTM | 93.42 | - | - | - | |
| CASAS Cairo | Naive Bayes | 82.79 | - | - | - | |
| HMM+SVM | 82.41 | - | - | - | |
| CRF | 68.07 | - | - | - | |
| LSTM | 83.75 | - | - | - | |
| CASAS Kyoto 2 | Naive Bayes | 63.98 | - | - | - | |
| HMM+SVM | 65.79 | - | - | - | |
| CRF | 66.20 | - | - | - | |
| LSTM | 69.76 | - | - | - | |
| CASAS Kyoto 3 | Naive Bayes | 77.5 | - | - | - | |
| HMM+SVM | 81.67 | - | - | - | |
| CRF | 87.33 | - | - | - | |
| LSTM | 88.71 | - | - | - | |
| CASAS Kyoto 4 | Naive Bayes | 63.27 | - | - | - | |
| HMM+SVM | 60.9 | - | - | - | |
| CRF | 58.41 | - | - | - | |
| LSTM | 85.57 | - | - | - | |
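The four metrics reported above are all derived from per-class confusion counts. A pure-Python sketch (macro-averaging is assumed here; individual studies may use weighted or micro averages instead):

```python
def macro_metrics(y_true, y_pred):
    """Accuracy plus macro-averaged precision, recall and F-measure,
    computed from per-class true/false positive and false negative counts."""
    labels = sorted(set(y_true) | set(y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precs, recs, f1s = [], [], []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precs.append(prec); recs.append(rec); f1s.append(f1)
    n = len(labels)
    return accuracy, sum(precs) / n, sum(recs) / n, sum(f1s) / n

# Toy labels for three activity classes: walking (W), sitting (S), lying (L)
true = ["W", "W", "S", "S", "L", "L"]
pred = ["W", "S", "S", "S", "L", "W"]
acc, prec, rec, f1 = macro_metrics(true, pred)
```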
Results of unsupervised techniques.
| Dataset | Technique | ARI | Jaccard Index | Silhouette Index | Euclidean | F1 Fisher’s Discriminant | References |
|---|---|---|---|---|---|---|---|
| UCI HAR SmartPhone | K-means | 0.7727 | 0.3246 | 0.4416 | [ | ||
| HAC | 0.4213 | 0.2224 | 0.5675 | ||||
| FCM | 0.8343 | 0.4052 | 0.4281 | ||||
| UCI HAR Single Chest-Mounted Accelerometer | K-means | 0.8850 | 0.6544 | 0.6935 | |||
| HAC | 0.5996 | 0.2563 | 0.6851 | ||||
| FCM | 0.9189 | 0.7230 | 0.7751 | ||||
| Nottingham Trent University | FCM | - | - | - | - | [ | |
| Chest Sensor Dataset | PM Model | 25.8% | - | [ | |||
| Wrist Sensor Dataset | 64.3% | - | |||||
| WISDM Dataset | 54% | - | |||||
| Smartphone Dataset | 85% | - | |||||
| DSAD | wavelet tensor fuzzy clustering scheme (WTFCS) | 0.8966 | - | - | - | [ | |
| UCI HAR | Spectral Clustering | 0.543 | 0.583 | [ | |||
| Single Linkage | 0.807 | 0.851 | |||||
| Ward Linkage | 0.770 | 0.810 | |||||
| Average Linkage | 0.790 | 0.871 | |||||
| K-medoids | 0.653 | 0.654 | ||
| UCI HAR | K-means | 52.1 | [ | ||||
| K-Means 5 | 50.7 | ||||||
| Spectral Clustering | 57.8 | ||||||
| Gaussian Mixture | 49.8 | ||||||
| DBSCAN | 16.4 | ||||||
| CADL | K-means | 50.9 | |||||
| K-Means 5 | 50.5 | ||||||
| Spectral Clustering | 61.9 | ||||||
| Gaussian Mixture | 58.9 | ||||||
| DBSCAN | 13.9 | ||||||
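Several of the studies above report the Adjusted Rand Index (ARI), which scores the agreement between a clustering and ground-truth labels, corrected for chance. A minimal sketch from the pair-counting formula:

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_a, labels_b):
    """ARI from the contingency table of two labelings:
    (sum_ij C(n_ij, 2) - expected) / (mean of row/col pair sums - expected)."""
    n = len(labels_a)
    contingency = Counter(zip(labels_a, labels_b))
    sum_ij = sum(comb(c, 2) for c in contingency.values())
    a = sum(comb(c, 2) for c in Counter(labels_a).values())
    b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = a * b / comb(n, 2)
    max_index = (a + b) / 2
    return (sum_ij - expected) / (max_index - expected)

# Identical partitions (up to relabeling) score 1.0
print(adjusted_rand_index([0, 0, 1, 1], ["x", "x", "y", "y"]))  # → 1.0
```

Unlike raw accuracy, ARI is invariant to how cluster IDs are named, which is why it is the standard choice for unlabeled HAR evaluations.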
Results of ensemble learning techniques.
| Dataset | Technique | Accuracy | Precision | Recall | F-Measure | References |
|---|---|---|---|---|---|---|
| SisFall | Decision Tree | 97.48 | - | - | - | [ |
| Ensemble | 99.51 | - | - | - | ||
| Logistic Regression | 84.87 | - | - | - | ||
| Deepnet | 99.06 | - | - | - | ||
| Cornell Activity Dataset | X-means-SVM | 98.4 | 95.0 | 95.8 | - | [ |
| TST Dataset | 92.7 | 95.6 | 91.1 | - | ||
| HHAR | Multi-task deep clustering | 67.2 | 65.3 | 65.9 | - | [ |
| MobiAct | 68.3 | 69.1 | 66.8 | - | |
| MobiSense | 72.5 | 71.2 | 70.7 | - | |
| NTU-RGB + D | K-Means | 85.72 | - | - | - | [ |
| GMM | 87.26 | - | - | - | ||
| UCI HAR | CELearning | 96.88 | - | - | - | [ |
| UCI HAR | RF | 96.96 | 97.0 | 97.0 | 98 | [ |
| XGB | 96.2 | 96 | 96 | 96 | ||
| AdaB | 50.5 | 61 | 51 | 51 | ||
| GB | 94.53 | 95 | 95 | 95 | ||
| ANN | 92.51 | 92 | 93 | 92 | ||
| V. RNN | 90.53 | 90 | 91 | 90 | ||
| LSTM | 91.23 | 90 | 91 | 90 | ||
| DT | 94.23 | 95 | 95 | 95 | ||
| KNN | 96.59 | 97 | 97 | 97 | ||
| NB | 80.67 | 84 | 81 | 81 | ||
| Proposed Dataset | GB | 84.1 | 84.1 | 84.2 | 84.1 | [ |
| RFs | 83.9 | 83.9 | 84.1 | 83.9 | ||
| Bagging | 83 | 83 | 83.1 | 83 | ||
| XGB | 80.4 | 80.5 | 80.4 | 80.4 | ||
| AdaBoost | 77.2 | 77.3 | 77.3 | 77.3 | ||
| DT | 76.9 | 77 | 77 | 77 | ||
| MLP | 67.6 | 68.7 | 67.8 | 67.8 | ||
| LSVM | 65 | 65.7 | 65.1 | 64.9 | ||
| NLSVM | 63 | 63.3 | 63.2 | 62.8 | ||
| LR | 59.6 | 60.2 | 59.8 | 59.4 | ||
| KNNs | 58.9 | 60.1 | 59.2 | 58.9 | ||
| GNB | 56.1 | 59.4 | 55.4 | 45.2 | ||
| House A | Bernoulli NB | 78.7 | 64 | - | - | [ |
| Decision Tree | 88 | 79.4 | - | - | ||
| Logistic Regression | 81.4 | 69.2 | - | - | ||
| KNN | 75.8 | 64.9 | - | - | |
| House B | Bernoulli NB | 95.9 | 79.4 | - | - | |
| Decision Tree | 97.2 | 86.4 | - | - | |
| Logistic Regression | 96.5 | 82.7 | - | - | |
| KNN | 93.1 | 79.8 | - | - | |
| UCI HAR | SVM-AdaBoost | 99.9 | 99.9 | [ | ||
| k-NN-AdaBoost | 99.43 | 99.4 | ||||
| ANN-AdaBoost | 99.33 | 99.33 | ||||
| NB-AdaBoost | 97.24 | 97.2 | ||||
| RF-AdaBoost | 99.98 | 100 | ||||
| CART-AdaBoost | 99.97 | 100 | ||||
| C4.5-AdaBoost | 99.95 | 100 | ||||
| REPTree-AdaBoost | 99.95 | 100 | ||||
| LADTree-AdaBoost | 98.84 | 98.8 | ||||
| HAR Dataset | KNN | 90.3 | - | - | - | [ |
| CART | 84.9 | - | - | - | |
| BAYES | 77 | - | - | - | |
| RF | 92.7 | - | - | - | |
| HAPT Dataset | KNN | 89.2 | - | - | - | |
| CART | 80.2 | - | - | - | |
| BAYES | 74.7 | - | - | - | |
| RF | 91 | - | - | - | |
| ET | 91.7 | - | - | - | |
| Proposed Method | 92.6 | - | - | - | |
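Many of the ensembles above combine base classifiers by voting. A minimal hard-voting sketch (the threshold "classifiers" are hypothetical stand-ins for trained models):

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Hard-voting ensemble: each base classifier predicts a label and
    the most common label wins (ties broken by first encountered)."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Three toy threshold classifiers over a single feature (illustrative only)
clfs = [
    lambda x: "active" if x > 0.4 else "idle",
    lambda x: "active" if x > 0.5 else "idle",
    lambda x: "active" if x > 0.6 else "idle",
]
print(majority_vote(clfs, 0.55))  # two of three vote "active" → "active"
```

Boosting variants such as AdaBoost refine this idea by weighting both the training samples and each base classifier's vote.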
Results of deep learning techniques.
| Dataset | Technique | Accuracy | Precision | Recall | F-Measure | References |
|---|---|---|---|---|---|---|
| UCI HAR | CNN | 92.71 | 93.21 | 92.82 | 92.93 | [ |
| LSTM | 89.01 | 89.14 | 88.99 | 88.99 | ||
| BLSTM | 89.4 | 89.41 | 89.36 | 89.35 | ||
| MLP | 86.83 | 86.83 | 86.58 | 86.61 | ||
| SVM | 89.85 | 90.5 | 89.86 | 89.85 | ||
| PAMAP2 | CNN | 91.00 | 91.66 | 90.86 | 91.16 | |
| LSTM | 85.86 | 86.51 | 84.67 | 85.34 | ||
| BLSTM | 89.52 | 90.19 | 89.02 | 89.4 | ||
| MLP | 82.07 | 83.35 | 82.17 | 82.46 | ||
| SVM | 84.07 | 84.71 | 84.23 | 83.76 | ||
| Authors’ own infrared images | LBP-Naive Bayes | 42.1 | - | - | - | [ |
| HOG-Naive Bayes | 77.01 | - | - | - | |
| LBP-KNN | 53.261 | - | - | - | |
| HOG-KNN | 83.541 | - | - | - | |
| LBP-SVM | 62.34 | - | - | - | |
| HOG-SVM | 85.92 | - | - | - | |
| UCI HAR | DeepConvLSTM | 94.77 | - | - | - | [ |
| CNN | 92.76 | - | - | - | ||
| Weakly Dataset | DeepConvLSTM | 92.31 | - | - | - | |
| CNN | 85.17 | - | - | - | ||
| Opportunity | HC | 85.69 | - | - | - | [ |
| CBH | 84.66 | - | - | - | ||
| CBS | 85.39 | - | - | - | ||
| AE | 83.39 | - | - | - | ||
| MLP | 86.65 | - | - | - | ||
| CNN | 87.62 | - | - | - | ||
| LSTM | 86.21 | - | - | - | ||
| Hybrid | 87.67 | - | - | - | ||
| ResNet | 87.67 | - | - | - | ||
| ARN | 90.29 | - | - | - | ||
| UniMiB-SAHR | HC | 21.96 | - | - | - | |
| CBH | 64.36 | - | - | - | ||
| CBS | 67.36 | - | - | - | ||
| AE | 68.39 | - | - | - | ||
| MLP | 74.82 | - | - | - | ||
| CNN | 73.36 | - | - | - | ||
| LSTM | 68.81 | - | - | - | ||
| Hybrid | 72.26 | - | - | - | ||
| ResNet | 75.26 | - | - | - | ||
| ARN | 76.39 | - | - | - | ||
| UCI HAR | KNN | 90.74 | 91.15 | 90.28 | 90.48 | [ |
| SVM | 96.27 | 96.43 | 96.14 | 96.23 | ||
| HMM+SVM | 96.57 | 96.74 | 96.49 | 96.56 | ||
| SVM+KNN | 96.71 | 96.75 | 96.69 | 96.71 | ||
| Naive Bayes | 77.03 | 79.25 | 76.91 | 76.72 | ||
| Logistic Regression | 95.93 | 96.13 | 95.84 | 95.92 | ||
| Decision Tree | 87.34 | 87.39 | 86.95 | 86.99 | ||
| Random Forest | 92.30 | 92.4 | 92.03 | 92.14 | ||
| MLP | 95.25 | 95.49 | 95.13 | 95.25 | ||
| DNN | 96.81 | 96.95 | 96.77 | 96.83 | ||
| LSTM | 91.08 | 91.38 | 91.24 | 91.13 | ||
| CNN+LSTM | 93.08 | 93.17 | 93.10 | 93.07 | ||
| CNN+BiLSTM | 95.42 | 95.58 | 95.26 | 95.36 | ||
| Inception+ResNet | 95.76 | 96.06 | 95.63 | 95.75 | ||
| Utwente Dataset | Naive Bayes | - | - | - | 94.7 | [ |
| SVM | - | - | - | 91.6 | ||
| Deep Stacked Autoencoder | - | - | - | 97.6 | ||
| CNN-BiGRu | - | - | - | 97.8 | ||
| PAMAP2 | DeepConvTCN | - | - | - | 81.8 | |
| InceptionTime | - | - | - | 81.1 | ||
| CNN-BiGRu | - | - | - | 85.5 | ||
| FrailSafe dataset | CNN | 91.84 | - | - | - | [ |
| CASAS Milan | LSTM | 76.65 | - | - | - | [ |
| Bi-LSTM | 77.44 | - | - | - | ||
| Casc-LSTM | 61.01 | - | - | - | ||
| ENs2-LSTM | 93.42 | - | - | - | ||
| CASAS Cairo | LSTM | 82.79 | - | - | - | |
| Bi-LSTM | 82.41 | - | - | - | ||
| Casc-LSTM | 68.07 | - | - | - | ||
| ENs2-LSTM | 83.75 | - | - | - | ||
| CASAS Kyoto 2 | LSTM | 63.98 | - | - | - | |
| Bi-LSTM | 65.79 | - | - | - | ||
| Casc-LSTM | 66.20 | - | - | - | ||
| ENs2-LSTM | 69.76 | - | - | - | ||
| CASAS Kyoto 3 | LSTM | 77.5 | - | - | - | |
| Bi-LSTM | 81.67 | - | - | - | ||
| Casc-LSTM | 87.33 | - | - | - | ||
| ENs2-LSTM | 88.71 | - | - | - | ||
| Proposal | ANN | 89.06 | - | - | - | [ |
| SVM | 94.12 | - | - | - | ||
| DBN | 95.85 | - | - | - | ||
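LSTM variants dominate the deep learning results above. The recurrence they all share can be sketched as a single pure-Python LSTM cell stepped over a toy sequence (random weights here are an illustrative assumption; real models learn them by backpropagation):

```python
import math, random

def lstm_cell(x, h, c, W):
    """One LSTM step: input (i), forget (f) and output (o) gates plus a
    tanh candidate (g), applied to the concatenated input and hidden state."""
    def gate(name, squash):
        pre = [sum(w * v for w, v in zip(row, x + h)) + b
               for row, b in zip(W[name], W[name + "_b"])]
        return [squash(v) for v in pre]
    sig = lambda v: 1 / (1 + math.exp(-v))
    i, f, o = gate("i", sig), gate("f", sig), gate("o", sig)
    g = gate("g", math.tanh)
    c_new = [fv * cv + iv * gv for fv, cv, iv, gv in zip(f, c, i, g)]
    h_new = [ov * math.tanh(cv) for ov, cv in zip(o, c_new)]
    return h_new, c_new

random.seed(0)
n_in, n_hid = 3, 4  # e.g. 3 accelerometer axes, 4 hidden units
W = {}
for name in ("i", "f", "o", "g"):
    W[name] = [[random.uniform(-0.5, 0.5) for _ in range(n_in + n_hid)]
               for _ in range(n_hid)]
    W[name + "_b"] = [0.0] * n_hid

h, c = [0.0] * n_hid, [0.0] * n_hid
for x in [[0.1, 0.0, 0.2], [0.3, 0.1, 0.0]]:  # a two-step toy sequence
    h, c = lstm_cell(x, h, c, W)
```

Because the hidden state is an output gate times a tanh of the cell state, each component stays in (-1, 1); a classification head would be stacked on the final `h`.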
Results of reinforcement learning techniques.
| Dataset | Technique | Accuracy | References |
|---|---|---|---|
| Weizmann datasets | Spiking Neural Network | 94.44 | [ |
| KTH datasets | 92.50 | ||
| DoMSEV | Deep-Shallow | 72.9 | [ |
| Proposal | Deep Q-Network (DQN) | 83.26 | [ |
| S.Yousefi-2017 | Reinforcement Learning Agent Recurrent Neural Network with Long Short-Term Memory | 80 | [ |
| FallDeFi | 83 | ||
| UCI HAR | Reinforcement Learning + DeepConvLSTM | 98.36 | [ |
| Proposal | 79 | [ | |
| UCF-Sports | Q-learning | 95 | [ |
| UCF-101 | 85 | ||
| sub-JHMDB | 80 | ||
| MHEALTH | Cluster-Q learning | 94.5 | [ |
| PAMAP2 | 83.42 | ||
| UCI HAR | 81.32 | ||
| MARS | 85.92 | ||
| DataEgo | LRCN | 88 | [ |
| Proposal | Mask Algorithm | 96.02 | [ |
| Proposal | LSTM-Reinforcement Learning | 90.50 | [ |
| Proposal | Convolutional Autoencoder | 87.7 | [ |
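Most of the approaches above build on Q-learning, tabular or deep, which nudges an action-value estimate toward observed rewards. A minimal tabular sketch on a toy chain environment (not any dataset from the table):

```python
import random

def q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a small chain: move left/right, reward only at
    the rightmost state. Update: Q[s][a] += alpha*(r + gamma*max Q[s'] - Q[s][a])."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda a: Q[s][a])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(4)]  # greedy policy
```

Deep variants (DQN, as in the table) replace the table `Q` with a neural network but keep the same update target.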
Results of metaheuristic techniques.
| Dataset | Technique | Accuracy | References |
|---|---|---|---|
| CIFAR-100 | L4-Branched-ActionNet + EntACS + Cub-CVM | 98.00 | [ |
| SBHARPT | Ant-Colony, NB | 98.96 | [ |
| UCI HAR | Bee swarm optimization with a deep Q-network | 98.41 | [ |
| MotionSense | Binary Grey Wolf Optimization | 93.95 | [ |
| MHEALTH | 96.83 | ||
| UCI HAR | Genetic Algorithms-SVM | 96.43 | [ |
| UCF50 | Genetic Algorithms-CNN | 87.5 | [ |
| SBHAR | GA-PCA | 95.71 | [ |
| MNIST | GA-CNN | 99.75 | [ |
| CIFAR-100 | Genetic Algorithms-SVM | 98.00 | [ |
| SBHARPT | Genetic Algorithms-CNN | 98.96 | [ |
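The genetic-algorithm entries above typically evolve feature subsets or hyperparameters for a downstream classifier. A minimal GA sketch with tournament selection, one-point crossover, and bit-flip mutation (OneMax is a hypothetical stand-in for a feature-selection fitness):

```python
import random

def genetic_algorithm(fitness, n_bits=12, pop_size=20, gens=40, p_mut=0.05, seed=1):
    """Minimal GA over bitstrings: tournament selection, one-point
    crossover, per-bit mutation; returns the best individual found."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax: maximize the number of set bits (stand-in objective)
best = genetic_algorithm(fitness=sum)
```

In a HAR pipeline the bitstring would mark which features to keep, and `fitness` would be cross-validated classifier accuracy instead of `sum`.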
Results of transfer learning techniques.
| Dataset | Technique | Accuracy | Precision | Recall | F-Measure | References |
|---|---|---|---|---|---|---|
| CSI | KNN | 98.3 | - | - | - | [ |
| SVM | 98.3 | - | - | - | ||
| CNN | 99.2 | - | - | - | ||
| Opportunity | KNN+PCA | 60 | - | - | - | [ |
| GFK | 59 | - | - | - | ||
| STL | 65 | - | - | - | ||
| SA-GAN | 73 | - | - | - | ||
| USC-HAD | MMD | 80 | - | - | - | [ |
| DANN | 77 | - | - | - | ||
| WD | 72 | - | - | - | ||
| Proposal | KNN-OS | 79.84 | 85.84 | 91.88 | 88.61 | [ |
| KNN-SS | 89.64 | 94.41 | 94.76 | 94.52 | ||
| SVM-OS | 77.14 | 97.04 | 79.23 | 87.09 | ||
| SVM-SS | 87.5 | 94.39 | 92.61 | 93.27 | ||
| DT-OS | 87.5 | 94.61 | 92.16 | 93.14 | ||
| DT-SS | 91.79 | 95.19 | 96.26 | 95.71 | ||
| JDA | 86.79 | 92.71 | 93.07 | 92.89 | ||
| BDA | 91.43 | 95.9 | 95.18 | 95.51 | ||
| IPL-JPDA | 93.21 | 97.04 | 95.97 | 96.48 | ||
| KNN-OS | 79.84 | 85.84 | 91.88 | 88.61 | ||
| Weizmann Dataset | VGG-16 model | 96.95 | 97.00 | 97.00 | 97.00 | [ |
| VGG-19 model | 96.54 | 97.00 | 97.00 | 96.00 | ||
| Inception-v3 model | 95.63 | 96.00 | 96.00 | 96.00 | ||
| PAMAP2 | DeepConvLSTM | - | - | - | 93.2 | [ |
| Skoda Mini Checkpoint | - | - | - | 93 | ||
| Opportunity | PCA | 66.78 | - | - | - | [ |
| TCA | 68.43 | - | - | - | ||
| GFK | 70.87 | - | - | - | ||
| TKL | 70.21 | - | - | - | ||
| STL | 73.22 | - | - | - | ||
| TNNAR | 78.4 | - | - | - | ||
| PAMAP2 | PCA | 42.87 | - | - | - | |
| TCA | 47.21 | - | - | - | ||
| GFK | 48.09 | - | - | - | ||
| TKL | 43.32 | - | - | - | ||
| STL | 51.22 | - | - | - | ||
| TNNAR | 55.48 | - | - | - | ||
| UCI DSADS | PCA | 71.24 | - | - | - | |
| TCA | 73.47 | - | - | - | ||
| GFK | 81.23 | - | - | - | ||
| TKL | 74.26 | - | - | - | ||
| STL | 83.76 | - | - | - | ||
| TNNAR | 87.41 | - | - | - | ||
| UCI HAR | CNN-LSTM | 90.8 | - | - | - | [ |
| DT | 76.73 | - | - | - | [ |
| RF | 71.96 | - | - | - | ||
| TB | 75.65 | - | - | - | ||
| TransAct | 86.49 | - | - | - | ||
| Mhealth | DT | 48.02 | - | - | - | |
| RF | 62.25 | - | - | - | ||
| TB | 66.48 | - | - | - | ||
| TransAct | 77.43 | - | - | - | ||
| Daily Sport | DT | 66.67 | - | - | - | |
| RF | 70.38 | - | - | - | |
| TB | 72.86 | - | - | - | |
| TransAct | 80.83 | - | - | - | ||
| Proposal | Without SVD (Singular Value Decomposition) | 63.13 | - | - | - | [ |
| With SVD (Singular Value Decomposition) | 43.13 | - | - | - | |
| Transfer Accuracy | 97.5 | - | - | - | |
| PAMAP2 | CNN | 84.89 | - | - | - | [ |
| UCI HAR | 83.16 | - | - | - | ||
| UCI HAR | kNN | 77.28 | - | - | - | [ |
| DT | 72.16 | - | - | - | ||
| DA | 77.46 | - | - | - | ||
| NB | 69.93 | - | - | - | ||
| Transfer Accuracy | 83.7 | - | - | - | ||
| UCF Sports Action dataset | VGGNet-19 | 97.13 | - | - | - | [ |
| AMASS | DeepConvLSTM | 87.46 | - | - | - | [ |
| DIP | 89.08 | - | - | - | ||
| DAR Dataset | Base CNN | 85.38 | - | - | - | [ |
| AugToAc | 91.38 | - | - | - | ||
| HDCNN | 86.85 | - | - | - | ||
| DDC | 86.67 | - | - | - | ||
| UCI HAR | CNN_LSTM | 92.13 | - | - | - | [ |
| CNN_LSTM_SENSE | 91.55 | - | - | - | ||
| LSTM | 91.28 | - | - | - | ||
| LSTM_DENSE | 91.40 | - | - | - | ||
| ISPL | CNN_LSTM | 99.06 | - | - | - | |
| CNN_LSTM_SENSE | 98.43 | - | - | - | ||
| LSTM | 96.23 | - | - | - | ||
| LSTM_DENSE | 98.11 | - | - | - | ||
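The feature-alignment idea behind several of the methods above (matching simple statistics of the target domain to the source domain before reusing a source-trained model) can be sketched with a toy centroid classifier; the data, the sensor offset, and the class names here are all illustrative assumptions:

```python
def mean(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def nearest_centroid(x, centroids):
    """Predict the class whose centroid is closest in squared distance."""
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))

# Source domain: labelled toy data for two activity classes
source = {"walk": [[1.0, 1.0], [1.2, 0.8]], "sit": [[4.0, 4.0], [3.8, 4.2]]}
centroids = {c: mean(v) for c, v in source.items()}

# Target domain: same activities, but every reading carries a sensor offset
target_unlabelled = [[3.0, 3.1], [3.2, 2.9], [6.1, 6.0], [5.9, 6.2]]

# Simple alignment: shift target features so their mean matches the source mean
src_mean = mean([v for vs in source.values() for v in vs])
tgt_mean = mean(target_unlabelled)
shift = [s - t for s, t in zip(src_mean, tgt_mean)]
aligned = [[a + d for a, d in zip(x, shift)] for x in target_unlabelled]

preds = [nearest_centroid(x, centroids) for x in aligned]
```

Without the shift, every target sample would land nearer the wrong centroid; after alignment the source-trained classifier transfers correctly, which is the effect the TCA/STL-style columns above quantify on real datasets.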