José Antonio Santoyo-Ramón, Eduardo Casilari-Pérez, José Manuel Cano-García.
Abstract
Wearable Fall Detection Systems (FDSs) have gained much research interest during last decade. In this regard, Machine Learning (ML) classifiers have shown great efficiency in discriminating falls and conventional movements or Activities of Daily Living (ADLs) based on the analysis of the signals captured by transportable inertial sensors. Due to the intrinsic difficulties of training and testing this type of detectors in realistic scenarios and with their target audience (older adults), FDSs are normally benchmarked against a predefined set of ADLs and emulated falls executed by volunteers in a controlled environment. In most studies, however, samples from the same experimental subjects are used to both train and evaluate the FDSs. In this work, we investigate the performance of ML-based FDS systems when the test subjects have physical characteristics (weight, height, body mass index, age, gender) different from those of the users considered for the test phase. The results seem to point out that certain divergences (weight, height) of the users of both subsets (training ad test) may hamper the effectiveness of the classifiers (a reduction of up 20% in sensitivity and of up to 5% in specificity is reported). However, it is shown that the typology of the activities included in these subgroups has much greater relevance for the discrimination capability of the classifiers (with specificity losses of up to 95% if the activity types for training and testing strongly diverge).Entities:
Year: 2021 PMID: 34836975 PMCID: PMC8626458 DOI: 10.1038/s41598-021-02537-z
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
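The abstract's core experiment partitions the volunteers by a physical characteristic before splitting samples, so that training and test sets contain disjoint subjects. A minimal sketch of such a subject-wise split (the `Subject` dataclass and `split_by_weight` helper are illustrative names, not from the paper's code):

```python
from dataclasses import dataclass

@dataclass
class Subject:
    sid: str
    weight_kg: float

def split_by_weight(subjects, train_fraction=0.8, heaviest_first=True):
    """Subject-wise partition: e.g., train on the 80% heaviest
    volunteers and test on the remaining 20% (disjoint subjects)."""
    ranked = sorted(subjects, key=lambda s: s.weight_kg, reverse=heaviest_first)
    cut = round(train_fraction * len(ranked))
    return ranked[:cut], ranked[cut:]

# Hypothetical cohort; the weights only illustrate the ranking step.
cohort = [Subject("A", 50), Subject("B", 90), Subject("C", 70),
          Subject("D", 60), Subject("E", 80)]
train, test = split_by_weight(cohort)
print([s.sid for s in train], [s.sid for s in test])
```

The same ranking step works for height, BMI, or age; the gender split in the later tables is a categorical partition rather than a ranked one.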
Basic data of the repositories employed for the analysis.
| Dataset | Source | Number of types of ADLs/falls | Number of samples (ADLs/falls) | Duration of the samples (s) | Sampling rate (Hz) | Accelerometer range (g) |
|---|---|---|---|---|---|---|
| DOFDA | | 5/13 | 432 (120/312) | [1.96–17.262] s | 33 | ± 16 |
| Erciyes University | | 16/20 | 3302 (1476/1826) | [8.36–37.76] s | 25 | ± 16 |
| SisFall | | 19/15 | 4505 (2707/1798) | [9.99–179.99] s | 200 | ± 16 |
| UMAFall | | 12/3 | 746 (538/208) | 15 s (all samples) | 20 | ± 16 |
| UP-Fall | | 6/5 | 559 (304/255) | [9.409–59.979] s | 14 | ± 8 |
Characteristics of the experimental subjects in the employed datasets.
| Dataset | Number of subjects (females/males) | Age (years) | Weight (kg) | Height (cm) |
|---|---|---|---|---|
| DOFDA | 8 (2/6) | [22–29] | [60–94] | [173–187] |
| Erciyes University | 17 (7/10) | [19–27] | [47–92] | [157–184] |
| SisFall | 38 (19/19) | [19–75] | [41.5–102] | [149–183] |
| UMAFall | 19 (8/11) | [18–68] | [50–97] | [156–193] |
| UP-Fall | 17 (8/9) | [18–24] | [53–99] | [157–175] |
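One of the split criteria used later is the Body Mass Index, derived from the weight and height columns above. A minimal sketch (note that the per-column ranges are not paired per subject, so combining extremes is only illustrative):

```python
def bmi(weight_kg: float, height_cm: float) -> float:
    """Body Mass Index: weight in kg divided by squared height in meters."""
    h_m = height_cm / 100.0
    return weight_kg / (h_m * h_m)

# Illustrative values only, taken from the column bounds above
# (not necessarily the same subject):
print(round(bmi(41.5, 149), 1))  # 18.7
print(round(bmi(102, 183), 1))   # 30.5
```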
Performance metrics of the four best performing classifiers using fivefold cross validation and a ‘fair’ distribution of the samples between the training and testing subsets.
| Dataset | Features | Algorithm and hyperparameters | Se (%) | Sp (%) | Geometric mean (%) |
|---|---|---|---|---|---|
| DOFDA | |||||
| HCTSA | SVM (linear kernel) | 99.01 | 98.33 | 98.65 ± 1.77% | |
| Own selection | KNN (Euclidean, 10 neighbors) | 98.02 | 98.18 | 98.08 ± 2.08% | |
| HCTSA | SVM (quadratic kernel) | 99.34 | 96.67 | 97.97 ± 2.44% | |
| Erciyes | |||||
| Own selection | SVM (medium gaussian kernel) | 99.34 | 98.98 | 99.16 ± 0.17% | |
| Own selection | KNN (cosine, 5 neighbors) | 99.07 | 99.05 | 99.06 ± 0.12% | |
| Own selection | KNN (Minkowski, 5 neighbors) | 99.45 | 98.64 | 99.04 ± 0.12% | |
| SisFall | |||||
| HCTSA | SVM (quadratic kernel) | 99.78 | 99.96 | 99.87 ± 0.19% | |
| HCTSA | SVM (medium gaussian kernel) | 99.11 | 99.96 | 99.54 ± 0.12% | |
| HCTSA | DT (Fine) | 98.89 | 99.96 | 99.42 ± 0.23% | |
| UMAFall | |||||
| Own selection | DT (Coarse model) | 98.38 | 98.99 | 98.67 ± 1.93% | |
| Own selection | SVM (medium gaussian kernel) | 97.87 | 99.24 | 98.55 ± 0.55% | |
| Own selection | KNN (Euclidean, 5 neighbors) | 98.93 | 97.97 | 98.45 ± 1.10% | |
| UP-Fall ||||||
| Own selection | SVM (linear kernel) | 99.59 | 98.02 | 98.80 ± 1.31% | |
| Own selection | SVM (medium gaussian kernel) | 98.78 | 98.82 | 98.79 ± 1.32% | |
| Own selection | KNN (Euclidean, 10 neighbors) | 99.18 | 97.23 | 98.20 ± 1.31% | |
| Own selection | KNN (Euclidean, 5 neighbors) | 98.78 | 97.62 | 98.19 ± 1.65% |
aThe last column includes the standard deviation of the measurements for the five-fold tests.
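The third metric reported alongside sensitivity and specificity throughout these tables is the geometric mean of the two (e.g., Se = 100.00 and Sp = 50.00 yield 70.71 in the ADL-category table below; small discrepancies elsewhere come from rounding of the displayed Se/Sp). A minimal check:

```python
import math

def geometric_mean(se: float, sp: float) -> float:
    """Geometric mean of sensitivity and specificity, both in percent."""
    return math.sqrt(se * sp)

# Values taken from rows of the tables in this entry:
print(round(geometric_mean(100.00, 50.00), 2))  # 70.71
print(round(geometric_mean(30.67, 99.32), 2))   # 55.19
```

Unlike plain accuracy, the geometric mean collapses to near zero when either class is almost never detected, which is why the strongly diverging activity-type splits below score so low despite high sensitivity.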
Performance metrics of the best performing classifier (‘fair’ case) when the weight is used as a criterion to select the subjects of the training subset.
| Dataset | Features and algorithm | Subjects included in the training subset | Se (%) | Sp (%) | Geometric mean (%) |
|---|---|---|---|---|---|
| DOFDA | HCTSA features Naive Bayes (Gaussian) | Random selection of users | 97.38 | 100.00 | 98.67 |
| Subjects (80%) with highest weight | 94.81 | 100.00 | 97.37 | ||
| Subjects (80%) with lowest weight | 93.51 | 100.00 | 96.70 | ||
| Erciyes | Own selection of features SVM (quadratic kernel) | Subjects (80%) with highest weight | 100.00 | 100.00 | 100.00 |
| Subjects (80%) with lowest weight | 99.41 | 97.06 | 98.23 | ||
| Random selection of users | 97.83 | 98.43 | 98.12 | ||
| SisFall | HCTSA features SVM (cubic kernel) | Subjects (80%) with highest weight | 100.00 | 100.00 | 100.00 |
| Random selection of users | 99.74 | 99.96 | 99.85 | ||
| Subjects (80%) with lowest weight | 85.33 | 100.00 | 92.38 | ||
| UMAFall | Own selection of features KNN (Euclidean, 10 neighbors) ||||
| Subjects (80%) with lowest weight | 100.00 | 95.38 | 97.67 | ||
| Random selection of users | 98.28 | 97.05 | 97.66 | ||
| Subjects (80%) with highest weight | 91.55 | 98.68 | 95.05 | ||
| UP-Fall | Own selection of features SVM (linear kernel) | Subjects (80%) with highest weight | 100.00 | 100.00 | 100.00 |
| Subjects (80%) with lowest weight | 100.00 | 97.56 | 98.77 | ||
| Random selection of users | 99.65 | 97.56 | 98.60 |
Performance metrics of the best performing classifier (‘fair’ case) when the height is used as a criterion to select the subjects of the training subset.
| Dataset | Features and algorithm | Subjects included in the training subset | Se (%) | Sp (%) | Geometric mean (%) |
|---|---|---|---|---|---|
| DOFDA | HCTSA features Naive Bayes (Gaussian) | Tallest subjects (80%) | 98.65 | 100.00 | 99.32 |
| Random selection of users | 97.38 | 100.00 | 98.67 | ||
| Shortest subjects (80%) | 93.59 | 100.00 | 96.74 | ||
| Erciyes | Own selection of features SVM (quadratic kernel) | Tallest subjects (80%) | 100.00 | 100.00 | 100.00 |
| Random selection of users | 97.83 | 98.43 | 98.12 | ||
| Shortest subjects (80%) | 91.22 | 93.73 | 92.46 | ||
| SisFall | HCTSA features SVM (cubic kernel) | Tallest subjects (80%) | 100.00 | 100.00 | 100.00 |
| Random selection of users | 99.74 | 99.96 | 99.85 | ||
| Shortest subjects (80%) | 99.78 | 99.83 | 99.80 | ||
| UMAFall | Own selection of features KNN (Euclidean, 10 neighbors) ||||
| Shortest subjects (80%) | 100.00 | 96.05 | 98.01 | ||
| Random selection of users | 98.28 | 97.05 | 97.66 | ||
| Tallest subjects (80%) | 77.78 | 96.67 | 86.71 | ||
| UP-Fall | Own selection of features SVM (linear kernel) | Tallest subjects (80%) | 100.00 | 97.83 | 98.91 |
| Shortest subjects (80%) | 100.00 | 97.67 | 98.83 | ||
| Random selection of users | 99.65 | 97.56 | 98.60 |
Performance metrics of the best performing classifier (‘fair’ case) when the Body Mass Index (BMI) is used as a criterion to select the subjects of the training subset.
| Dataset | Features and algorithm | Subjects included in the training subset | Se (%) | Sp (%) | Geometric mean (%) |
|---|---|---|---|---|---|
| DOFDA | HCTSA features Naive Bayes (Gaussian) | Subjects (80%) with lowest BMI | 100.00 | 100.00 | 100.00 |
| Random selection of users | 97.38 | 100.00 | 98.67 | ||
| Subjects (80%) with highest BMI | 94.81 | 100.00 | 97.37 | ||
| Erciyes | Own selection of features SVM (quadratic kernel) | ||||
| Random selection of users | 97.83 | 98.43 | 98.12 | ||
| Subjects (80%) with lowest BMI | 98.75 | 100.00 | 99.37 | ||
| Subjects (80%) with highest BMI | 99.36 | 100.00 | 99.68 | ||
| SisFall | HCTSA features SVM (cubic kernel) | ||||
| Random selection of users | 99.74 | 99.96 | 99.85 | ||
| Subjects (80%) with highest BMI | 99.83 | 99.68 | 99.76 | ||
| Subjects (80%) with lowest BMI | 98.22 | 99.62 | 98.92 | ||
| UMAFall | Own selection of features KNN (Euclidean, 10 neighbors) ||||
| Subjects (80%) with lowest BMI | 100.00 | 95.77 | 97.86 | ||
| Random selection of users | 98.28 | 97.05 | 97.66 | ||
| Subjects (80%) with highest BMI | 94.38 | 98.25 | 96.29 | ||
| UP-Fall | Own selection of features SVM (linear kernel) | Subjects (80%) with highest BMI | 100.00 | 100.00 | 100.00 |
| Subjects (80%) with lowest BMI | 100.00 | 97.62 | 98.80 | ||
| Random selection of users | 99.65 | 97.56 | 98.60 |
Performance metrics of the best performing classifier (‘fair’ case) when the age is used as a criterion to select the subjects of the training subset.
| Dataset | Features and algorithm | Subjects included in the training subset | Se (%) | Sp (%) | Geometric mean (%) |
|---|---|---|---|---|---|
| DOFDA | HCTSA features Naive Bayes (Gaussian) | Youngest subjects (80%) | 100.00 | 100.00 | 100.00 |
| Oldest subjects (80%) | 100.00 | 100.00 | 100.00 | | |
| Random selection of users | 97.38 | 100.00 | 98.67 | | |
| Erciyes | Own selection of features SVM (quadratic kernel) |||||
| Youngest subjects (80%) | 98.74 | 100.00 | 99.37 | | |
| Oldest subjects (80%) | 98.07 | 100.00 | 99.03 | | |
| Random selection of users | 97.83 | 98.43 | 98.12 | | |
| SisFall | HCTSA features SVM (cubic kernel) | Youngest subjects (80%) | n.c. | 99.56 | n.c. |
| Oldest subjects (80%) | 100.00 | 100.00 | 100.00 | | |
| Random selection of users | 99.74 | 99.96 | 99.85 | | |
| Subjects older than 50 | 98.03 | 98.66 | 98.34 | | |
| Subjects younger than 50 | 30.67 | 99.32 | 55.19 | | |
| UMAFall | Own selection of features KNN (Euclidean, 10 neighbors) | Youngest subjects (80%) | n.c. | 100.00 | n.c. |
| Oldest subjects (80%) | 100.00 | 100.00 | 100.00 | | |
| Random selection of users | 98.28 | 97.05 | 97.66 | | |
| UP-Fall | Own selection of features SVM (linear kernel) | Oldest subjects (80%) | 100.00 | 97.67 | 98.83 |
| Random selection of users | 99.65 | 97.56 | 98.60 | | |
| Youngest subjects (80%) | 97.62 | 97.83 | 97.72 | | |
n.c. not computable.
Performance metrics of the best performing classifier (‘fair’ case) when the gender is used as a criterion to select the subjects of the training subset.
| Dataset | Features and algorithm | Subjects included in the training subset | Se (%) | Sp (%) | Geometric mean (%) |
|---|---|---|---|---|---|
| DOFDA | HCTSA features Naive Bayes (Gaussian) | Male subjects (testing with females) | 100.00 | 100.00 | 100.00 |
| Random selection of users | 97.38 | 100.00 | 98.67 | ||
| Female subjects (testing with males) | 95.26 | 100.00 | 97.60 | ||
| Erciyes | Own selection of features SVM (quadratic kernel) | ||||
| Random selection of users | 97.83 | 98.43 | 98.12 | ||
| Female subjects (testing with males) | 97.29 | 98.06 | 97.68 | ||
| Male subjects (testing with females) | 96.53 | 98.15 | 97.34 | ||
| SisFall | HCTSA features SVM (cubic kernel) | Male subjects (testing with females) | 100.00 | 99.92 | 99.96 |
| Random selection of users | 99.74 | 99.96 | 99.85 | ||
| Female subjects (testing with males) | 99.00 | 99.85 | 99.42 | ||
| UMAFall | Own selection of features KNN (Euclidean, 10 neighbors) ||||
| Random selection of users | 98.28 | 97.05 | 97.66 | ||
| Male subjects (testing with females) | 95.35 | 98.02 | 96.68 | ||
| Female subjects (testing with males) | 97.93 | 91.81 | 94.82 | ||
| UP-Fall | Own selection of features SVM (linear kernel) | ||||
| Male subjects (testing with females) | 99.10 | 98.31 | 98.70 | ||
| Random selection of users | 99.65 | 97.56 | 98.60 | ||
| Female subjects (testing with males) | 98.51 | 95.56 | 97.02 |
Performance metrics of the best performing classifier (‘fair’ case) when different categories of ADL are used in the training and the testing subsets.
| Dataset | Features and algorithm | ADL categories used for training | ADL categories used for testing | Se (%) | Sp (%) | Geometric mean (%) |
|---|---|---|---|---|---|---|
| DOFDA | HCTSA features Naïve Bayes (Gaussian) | |||||
| Basic ADLs | Standard ADLs | 100.00 | 68.00 | 82.46 | ||
| Standard ADLs | Basic ADLs | 100.00 | 50.00 | 70.71 | ||
| Erciyes | Own selection of features SVM (quadratic kernel) | |||||
| All but standard ADLs | Standard ADLs | 99.34 | 98.90 | 99.12 | ||
| All but basic ADLs | Basic ADLs | 99.34 | 97.22 | 98.28 | ||
| All but sporting ADLs | Sporting ADLs | 98.68 | 95.65 | 97.15 | ||
| All but ‘Near Falls’ | Near Falls | 100.00 | 92.39 | 96.12 | ||
| SisFall | HCTSA SVM (cubic kernel) | |||||
| All but basic ADLs | Basic ADLs | 99.83 | 92.96 | 96.34 | ||
| All but sporting ADLs | Sporting ADLs | 99.67 | 84.46 | 91.75 | ||
| All but standard ADLs | Standard ADLs | 99.67 | 72.34 | 84.91 | ||
| UMAFall | Own selection of features KNN (Euclidean, 10 neighbors) |||||
| All but basic ADLs | Basic ADLs | 100.00 | 93.84 | 96.87 | ||
| Standard ADLs | Standard ADLs | 95.16 | 97.87 | 96.51 | ||
| All but sporting ADLs | Sporting ADLs | 100.00 | 1.82 | 13.48 | ||
| UP-Fall | Own selection of features SVM (linear kernel) | |||||
| All but basic ADLs | Basic ADLs | 100.00 | 100.00 | 100.00 | ||
| Standard ADLs | Standard ADLs | 98.77 | 95.93 | 97.34 | ||
| All but sporting ADLs | Sporting ADLs | 100.00 | 2.17 | 14.74 | ||