Carlos Medrano, Raul Igual, Inmaculada Plaza, Manuel Castro.
Abstract
Despite being a major public health problem, falls in the elderly still cannot be detected efficiently. Many studies have used acceleration as the main input to discriminate between falls and activities of daily living (ADL). In recent years, there has been increasing interest in using smartphones for fall detection. The most promising results have been obtained by supervised machine learning algorithms. However, a drawback of these approaches is that they rely on falls simulated by young or mature people, which might not represent every possible fall situation and might differ from older people's falls. Thus, we propose to tackle the problem of fall detection by applying a class of novelty detection methods that rely only on true ADL. In this view, a fall is any movement that is abnormal with respect to ADL. A system based on these methods could easily adapt itself to new situations, since new ADL could be recorded continuously and the system re-trained on the fly. The goal of this work is to explore the use of such novelty detectors by selecting one of them and comparing it with a state-of-the-art supervised method under different conditions. The data sets we have collected were recorded with smartphones. Ten volunteers simulated eight types of falls, whereas ADL were recorded while they carried the phone in their real life. Even though we have not collected data from the elderly, the data sets were suitable to check the adaptability of novelty detectors, and they have been made publicly available to improve the reproducibility of our results. We have studied several novelty detection methods, selecting the nearest-neighbour-based technique (NN) as the most suitable. We have then compared NN with the Support Vector Machine (SVM). In most situations a generic SVM outperformed an adapted NN.
Year: 2014 PMID: 24736626 PMCID: PMC3988107 DOI: 10.1371/journal.pone.0094811
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
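The core novelty-detection idea in the abstract — train only on true ADL and flag any movement that lies far from them as a fall — can be sketched as a 1NN distance score over acceleration feature windows. This is a minimal illustration, not the authors' code; the function names, feature-window representation, and threshold choice are our own assumptions:

```python
import numpy as np

def nn_novelty_scores(train_adl, windows):
    """Novelty score of each test window: Euclidean distance to the
    nearest ADL training window (1NN). A larger score means the
    movement is more abnormal with respect to the recorded ADL."""
    # Pairwise squared distances via broadcasting: shape (n_test, n_train)
    d2 = ((windows[:, None, :] - train_adl[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2.min(axis=1))

def detect_falls(train_adl, windows, threshold):
    """Flag a window as a fall when its 1NN distance exceeds a
    threshold, e.g. tuned to a high percentile of held-out ADL scores
    (the tuning rule here is an assumption, not the paper's)."""
    return nn_novelty_scores(train_adl, windows) > threshold
```

Because the training set contains only ADL, adapting the detector to a new user or phone position just means appending newly recorded ADL windows to `train_adl` — the on-the-fly re-training the abstract refers to.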
Figure 4. Schematic summary of cross-validation conditions.
ADL and FALL represent the original data set (50 Hz, phone in pocket). Additional conditions are given in parentheses; for instance, FALL(type = t) denotes the falls of a given type t. ADL-Hand bag and FALL-Hand bag are the data sets obtained while carrying the phone in a hand bag.
Figure 1. Some examples of acceleration shapes obtained during falls and ADL.
Figure 2. The AUC of kNN (blue points), kNN-sum (red squares) and K-means+NN (green triangles) for different values of k.
Comparison of novelty fall detectors.
| Algorithm | AUC mean (std) | Sensitivity (SE) | Specificity (SP) |
| kNN k = 1 | 0.9554 (0.0052) | 0.907 | 0.905 |
| kNN-sum k = 2 | 0.9548 (0.0052) | 0.913 | 0.901 |
| K-means + 1NN (K = 800) | 0.9575 (0.0056) | 0.929 | 0.890 |
| One-Class SVM | 0.9439 (0.0060) | 0.881 | 0.890 |
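The three nearest-neighbour detectors compared above can be expressed as different novelty scores over the ADL training set. The sketch below uses scikit-learn and common formulations of these scores (distance to the k-th neighbour, sum of the k nearest distances, and distance to the nearest of K cluster centroids); the paper's exact variants may differ in detail:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def knn_score(train, x, k):
    """kNN: distance to the k-th nearest ADL training sample."""
    nn = NearestNeighbors(n_neighbors=k).fit(train)
    d, _ = nn.kneighbors(x)
    return d[:, -1]

def knn_sum_score(train, x, k):
    """kNN-sum: sum of the distances to the k nearest ADL samples."""
    nn = NearestNeighbors(n_neighbors=k).fit(train)
    d, _ = nn.kneighbors(x)
    return d.sum(axis=1)

def kmeans_nn_score(train, x, n_clusters):
    """K-means+NN: distance to the nearest of K cluster centroids,
    i.e. 1NN against a compressed training set (K = 800 in the table)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(train)
    nn = NearestNeighbors(n_neighbors=1).fit(km.cluster_centers_)
    d, _ = nn.kneighbors(x)
    return d[:, 0]
```

The K-means variant trades a one-off clustering cost for much cheaper per-window scoring, which matters on a phone where the ADL set grows continuously.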
Comparison of 1NN with SVM in terms of AUC (mean and std).
| Conditions applied | SVM AUC | 1NN AUC | Difference (SVM − 1NN) | p-value |
| Standard 10-fold CV | 0.977 (0.010) | 0.956 (0.011) | 0.022 (0.006) | <0.01 |
| Fall type-wise CV | 0.976 (0.012) | 0.956 (0.013) | 0.020 (0.012) | <0.01 |
| Phone sampling at 25 Hz | 0.969 (0.008) | 0.946 (0.010) | 0.022 (0.007) | <0.01 |
| Phone sampling at 16.7 Hz | 0.961 (0.009) | 0.937 (0.010) | 0.024 (0.008) | <0.01 |
| Phone in hand bag | 0.899 (0.011) | 0.951 (0.007) | −0.053 (0.007) | <0.01 |
Different conditions are considered in each row. The first row is the standard cross-validation (CV). In the second row the CV is done by leaving out each time a different type of fall for testing. In the remaining rows, the validation sets for CV are taken under varying conditions. 1NN is trained and tested with data obtained under the same conditions, while SVM is trained with data obtained under “standard” conditions (50 Hz, phone in pocket).
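The fall-type-wise CV described in this note — each round holds out every fall of one of the eight simulated types, so the detector is tested on a fall type it never saw — can be sketched as a split generator. This is a hypothetical helper illustrating the protocol, not the authors' code:

```python
import numpy as np

def fall_type_wise_splits(fall_types):
    """Leave-one-fall-type-out: for each fall type t, yield training
    indices (all other types) and test indices (all falls of type t)."""
    fall_types = np.asarray(fall_types)
    for t in np.unique(fall_types):
        test = np.where(fall_types == t)[0]
        train = np.where(fall_types != t)[0]
        yield t, train, test
```

This checks generalization across fall types, which is the relevant question when real falls (e.g. by the elderly) may differ from every simulated type in the training data.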
Comparison of 1NN with SVM in terms of SE and SP.
| Conditions applied | SVM SE | SVM SP | SVM GM | 1NN SE | 1NN SP | 1NN GM |
| Standard 10-fold CV | 0.954 | 0.924 | 0.939 | 0.910 | 0.903 | 0.907 |
| Fall type-wise CV | 0.953 | 0.926 | 0.939 | 0.904 | 0.915 | 0.909 |
| Phone sampling at 25 Hz | 0.930 | 0.918 | 0.924 | 0.891 | 0.901 | 0.896 |
| Phone sampling at 16.7 Hz | 0.893 | 0.919 | 0.906 | 0.895 | 0.880 | 0.887 |
| Phone in hand bag | 0.903 | 0.791 | 0.845 | 0.910 | 0.893 | 0.902 |
Different conditions are considered in each row. The first row is the standard cross-validation (CV). In the second row the CV is done by leaving out each time a different type of fall for testing. In the remaining rows, the validation sets for CV are taken under varying conditions. 1NN is trained and tested with data obtained under the same conditions, while SVM is trained with data obtained under “standard” conditions (50 Hz, phone in pocket). The third value reported for each detector is the geometric mean √(SE·SP) of sensitivity and specificity.
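The per-detector summary value in the table above is consistent with the geometric mean of SE and SP, a standard single-number summary for detection tasks that penalizes trading one metric against the other. A small sketch of the metrics (helper names are ours):

```python
import math

def sensitivity_specificity(tp, fn, tn, fp):
    """SE = TP/(TP+FN): fraction of falls detected.
    SP = TN/(TN+FP): fraction of ADL that raise no alarm."""
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    return se, sp

def geometric_mean(se, sp):
    """Summary that only stays high when SE and SP are both high."""
    return math.sqrt(se * sp)
```

For instance, the first row's SVM values give √(0.954 · 0.924) ≈ 0.939, matching the reported summary column.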
Figure 3. ROC curve for SVM (blue points) and 1NN (red squares).
Comparison between generic SVM and 1NN detectors (SVMG, 1NNG) and a personalized 1NN detector (1NNP) in terms of AUC (mean and std).
| Person | SVMG AUC | 1NNG AUC | 1NNP AUC | SVMG − 1NNP | p-value | 1NNP − 1NNG | p-value |
| Person 0 | 0.976 (0.007) | 0.929 (0.017) | 0.955 (0.013) | 0.021 (0.009) | <0.01 | 0.026 (0.008) | <0.01 |
| Person 1 | 0.986 (0.010) | 0.974 (0.012) | 0.979 (0.012) | 0.007 (0.008) | 0.014 | 0.005 (0.002) | <0.01 |
| Person 2 | 0.941 (0.007) | 0.941 (0.012) | 0.950 (0.011) | −0.009 (0.012) | 0.023 | 0.009 (0.004) | <0.01 |
| Person 3 | 0.983 (0.012) | 0.941 (0.014) | 0.965 (0.009) | 0.018 (0.011) | <0.01 | 0.024 (0.007) | <0.01 |
| Person 4 | 0.963 (0.007) | 0.954 (0.010) | 0.953 (0.012) | 0.009 (0.009) | <0.01 | −0.000 (0.004) | 0.436 |
| Person 5 | 0.921 (0.022) | 0.653 (0.053) | 0.962 (0.013) | −0.040 (0.022) | <0.01 | 0.309 (0.046) | <0.01 |
| Person 6 | 0.964 (0.014) | 0.912 (0.024) | 0.950 (0.020) | 0.014 (0.013) | <0.01 | 0.038 (0.013) | <0.01 |
| Person 7 | 0.971 (0.007) | 0.952 (0.010) | 0.965 (0.011) | 0.007 (0.007) | <0.01 | 0.013 (0.005) | <0.01 |
| Person 8 | 0.988 (0.007) | 0.948 (0.022) | 0.966 (0.019) | 0.022 (0.018) | <0.01 | 0.019 (0.011) | <0.01 |
| Person 9 | 0.988 (0.006) | 0.945 (0.012) | 0.977 (0.010) | 0.011 (0.006) | <0.01 | 0.032 (0.009) | <0.01 |
| Average | 0.968 | 0.915 | 0.962 | 0.006 | | 0.047 | |
For each person, the personalized 1NN is trained only with part of his or her own data, and tested with the remaining data. The generic SVM or 1NN in turn are trained with data from the remaining people but tested on the same validation set. This is repeated ten times for cross-validation.
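The protocol in this note — a generic detector trained on everyone else's data versus a personalized detector trained on part of the subject's own data, both evaluated on the same held-out portion — can be sketched as a split generator. This is a hypothetical helper; the fold handling and random shuffling are our assumptions about the described ten-fold repetition:

```python
import numpy as np

def generic_vs_personal_splits(person_ids, n_folds=10, seed=0):
    """For each person p, yield (p, generic_train, personal_train, test):
    generic_train is every other person's data; personal_train and test
    partition p's own data, one fold held out per round."""
    person_ids = np.asarray(person_ids)
    rng = np.random.default_rng(seed)
    for p in np.unique(person_ids):
        own = rng.permutation(np.where(person_ids == p)[0])
        others = np.where(person_ids != p)[0]
        for fold in np.array_split(own, n_folds):
            personal_train = np.setdiff1d(own, fold)
            yield p, others, personal_train, fold
```

Testing both detectors on the identical validation set is what makes the per-person AUC differences (and their paired p-values) in the table meaningful.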
Comparison between generic SVM and 1NN detectors (SVMG, 1NNG) and a personalized 1NN detector (1NNP) in terms of SE and SP.
| Person | SVMG SE | SVMG SP | SVMG GM | 1NNG SE | 1NNG SP | 1NNG GM | 1NNP SE | 1NNP SP | 1NNP GM |
| Person 0 | 0.908 | 0.946 | 0.927 | 0.827 | 0.871 | 0.849 | 0.867 | 0.925 | 0.895 |
| Person 1 | 0.992 | 0.923 | 0.957 | 0.983 | 0.901 | 0.941 | 0.964 | 0.945 | 0.955 |
| Person 2 | 0.861 | 0.942 | 0.900 | 0.892 | 0.894 | 0.893 | 0.932 | 0.894 | 0.913 |
| Person 3 | 0.970 | 0.929 | 0.950 | 0.904 | 0.857 | 0.880 | 0.952 | 0.901 | 0.926 |
| Person 4 | 0.877 | 0.939 | 0.907 | 0.944 | 0.878 | 0.911 | 0.952 | 0.876 | 0.913 |
| Person 5 | 0.866 | 0.784 | 0.824 | 0.804 | 0.545 | 0.662 | 0.950 | 0.909 | 0.929 |
| Person 6 | 0.961 | 0.859 | 0.909 | 0.898 | 0.831 | 0.864 | 0.950 | 0.859 | 0.903 |
| Person 7 | 0.917 | 0.965 | 0.941 | 0.919 | 0.930 | 0.924 | 0.955 | 0.930 | 0.942 |
| Person 8 | 0.961 | 0.953 | 0.957 | 0.900 | 0.884 | 0.892 | 0.953 | 0.907 | 0.930 |
| Person 9 | 0.981 | 0.925 | 0.953 | 0.932 | 0.812 | 0.870 | 0.940 | 0.912 | 0.926 |
| Average | 0.929 | 0.917 | 0.922 | 0.900 | 0.840 | 0.869 | 0.941 | 0.906 | 0.923 |
For each person, the personalized 1NN is trained only with part of his or her own data, and tested with the remaining data. The generic SVM or 1NN in turn are trained with data from the remaining people but tested on the same validation set. This is repeated ten times for cross-validation. The third value reported for each detector is the geometric mean √(SE·SP) of sensitivity and specificity.