Hsiang-Yun Huang, Chia-Yeh Hsieh, Kai-Chun Liu, Steen Jun-Ping Hsu, Chia-Tai Chan.
Abstract
Fluid intake is important for maintaining body fluid homeostasis. Inadequate fluid intake leads to negative health consequences such as headache, dizziness, and urolithiasis. However, people with busy lifestyles often forget to drink sufficient water and neglect the importance of fluid intake, so fluid intake management is important for helping people adopt individual drinking behaviors. This work proposes a fluid intake monitoring system based on a wearable inertial sensor, using a hierarchical approach to detect drinking activities, recognize sip gestures, and estimate fluid intake amount. Additionally, container-dependent amount estimation models are developed because the container influences the fluid intake amount. The proposed system achieves 94.42% accuracy for drinking detection, 90.17% sensitivity for gesture spotting, and 40.11% mean absolute percentage error (MAPE) for amount estimation. In particular, the MAPE of amount estimation is improved by approximately 10% compared with typical approaches. The results demonstrate the feasibility and effectiveness of the proposed fluid intake monitoring system.
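The hierarchical approach described in the abstract cascades three stages: drinking detection, sip-gesture spotting, and container-dependent amount estimation. The sketch below is illustrative only; the names (`Window`, `monitor_fluid_intake`, `container_hint`) are hypothetical, and the lambdas stand in for the paper's trained models.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Window:
    samples: List[float]   # one fixed-size window of inertial samples
    container_hint: str    # assumed container label, e.g. "can" or "bottle"

def monitor_fluid_intake(
    windows: List[Window],
    is_drinking: Callable[[Window], bool],                # stage 1: drinking detection
    spot_gesture: Callable[[Window], str],                # stage 2: gesture spotting
    amount_models: Dict[str, Callable[[Window], float]],  # stage 3: per-container regression
) -> float:
    """Accumulate estimated intake (mL) only from windows that pass all stages."""
    total_ml = 0.0
    for w in windows:
        if not is_drinking(w):          # skip non-drinking activities
            continue
        if spot_gesture(w) != "sip":    # only sip gestures contribute to amount
            continue
        total_ml += amount_models[w.container_hint](w)
    return total_ml
```

The cascade means the amount regressors only ever see windows already classified as sips, which is what lets the container-dependent models specialize.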
Keywords: drinking activity recognition; drinking amount estimation; fluid intake monitoring; wearable inertial sensor
Year: 2020 PMID: 33266484 PMCID: PMC7700234 DOI: 10.3390/s20226682
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. The system architecture of the proposed fluid intake monitoring system.
Figure 2. The orientation and position of a sensor in the experiment. (a) The orientation of an OPAL sensor. (b) The sensor worn on the right wrist. (c) The initial position of the sensor.
Figure 3. The performed activity sequence in the experiments: answer a phone call (A) → fluid intake with a can (FI1) → comb hair (C) → fluid intake with a bottle (FI2) → eat with hands (H) → fluid intake with a handleless mug (FI3) → eat with a spoon (S) → fluid intake with a handled mug (FI4) → answer a phone call (A).
Figure 4. An example of the signal collected through the accelerometer (Acc) and gyroscope (Gyro) for one subject. The colored bar is the ground truth of activities, including answering a phone call (A), fluid intake with a can (FI1), combing hair (C), fluid intake with a bottle (FI2), eating with hands (H), fluid intake with a handleless mug (FI3), eating with a spoon (S), and fluid intake with a handled mug (FI4).
The list of features extracted for drinking detection and gesture spotting.
| Features |
|---|
| Mean |
| Standard deviation |
| Variance |
| Maximum |
| Minimum |
| Range |
| Kurtosis |
| Skewness |
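The eight statistics in the table can be computed per signal window with plain NumPy. This is a minimal sketch using population moments; the paper's exact estimator conventions (sample vs. population variance, Fisher vs. Pearson kurtosis) are assumptions here.

```python
import numpy as np

def window_features(x: np.ndarray) -> dict:
    """Eight statistical features for one signal window (population moments)."""
    m = x.mean()
    d = x - m
    m2 = np.mean(d**2)   # second central moment = population variance
    m3 = np.mean(d**3)
    m4 = np.mean(d**4)
    return {
        "mean": float(m),
        "std": float(np.sqrt(m2)),
        "var": float(m2),
        "max": float(x.max()),
        "min": float(x.min()),
        "range": float(x.max() - x.min()),
        # excess kurtosis (Fisher convention); guarded against constant windows
        "kurtosis": float(m4 / m2**2 - 3.0) if m2 > 0 else 0.0,
        "skewness": float(m3 / m2**1.5) if m2 > 0 else 0.0,
    }
```

In practice these would be computed per axis of the accelerometer and gyroscope, giving a fixed-length feature vector per window.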
Figure 5. An example of detection results after rule-based modification, including fragment revision and duration thresholding.
Figure 6. An example of fragment revision, which relabels one or two consecutive fragments that differ from their neighbors. D represents drinking activities and O represents other activities.
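One plausible reading of the fragment-revision rule is to relabel any run of at most two windows whose neighbors on both sides share a label. The function name and the exact rule details are assumptions; the sketch uses the D/O labels from the caption.

```python
def revise_fragments(labels, max_frag=2):
    """Relabel runs of length <= max_frag whose two neighbors agree.

    Sketch of one interpretation of the paper's fragment-revision rule:
    'D' = drinking activity, 'O' = other activity.
    """
    out = list(labels)
    n = len(out)
    i = 0
    while i < n:
        # find the end of the current run of identical labels
        j = i
        while j < n and out[j] == out[i]:
            j += 1
        left = out[i - 1] if i > 0 else None
        right = out[j] if j < n else None
        # a short run sandwiched between matching neighbors is treated as noise
        if j - i <= max_frag and left is not None and left == right:
            out[i:j] = [left] * (j - i)
        i = j
    return out
```

Runs at the sequence boundaries are left untouched, since they have only one neighbor to compare against.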
The list of features extracted for amount estimation.
| Features |
|---|
| Mean |
| Standard deviation |
| Variance |
| Maximum |
| Minimum |
| Range |
| Kurtosis |
| Skewness |
| Duration of the sip gesture |
The best performance of drinking detection using different machine learning models.
| Machine Learning Model | Window Size (Samples) | Overlap (%) | Sensitivity (%) | Precision (%) | Specificity (%) | Accuracy (%) |
|---|---|---|---|---|---|---|
| ADA | 160 | 50 | 86.06 | 95.50 | 98.08 | 94.42 |
| DT | 128 | 50 | 80.83 | 95.05 | 97.91 | 92.79 |
| RF | 96 | 50 | 81.87 | 95.96 | 98.29 | 93.34 |
| NB | 256 | 50 | 92.20 | 60.28 | 69.75 | 76.76 |
| k-NN | 128 | 25 | 84.87 | 94.29 | 97.45 | 93.68 |
| SVM | 224 | 50 | 83.17 | 91.07 | 96.02 | 92.14 |
ADA: AdaBoost; DT: decision tree; RF: random forest; NB: Naïve Bayes; k-NN: k-nearest neighbors; SVM: support vector machine.
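The window sizes and overlaps compared in the table correspond to standard sliding-window segmentation of the sample stream. A minimal sketch (the function name is illustrative): at 50% overlap, a 160-sample window advances 80 samples per step.

```python
def sliding_windows(signal, window_size, overlap_pct):
    """Segment a sample sequence into fixed-size windows with percentage overlap.

    E.g. window_size=160 at overlap_pct=50 gives a step of 80 samples.
    """
    step = int(window_size * (1 - overlap_pct / 100.0))
    if step <= 0:
        raise ValueError("overlap must be below 100%")
    # only full windows are kept; a trailing partial window is discarded
    return [signal[i:i + window_size]
            for i in range(0, len(signal) - window_size + 1, step)]
```

Each resulting window would then be reduced to the statistical feature vector fed to the classifiers above.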
Figure 7. The detection performance in terms of (a) sensitivity, (b) precision, (c) specificity, and (d) accuracy using different machine learning models and sliding window techniques with 50% overlap.
Figure 8. The recognition performance in terms of (a) overall sensitivity and (b) overall precision across all gestures, and (c) sensitivity and (d) precision of the sip gesture, using different machine learning models and sliding window techniques.
The best performance of gesture spotting using different machine learning models.
| Machine Learning Model | Window Size (Samples) | Overlap (%) | Metric | Fetch | Lift | Sip | Drop | Release | Overall |
|---|---|---|---|---|---|---|---|---|---|
| ADA | 16 | 50 | Sensitivity (%) | 83.26 | 89.74 | 93.10 | 90.62 | 76.58 | 86.66 |
| | | | Precision (%) | 85.64 | 91.55 | 93.83 | 89.88 | 85.26 | 89.23 |
| DT | 16 | 25 | Sensitivity (%) | 89.72 | 84.08 | 92.24 | 86.98 | 80.79 | 86.76 |
| | | | Precision (%) | 84.03 | 91.92 | 94.66 | 91.61 | 87.53 | 89.95 |
| RF | 16 | 50 | Sensitivity (%) | 89.35 | 88.70 | 94.75 | 90.61 | 87.44 | 90.17 |
| | | | Precision (%) | 91.63 | 95.27 | 95.35 | 93.98 | 87.80 | 92.80 |
| NB | 16 | 25 | Sensitivity (%) | 57.35 | 77.01 | 90.46 | 88.46 | 54.78 | 73.61 |
| | | | Precision (%) | 71.02 | 90.93 | 88.23 | 64.08 | 71.51 | 77.15 |
| k-NN | 16 | 50 | Sensitivity (%) | 78.10 | 83.04 | 95.69 | 85.60 | 71.58 | 82.80 |
| | | | Precision (%) | 84.53 | 86.19 | 85.28 | 82.05 | 86.04 | 84.82 |
| SVM | 16 | 25 | Sensitivity (%) | 76.87 | 87.14 | 96.28 | 90.62 | 75.80 | 85.34 |
| | | | Precision (%) | 83.30 | 95.64 | 94.58 | 92.70 | 78.41 | 88.93 |
ADA: AdaBoost; DT: decision tree; RF: random forest; NB: Naïve Bayes; k-NN: k-nearest neighbors; SVM: support vector machine.
The performance of different regression models for amount estimation (Unit: %).
| Regression Model | Container-Independent (MPE/MAPE) | Can (MPE/MAPE) | Bottle (MPE/MAPE) | Handleless Mug (MPE/MAPE) | Handled Mug (MPE/MAPE) |
|---|---|---|---|---|---|
| Linear | 28.96 / 49.53 | 27.73 / 48.51 | 20.65 / 44.22 | 24.12 / 43.16 | 24.44 / 44.53 |
| Gaussian | 15.72 / 44.45 | 18.58 / 45.80 | 23.13 / 44.20 | 11.19 / 38.66 | 11.64 / 41.10 |
| SVM-linear 1 | 12.68 / 40.06 | 5.65 / 29.53 | 9.65 / 34.28 | 7.14 / 38.94 | 16.28 / 46.69 |
| SVM-Poly 2 | 24.51 / 47.75 | 24.97 / 47.49 | 22.24 / 45.45 | 19.26 / 41.14 | 19.91 / 42.74 |
| SVM-RBF 3 | 20.66 / 45.93 | 18.86 / 44.25 | 14.01 / 38.92 | 14.93 / 40.86 | 19.96 / 45.80 |
1 SVM-linear: SVM regression model with linear kernel function; 2 SVM-Poly: SVM regression model with polynomial kernel function; 3 SVM-RBF: SVM regression model with RBF kernel function.
The performance of different situations for amount estimation (Unit: %).
| Situation | Container-Independent (MPE/MAPE) | Can (MPE/MAPE) | Bottle (MPE/MAPE) | Handleless Mug (MPE/MAPE) | Handled Mug (MPE/MAPE) |
|---|---|---|---|---|---|
| (1) Drinking Activity | −34.85 / 65.86 | −27.24 / 51.59 | −0.82 / 50.76 | −35.89 / 69.09 | −1.30 / 55.51 |
| (2) Sip Gesture | −12.34 / 40.11 | −29.09 / 47.28 | −8.41 / 36.52 | −5.90 / 40.77 | −8.17 / 39.87 |
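The MPE and MAPE values reported in these tables can be computed as below. The sign convention (prediction minus ground truth, so a negative MPE indicates underestimation) is an assumption, not stated in the record.

```python
def mpe_mape(actual, predicted):
    """Mean percentage error (signed bias) and mean absolute percentage error.

    Both returned in percent; assumes no actual value is zero.
    """
    pes = [(p - a) / a * 100.0 for a, p in zip(actual, predicted)]
    mpe = sum(pes) / len(pes)                 # signed errors can cancel out
    mape = sum(abs(e) for e in pes) / len(pes)  # magnitudes cannot
    return mpe, mape
```

The gap between the two metrics is why both are reported: an MPE near zero (errors cancelling) can coexist with a large MAPE, as in the handled-mug columns above.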