| Literature DB >> 27409622 |
Muhammad Farooq, Edward Sazonov.
Abstract
The presence of speech and motion artifacts has been shown to impair the performance of wearable sensor systems used for automatic detection of food intake. This work presents a novel wearable device that can detect food intake even when the user is physically active and/or talking. The device consists of a piezoelectric strain sensor placed on the temporalis muscle, an accelerometer, and a data acquisition module connected to the temple of eyeglasses. Data from 10 participants were collected while they performed activities including quiet sitting, talking, eating while sitting, eating while walking, and walking. Piezoelectric strain sensor and accelerometer signals were divided into non-overlapping epochs of 3 s, and four features were computed for each signal. To differentiate between eating and not eating, as well as between sedentary postures and physical activity, two multiclass classification approaches are presented. The first approach used a single classifier with sensor fusion; the second used two-stage classification. The best results were achieved when two separate linear support vector machine (SVM) classifiers were trained for food intake and activity detection, and their results were combined using a decision tree (two-stage classification) to determine the final class. This approach resulted in an average F1-score of 99.85% and an area under the curve (AUC) of 0.99 for multiclass classification. With its ability to differentiate between food intake and activity level, this device may potentially be used for tracking both energy intake and energy expenditure.
Keywords: activity monitoring; chewing; energy expenditure; energy intake; food intake monitoring; piezoelectric strain sensor; support vector machine (SVM); wearable sensor
Year: 2016 PMID: 27409622 PMCID: PMC4970114 DOI: 10.3390/s16071067
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1(a) Portable wearable device for monitoring of food intake and level of physical activity. The data acquisition module also has an accelerometer and Bluetooth; (b) Eyeglasses with a piezoelectric sensor and data acquisition device connected to the temple of glasses.
Figure 2Sensor signals collected during the experiment. Piezoelectric sensor signals (first row) and accelerometer signals (second row) are used to differentiate between eating and physical activities. Eating episodes were marked by participants using a pushbutton (third row).
Feature sets computed from both piezoelectric and accelerometer sensor epochs.
| No. | Feature | Description * |
|---|---|---|
| 1 | Range of values | Rng(x(i)) = Max(x(i)) − Min(x(i)) |
| 2 | Standard deviation | STD(x(i)) = sqrt((1 / (N − 1)) · Σ from n = 1 to N of (x_i(n) − mean(x(i)))²) |
| 3 | Energy | Eng(x(i)) = Σ from n = 1 to N of x_i(n)² |
| 4 | Waveform length | WL(x(i)) = Σ from n = 2 to N of abs(x_i(n) − x_i(n − 1)) |

* i is the epoch number and n is the sample index within epoch i; N = L × f is the number of samples per epoch, where L = 3 s is the epoch duration and f = 1000 Hz is the sampling frequency.
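The four epoch features above can be sketched in a few lines of NumPy. This is an illustrative implementation based on the standard definitions of these features; the function names and the epoch-splitting helper are not from the paper.

```python
import numpy as np

def epoch_features(x):
    """Compute the four Table-1 features for one epoch x (1-D array).
    Illustrative sketch; names are ours, not the authors'."""
    rng = x.max() - x.min()           # range of values
    std = x.std(ddof=1)               # sample standard deviation (N - 1 in denominator)
    eng = np.sum(x ** 2)              # signal energy
    wl = np.sum(np.abs(np.diff(x)))   # waveform length (sum of successive differences)
    return np.array([rng, std, eng, wl])

def featurize(signal, fs=1000, epoch_s=3):
    """Split a signal sampled at fs Hz into non-overlapping epoch_s-second
    epochs (N = L x f samples each) and compute features per epoch."""
    n = fs * epoch_s
    epochs = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
    return np.vstack([epoch_features(e) for e in epochs])
```

With the paper's settings (f = 1000 Hz, L = 3 s), each epoch holds N = 3000 samples and yields one 4-element feature vector per sensor signal.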
Figure 3Histogram showing the distribution of piezoelectric strain sensor signal features: (a) Range of values (b) Standard deviation (c) Energy (d) Waveform length. The feature distribution shows that these features can easily provide information for separation of food intake from non-intake.
Figure 4Distribution of accelerometer sensor signal features: (a) Range of values (b) Standard deviation (c) Energy (d) Waveform length. Feature distribution shows that these features can easily provide information for separation of walking from the non-walking activity.
Decision tree rules for determining the final class label from the two-stage classifier.

| Food Intake SVM Output | Walking SVM Output | Final Class |
|---|---|---|
| 1 | −1 | Eating + Sitting |
| −1 | −1 | Sedentary |
| 1 | 1 | Eating + Walking |
| −1 | 1 | Walking |
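The second-stage decision tree is a direct lookup over the two binary SVM outputs (+1 / −1). A minimal sketch, assuming the class labels follow the order used in the confusion matrices (the function name is ours):

```python
def final_class(intake_out, walk_out):
    """Map the food-intake SVM output and walking SVM output (+1 or -1)
    to the final activity class, per the decision-tree rules above."""
    rules = {
        (1, -1): "Eating + Sitting",   # intake detected, not walking
        (-1, -1): "Sedentary",         # no intake, not walking
        (1, 1): "Eating + Walking",    # intake detected while walking
        (-1, 1): "Walking",            # walking without intake
    }
    return rules[(intake_out, walk_out)]
```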
Confusion matrix for the single multiclass linear SVM classifier. Precision, recall, and F1-score are also listed for each activity class.

| | Eating + Sitting | Sedentary | Eating + Walking | Walking | Recall | F1-Score |
|---|---|---|---|---|---|---|
| Eating + Sitting | 310 | 9 | 3 | 0 | 96.58% | 96.58% |
| Sedentary | 11 | 1128 | 0 | 16 | 97.66% | 98.09% |
| Eating + Walking | 0 | 0 | 256 | 15 | 95.20% | 94.16% |
| Walking | 0 | 6 | 17 | 414 | 94.28% | 93.85% |
| Precision | 96.58% | 98.52% | 93.14% | 93.42% | Mean: | 95.67% |
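Per-class precision, recall, and F1 can be derived from a confusion matrix like the ones above. A minimal sketch, assuming rows are true classes and columns are predictions (the function name is ours; the paper's reported values may come from cross-validation folds rather than a single matrix):

```python
import numpy as np

def prf(cm):
    """Per-class precision, recall, and F1 from a confusion matrix
    (rows = true classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                    # correct predictions per class
    precision = tp / cm.sum(axis=0)     # TP / all predicted as that class
    recall = tp / cm.sum(axis=1)        # TP / all truly of that class
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```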
Confusion matrix for multiclass classification using two-stage classification. Precision, recall, and F1-score are also listed for each activity class.

| | Eating + Sitting | Sedentary | Eating + Walking | Walking | Recall | F1-Score |
|---|---|---|---|---|---|---|
| Eating + Sitting | 322 | 0 | 0 | 0 | 100.00% | 100.00% |
| Sedentary | 0 | 1155 | 0 | 0 | 100.00% | 100.00% |
| Eating + Walking | 0 | 0 | 269 | 2 | 99.26% | 99.63% |
| Walking | 0 | 0 | 0 | 437 | 100.00% | 99.77% |
| Precision | 100.00% | 100.00% | 100.00% | 99.54% | Mean: | 99.85% |
Figure 5. Receiver operating characteristic (ROC) curves for the two classification approaches. (a) ROC curves for different classes when a single linear SVM model is trained; (b) ROC curves for two-stage classification. The first stage uses two linear SVM models for detection of food intake and walking, after which a simple decision tree is used to predict the final output class.
Area under the curve (AUC) for each class. Mean AUC for each classifier was computed as the average of AUCs over all classes (Mean column).

| Classifier | Eating + Sitting | Sedentary | Eating + Walking | Walking | Mean |
|---|---|---|---|---|---|
| Single multiclass linear SVM | 0.98 | 0.98 | 0.97 | 0.96 | 0.97 |
| Two-stage classification | 1 | 1 | 0.99 | 0.99 | 0.99 |