Kristina Yordanova, Stefan Lüdtke, Samuel Whitehouse, Frank Krüger, Adeline Paiement, Majid Mirmehdi, Ian Craddock, Thomas Kirste.
Abstract
Wellbeing is often affected by health-related conditions. Among them are nutrition-related conditions, which can significantly decrease the quality of life. We envision a system that monitors the kitchen activities of patients and that, based on the detected eating behaviour, could provide clinicians with indicators for improving a patient's health. To be successful, such a system has to reason about the person's actions and goals. To address this problem, we introduce a symbolic behaviour recognition approach called Computational Causal Behaviour Models (CCBM). CCBM combines a symbolic representation of a person's behaviour with probabilistic inference to reason about one's actions, the type of meal being prepared, and its potential health impact. To evaluate the approach, we use a cooking dataset of unscripted kitchen activities, which contains data from various sensors in a real kitchen. The results show that the approach is able to reason about the person's cooking actions. It is also able to recognise the goal in terms of the type of prepared meal and whether it is healthy. Furthermore, we compare CCBM to state-of-the-art approaches such as Hidden Markov Models (HMM) and decision trees (DT). The results show that our approach performs comparably to the HMM and DT when used for activity recognition. It outperformed the HMM for goal recognition of the type of meal, with a median accuracy of 1 compared to a median accuracy of 0.12 for the HMM. Our approach also outperformed the HMM for recognising whether a meal is healthy, with a median accuracy of 1 compared to a median accuracy of 0.5 for the HMM.
Keywords: activity recognition; behaviour monitoring; goal recognition; plan recognition; probabilistic models; sensor-based reasoning; symbolic models
Year: 2019 PMID: 30720749 PMCID: PMC6387167 DOI: 10.3390/s19030646
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Graphical representation of three different types of classifier, where X represents a hidden state and Y an observation used to infer information about X: (a) discriminative classifier; (b) generative classifier without temporal knowledge; and (c) generative classifier with temporal knowledge (figure adapted from [16]).
Existing CSSMs applied to activity and goal recognition problems.
| Approach | Plan Rec. | Durations | Action Sel. | Probability | Noise | Latent Infinity | Simulation | Multiple Goals | Unscripted Scenario |
|---|---|---|---|---|---|---|---|---|---|
| [ | ■ | □ | ■ | ■ | □ | ■ | no | □ | □ |
| [ | ■ | □ | ■ | ■ | □ | ■ | yes | ■ | □ |
| [ | □ | ■ | ■ | ■ | ■ | ■ | no | □ | □ |
| [ | ■ | ■ | ■ | ■ | ■ | ■ | no | □ | □ |
| [ | ■ | ■ | ■ | ■ | ■ | ■ | no | □ | □ |
| [ | ■ | □ | ■ | ■ | □ | ■ | yes | □ | □ |
| [ | ■ | □ | ■ | ■ | ■ | ■ | yes | □ | □ |
| [ | ■ | □ | ■ | ■ | □ | ■ | yes | ■ | □ |
| [ | ■ | □ | ■ | ■ | □ | ■ | yes | ■ | □ |
□ feature not included; ■ feature included
Figure 2. Elements of a Computational Causal Behaviour Model (CCBM).
Figure 3. Example rule for the execution of the action “move”.
Figure 4. Example definition of types and their concrete objects for the cooking problem.
Figure 5. DBN structure of a CCBM model. Adapted from [12].
Figure 6. The SPHERE house at the University of Bristol and the kitchen setup.
Types of meal and length of execution sequence in a dataset. “Number of Actions” gives the discrete actions required to describe the sequence (i.e., it gives the number of actions executed during the task). “Time” gives the duration of the recording in time steps. Time steps were calculated using a sliding window over the data, which was originally in milliseconds (see Section 4.2). “Meal” gives the eventual result of the food preparation.
| Dataset | # Actions | Time | Meal |
|---|---|---|---|
| D1 | 153 | 6502 | pasta (healthy), coffee (unhealthy), tea (healthy) |
| D2 | 13 | 602 | pasta (healthy) |
| D3 | 18 | 259 | salad (healthy) |
| D4 | 112 | 3348 | chicken (healthy) |
| D5 | 45 | 549 | toast (unhealthy), coffee (unhealthy) |
| D6 | 8 | 48 | juice (healthy) |
| D7 | 56 | 805 | toast (unhealthy) |
| D8 | 21 | 1105 | potato (healthy) |
| D9 | 29 | 700 | rice (healthy) |
| D10 | 61 | 613 | toast (unhealthy), water (healthy), tea (healthy) |
| D11 | 85 | 4398 | cookies (unhealthy) |
| D12 | 199 | 3084 | ready meal (unhealthy), pasta (healthy) |
| D13 | 21 | 865 | pasta (healthy) |
| D14 | 40 | 1754 | salad (healthy) |
| D15 | 72 | 1247 | pasta (healthy) |
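The conversion from millisecond-resolution recordings into the discrete time steps reported above can be sketched as follows. This is a minimal illustration only: the 100 ms window length and the averaging of values per window are assumptions for demonstration, while the actual windowing procedure is defined in the paper's Section 4.2.

```python
def to_time_steps(samples, window_ms=100):
    """Group (timestamp_ms, value) samples into fixed-length windows,
    returning one averaged value per time step (None for empty windows).

    The 100 ms default window is an illustrative assumption."""
    if not samples:
        return []
    start = samples[0][0]
    buckets = {}
    for t, v in samples:
        # assign each sample to the window (time step) it falls into
        buckets.setdefault((t - start) // window_ms, []).append(v)
    return [sum(vs) / len(vs) if (vs := buckets.get(i)) else None
            for i in range(max(buckets) + 1)]
```

For example, three readings at 0, 50, and 120 ms collapse into two time steps, which is how a recording originally in milliseconds yields the much smaller "Time" counts in the table.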
The action schemas for the ontology.
Object sets in the ontology.
| Object Set Members |
|---|
| chicken, coffee, cookies, juice, pasta, potato, readymeal, rice, salad, snack, tea, toast, water, other |
| ingredients, tools |
| kitchen, study |
Figure 7. The relevant elements in the environment represented as a hierarchy. Rectangles show objects; ellipses describe the object types; and arrows indicate the hierarchy or “is-a” relation (the arrow points to the parent class). Figure adapted from [38].
Excerpt of the annotation for run D1. Time here is given in milliseconds.
| Time | Label |
|---|---|
| 1 | (unknown) |
| 3401 | (move study kitchen) |
| 7601 | (unknown) |
| 10,401 | (prepare coffee) |
| 31,101 | (unknown) |
| 34,901 | (clean) |
| 47,301 | (unknown) |
| 52,001 | (get tools pasta) |
| 68,001 | (get ingredients pasta) |
| 86,301 | (prepare pasta) |
| 202,751 | (get tools pasta) |
| 221,851 | (get ingredients pasta) |
| 228,001 | (prepare pasta) |
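A sparse annotation like the excerpt above records only the change points in milliseconds. For evaluation it can be expanded into one label per discrete time step; a minimal sketch, assuming each label holds until the next change point and an illustrative 100 ms step size:

```python
def expand_annotation(changes, total_ms, step_ms=100):
    """Expand [(change_time_ms, label), ...] into one label per time step.

    Each label is assumed to hold until the next change point; the
    100 ms step size is an illustrative choice."""
    labels = []
    for t in range(0, total_ms, step_ms):
        current = changes[0][1]
        for change_time, label in changes:
            # keep the most recent label at or before time t
            if change_time <= t:
                current = label
        labels.append(current)
    return labels
```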
Parameters for the different models.
| Parameters | General Model | Specialised Models |
|---|---|---|
| Action classes | 8 | 8 |
| Ground actions | 92 | 10–28 |
| States | 450,144 | 40–1288 |
| Valid plans | 21,889,393 | 162–15,689 |
Figure 8. Frequency of the durations of some actions in the dataset.
Figure 9. The HMM used for activity recognition. Each state represents an action class. Thicker lines indicate higher transition probabilities.
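Decoding with an HMM whose hidden states are action classes can be illustrated with a minimal Viterbi sketch. The two-state transition and emission values in the usage example are made up for illustration and are not the paper's learned parameters:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence for observation indices `obs`.

    pi: initial state probabilities (N,), A: transition matrix (N, N),
    B: emission matrix (N, M)."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))          # best path score ending in each state
    psi = np.zeros((T, N), dtype=int) # best predecessor per state
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A  # scores[i, j]: from i to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    # backtrack from the best final state
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```

With illustrative parameters `pi = [0.6, 0.4]`, `A = [[0.7, 0.3], [0.4, 0.6]]`, `B = [[0.9, 0.1], [0.2, 0.8]]`, the observation sequence `[0, 0, 1]` decodes to the state sequence `[0, 0, 1]`.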
Figure 10. Mean accuracy with and without a given feature: (Left) the accuracy for all feature combinations without the camera features and using the first run (D1) for training and the rest for testing; and (Right) the accuracy of all feature combinations including the camera features and using leave-one-out cross validation.
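An exhaustive search over sensor combinations, as behind the accuracy tables below, can be sketched as follows. Here `evaluate` is a hypothetical scoring callback that stands in for training and testing a classifier on a given sensor subset:

```python
from itertools import combinations

def rank_subsets(sensors, evaluate, min_size=1):
    """Score every subset of `sensors` with `evaluate` (a placeholder
    for classifier training/testing) and rank best-first by accuracy."""
    scored = [(evaluate(c), c)
              for r in range(min_size, len(sensors) + 1)
              for c in combinations(sensors, r)]
    return sorted(scored, reverse=True)
```

The worst and best entries of the ranking then correspond to the "10 worst" and "10 best" rows reported in the tables.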
Accuracies for the 10 worst and 10 best sensor combinations without the camera features.
10 worst combinations:
| Sensor Combination | Accuracy |
|---|---|
| fridge, drawer middle, drawer bottom, humidity, movement | 0.2688 |
| fridge, drawer middle, drawer bottom, humidity, movement, water cold | 0.2691 |
| fridge, drawer bottom, humidity, movement, water cold | 0.2692 |
| fridge, drawer bottom, humidity, movement | 0.2692 |
| fridge, cupboard top left, humidity, movement | 0.2694 |
| fridge, cupboard top left, drawer middle, humidity, movement | 0.2694 |
| fridge, humidity, movement, water cold | 0.2695 |
| fridge, drawer middle, humidity, movement, water cold | 0.2695 |
| fridge, cupboard sink, humidity, movement, water cold | 0.2695 |
| fridge, drawer middle, humidity, movement | 0.2695 |
10 best combinations:
| Sensor Combination | Accuracy |
|---|---|
| drawer bottom, cupboard sink, water hot, water cold | 0.4307 |
| drawer middle, drawer bottom, water hot, water cold | 0.4308 |
| cupboard top left, drawer middle, drawer bottom, water hot, water cold | 0.4308 |
| drawer middle, drawer bottom, cupboard top right, water hot, water cold | 0.4308 |
| fridge, drawer bottom, movement, water hot, water cold | 0.4325 |
| fridge, movement, water hot, water cold | 0.4330 |
| fridge, cupboard top left, movement, water hot, water cold | 0.4330 |
| fridge, drawer middle, movement, water hot, water cold | 0.4330 |
| fridge, cupboard sink, movement, water hot, water cold | 0.4330 |
| fridge, cupboard top right, movement, water hot, water cold | 0.4332 |
Accuracies for the 10 worst and 10 best sensor combinations with the camera features.
10 worst combinations:
| Sensor Combination | Accuracy |
|---|---|
| fridge, cupboard top left, drawer bottom, cupboard top right, humidity, xCoord | 0.2199 |
| fridge, cupboard top left, drawer bottom, cupboard sink, humidity, xCoord | 0.2199 |
| fridge, cupboard top left, drawer middle, cupboard sink, humidity, movement, xCoord | 0.2194 |
| fridge, cupboard top left, drawer middle, humidity, movement, xCoord | 0.2189 |
| fridge, cupboard top left, cupboard sink, humidity, movement, xCoord | 0.2170 |
| fridge, cupboard top left, drawer middle, cupboard top right, cupboard sink, humidity, xCoord | 0.2167 |
| fridge, cupboard top left, drawer middle, cupboard top right, humidity, xCoord | 0.2162 |
| fridge, cupboard top left, drawer middle, cupboard sink, humidity, xCoord | 0.2162 |
| fridge, cupboard top left, drawer middle, humidity, xCoord | 0.2158 |
| fridge, cupboard top left, cupboard top right, cupboard sink, humidity, xCoord | 0.2149 |
10 best combinations:
| Sensor Combination | Accuracy |
|---|---|
| kettle, cupboard top left, drawer bottom, temperature, movement | 0.4911 |
| kettle, cupboard top left, drawer bottom, cupboard top right, temperature, movement | 0.4911 |
| kettle, cupboard top left, drawer bottom, cupboard sink, temperature, movement | 0.4911 |
| kettle, cupboard top left, drawer bottom, cupboard top right, cupboard sink, temperature, movement | 0.4911 |
| kettle, cupboard top left, cupboard sink, temperature, movement | 0.4902 |
| kettle, cupboard top left, cupboard top right, cupboard sink, temperature, movement | 0.4901 |
| kettle, cupboard top left, drawer middle, drawer bottom, cupboard sink, temperature, movement | 0.4901 |
| kettle, cupboard top left, drawer middle, drawer bottom, cupboard top right, cupboard sink, temperature, movement | 0.4901 |
| kettle, drawer bottom, cupboard sink, temperature, movement | 0.4892 |
| kettle, drawer bottom, cupboard top right, cupboard sink, temperature, movement | 0.4892 |
Figure 11. Activity recognition results. OM-o refers to the optimistic observation model, OM-p to the pessimistic observation model, dt is the decision tree, hmmg is the general HMM, hmms is the specialised HMM, CCBM.s is the specialised CCBM, and CCBM.g is the general CCBM.
Figure 12. Multi-goal recognition results, meal goals. OM-o refers to the optimistic observation model, OM-p to the pessimistic observation model, HMM.u is the HMM with uninformed a priori goal probabilities, HMM.i is the HMM with informed a priori goal probabilities, CCBM.u is the CCBM with uninformed a priori goal probabilities, and CCBM.i is the CCBM with informed a priori goal probabilities.
Figure 13. Multi-goal recognition results, healthy/unhealthy goals. OM-o refers to the optimistic observation model, OM-p to the pessimistic observation model, HMM.u is the HMM with uninformed a priori goal probabilities, HMM.i is the HMM with informed a priori goal probabilities, CCBM.u is the CCBM with uninformed a priori goal probabilities, and CCBM.i is the CCBM with informed a priori goal probabilities.
Figure 14. Single-goal recognition results, meal goals. OM-o refers to the optimistic observation model, OM-p to the pessimistic observation model, HMM.u is the HMM with uninformed a priori goal probabilities, HMM.i is the HMM with informed a priori goal probabilities, CCBM.u is the CCBM with uninformed a priori goal probabilities, and CCBM.i is the CCBM with informed a priori goal probabilities.
Figure 15. Example of the relationship between camera data and performed activity. The x-axis position extracted from the depth camera is shown as black circles, while the annotated actions are shown as a solid red line.
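The role of the informed versus uninformed a priori goal probabilities in the goal recognition experiments can be illustrated with a minimal Bayesian update. The likelihood values in the usage example are made up for illustration:

```python
def goal_posterior(priors, likelihoods):
    """Posterior P(goal | observations) proportional to
    P(observations | goal) * P(goal), normalised over all goals."""
    joint = {g: priors[g] * likelihoods[g] for g in priors}
    z = sum(joint.values())
    return {g: p / z for g, p in joint.items()}
```

With a uniform (uninformed) prior, e.g. `{'pasta': 1/3, 'toast': 1/3, 'tea': 1/3}`, the posterior follows the observation likelihoods alone; an informed prior shifts probability mass toward goals that are a priori more frequent, which is the difference between the .u and .i model variants.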