Colin Shewell, Chris Nugent, Mark Donnelly, Haiying Wang, Macarena Espinilla.
Abstract
The recent growth in the wearable sensor market has stimulated new opportunities within the domain of Ambient Assisted Living, providing unique methods of collecting occupant information. This approach leverages contemporary wearable technology, Google Glass, to facilitate a unique first-person view of the occupant's immediate environment. Machine vision techniques are employed to determine an occupant's location via environmental object detection. This method provides additional secondary benefits, such as first-person tracking within the environment and the absence of any required sensor interaction to determine occupant location. Object recognition is performed using the Oriented Features from Accelerated Segment Test and Rotated Binary Robust Independent Elementary Features (ORB) algorithm with a K-Nearest Neighbour matcher to match the saved key-points of the objects to the scene. To validate the approach, an experimental set-up consisting of three Activities of Daily Living (ADL) routines, each containing at least ten activities ranging from drinking water to making a meal, was considered. Ground truth was obtained from manually annotated video data, and the approach was previously benchmarked against a common method of indoor localisation that employs dense sensor placement, resulting in a recall, precision, and F-measure of 0.82, 0.96, and 0.88 respectively. This paper goes on to assess the viability of applying the solution to differing environments, both in terms of performance and through a qualitative analysis of the practical aspects of installing such a system within differing environments.
Keywords: Ageing in place; Ambient assisted living; Context-aware services; Machine vision; Wearable computing
Year: 2016 PMID: 28344913 PMCID: PMC5346438 DOI: 10.1007/s12553-016-0159-x
Source DB: PubMed Journal: Health Technol (Berl) ISSN: 2190-7196
Fig. 1 High-level overview of the machine vision system processing, consisting of a pre-processing section and a real-time processing section
Comparison of offloading vs. on-board processing. Mean over five runs, standard deviation shown in parentheses [10]
| Metric | On-Board | Offloading |
|---|---|---|
| Per-Image Energy | 12.84 (0.36) | 1.14 (0.11) |
| Per-Image Speed | 10.49 (0.23) | 1.28 (0.12) |
Fig. 2 Image (a) is an example of the markers used; Image (b) shows how the marker is applied to an object of interest, in this case a telephone
Full list of activities that were performed during the three routines
| Full Activity List | |
|---|---|
| 1.1 | Prepare/drink water |
| 1.2 | Prepare/drink tea |
| 1.3 | Prepare/drink hot chocolate |
| 1.4 | Prepare/drink milk |
| 2 | Make/receive phone call |
| 3.1 | Prepare/eat cold meal |
| 3.2 | Prepare/eat hot meal |
| 4 | Watch TV |
| 5 | Wash dishes |
Breakdown of activities that took place in each routine
| Routine 1 (R1) | Routine 2 (R2) | Routine 3 (R3) |
|---|---|---|
| 1.3 | 1.4 | 1.3 |
| 1.1 | 3.1 | 1.1 |
| 3.2 | 1.1 | 2 |
| 5 | 2 | 3.2 |
| 4 | 1.1 | 1.1 |
| 1.1 | 1.2 | 4 |
| 4 | 4 | 1.2 |
| 3.1 | 3.2 | 4 |
| 5 | 5 | 3.1 |
| 1.1 | 4 | 5 |
| N/A | 1.1 | 1.4 |
Results of Recall, Precision, and F-Measure for the machine vision based system – UU
| Routine | Total Events | Recall | Precision | F-Measure |
|---|---|---|---|---|
| R1 | 58 | 0.74 | 0.98 | 0.84 |
| R2 | 56 | 0.88 | 0.94 | 0.91 |
| R3 | 61 | 0.84 | 0.96 | 0.89 |
| Total | 175 | 0.82 | 0.96 | 0.88 |
Breakdown of machine vision sensor classification outcomes including TP, FN, and FP – UU
| Routine | Total Events | TP | FN | FP |
|---|---|---|---|---|
| R1 | 58 | 43 | 15 | 1 |
| R2 | 56 | 49 | 7 | 3 |
| R3 | 61 | 51 | 10 | 2 |
| Total | 175 | 143 | 32 | 6 |
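The recall, precision, and F-measure figures reported in the surrounding tables follow the standard definitions over true positives, false negatives, and false positives. A short sketch (mine, not the authors') reproduces the UU totals above (TP = 143, FN = 32, FP = 6):

```python
# Standard definitions: recall = TP/(TP+FN), precision = TP/(TP+FP),
# F-measure = harmonic mean of precision and recall.
def prf(tp, fn, fp):
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return round(recall, 2), round(precision, 2), round(f_measure, 2)

print(prf(143, 32, 6))  # UU totals → (0.82, 0.96, 0.88)
```

The same function applied to the Jaén totals (TP = 116, FN = 59, FP = 3) yields (0.66, 0.97, 0.79), matching the corresponding results table.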
Breakdown of machine vision sensor classification outcomes including TP, FN, and FP – Jaén
| Routine | Total Events | TP | FN | FP |
|---|---|---|---|---|
| R1 | 58 | 39 | 19 | 1 |
| R2 | 56 | 38 | 18 | 1 |
| R3 | 61 | 39 | 22 | 1 |
| Total | 175 | 116 | 59 | 3 |
Results of Recall, Precision, and F-Measure for the dense sensor based system
| Routine | Total Events | Recall | Precision | F-Measure |
|---|---|---|---|---|
| R1 | 58 | 1.00 | 1.00 | 1.00 |
| R2 | 56 | 0.93 | 1.00 | 0.96 |
| R3 | 61 | 0.90 | 1.00 | 0.95 |
| Total | 175 | 0.94 | 1.00 | 0.97 |
Breakdown of dense sensor classification outcomes including TP, FN, and FP
| Routine | Total Events | TP | FN | FP |
|---|---|---|---|---|
| R1 | 58 | 58 | 0 | 0 |
| R2 | 56 | 52 | 4 | 0 |
| R3 | 61 | 55 | 6 | 0 |
| Total | 175 | 165 | 10 | 0 |
Results of Recall, Precision, and F-Measure for the machine vision based system – Jaén
| Routine | Total Events | Recall | Precision | F-Measure |
|---|---|---|---|---|
| R1 | 58 | 0.67 | 0.98 | 0.80 |
| R2 | 56 | 0.68 | 0.97 | 0.80 |
| R3 | 61 | 0.64 | 0.98 | 0.77 |
| Total | 175 | 0.66 | 0.97 | 0.79 |
A breakdown of FN machine vision events – UU
| Cause | FN |
|---|---|
| Corrupt frame | 16 |
| Other | 8 |
| Unknown | 8 |
| Total | 32 |
A breakdown of FN machine vision events – Jaén
| Cause | FN |
|---|---|
| Unfocused | 47 |
| Other | 12 |
| Total | 59 |
A breakdown of costs with associated sensor platforms [1]
| System | Cost | Installation |
|---|---|---|
| Elk M1 | $5,000 | DIY |
| Lagotek | $5,000 | DIY |
| Control4 | $50,000 | DIY |
| X10 | $300 | DIY |
| Crestron | $60,000 | Professional |
| Control4 | $120,000 | Professional |
| EIB Instabus | $13,500 | Professional |