| Literature DB >> 27212940 |
Enrique Dorronzoro Zubiete¹, Keigo Nakahata¹, Nevrez Imamoglu², Masashi Sekine³, Guanghao Sun⁴, Isabel Gomez⁵, Wenwei Yu¹.
Abstract
The aging of the population demands more services in the healthcare domain. Mobile robots have been shown to be a potential solution for home biomonitoring of the elderly. In our previous studies, a mobile robot system able to track a subject and identify their daily living activities was developed. However, the system had not been tested in any home living scenario. In this study we performed a series of experiments to investigate the activity recognition accuracy of the mobile robot in a home living scenario. The daily activities tested in the evaluation experiment include watching TV and sleeping. A dataset recorded by a distributed network of distance-measuring sensors was used as a reference for the activity recognition results. The accuracy was shown not to be consistent across activities; that is, the mobile robot could achieve a high success rate for some activities but a poor one for others. The observation position of the mobile robot and the subject's surroundings were found to have a high impact on recognition accuracy, owing to the variability of home living daily activities and their transitional processes. The possibility of improving recognition accuracy was also demonstrated.
Year: 2016 PMID: 27212940 PMCID: PMC4860229 DOI: 10.1155/2016/9845816
Source DB: PubMed Journal: Comput Intell Neurosci
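No source code accompanies this record; the sketch below is a minimal Python illustration (with hypothetical string labels, not data from the study) of the frame-wise evaluation described in the abstract: the activity label the robot reports for each frame is compared against the reference label from the distance-measuring sensor network, and accuracy is the percentage of matched frames.

```python
from typing import Sequence

def frame_accuracy(robot: Sequence[str], reference: Sequence[str]) -> tuple[int, int, float]:
    """Count frames where the robot's activity label matches the
    reference label derived from the distance-measuring sensor network."""
    if len(robot) != len(reference):
        raise ValueError("label sequences must cover the same frames")
    matched = sum(r == g for r, g in zip(robot, reference))
    return matched, len(robot), 100.0 * matched / len(robot)

# Toy usage with invented labels:
print(frame_accuracy(["sitting", "sitting", "standing"], ["sitting"] * 3))
# -> (2, 3, 66.66...)
```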
Figure 1: Home biomonitoring robot system.
Figure 2: Smart sensor. The sensor is connected to a TelosB mote, which provides the signal conditioning as well as the services and the transmission technology defined in sections 0 and 5 of the IEEE 1451 standards, respectively.
Figure 3: Layout of the two rooms for the daily living scenario. Red dots show the positions of the distance-measuring sensors.
Figure 4: Planned situations and sites. The numbers represent the order and IDs of the scenes. The basic activity contained in each situation is presented in Table 1.
Table 1: Planned scenes and basic activities. Scene indicates the action performed by the subject, while the basic activity is the activity, included in the scene, that the robot identifies.
| ID | Scene | Basic activity | Timeline |
|---|---|---|---|
| 1 | Returning home | Walking | 00:00 |
| 2 | Washing hands | Bending | 00:00–00:01 (1 min) |
| 3 | Watching TV | Sitting | 00:01–00:15 (15 min) |
| 4 | Having a drink | Standing and bending | 00:15–00:16 (1 min) |
| 5 | Reading the newspaper | Sitting | 00:16–00:30 (15 min) |
| 6 | Reading a book | Sitting | 00:30–00:40 (10 min) |
| 7 | Picking something from the shelf | Bending | 00:40–00:42 (2 min) |
| 8 | Stepping | Walking | 00:42–00:50 (8 min) |
| 9 | Sleeping | Lying down | 00:50–01:00 (10 min) |
Summary of the activity recognition results.
| Summary | Totals |
|---|---|
| Frame | 43775 |
| Matched frame | 33773 |
| Accuracy (%) | 77.15 |
Recognition accuracy grouped by activity.
| | Standing | Walking | Bending | Sitting | Lying down |
|---|---|---|---|---|---|
| Frame | 1610 | 4362 | 995 | 28052 | 8000 |
| Matched frame | 740 | 1655 | 448 | 22792 | 7394 |
| Accuracy (%) | 45.96 | 37.94 | 45.02 | 81.24 | 92.42 |
| Standard deviation | 31.77 | 16.2 | 20.1 | 9.89 | 0 |
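The accuracy figures in the two tables above follow directly from matched frames divided by total frames; the short Python sketch below reproduces them from the reported counts. The paper appears to truncate, rather than round, to two decimals, so a recomputed value can differ by 0.01 (e.g., Sitting: 81.25 vs. 81.24).

```python
# Matched and total frames from the tables above.
per_activity = {
    "Standing":   (740, 1610),
    "Walking":    (1655, 4362),
    "Bending":    (448, 995),
    "Sitting":    (22792, 28052),
    "Lying down": (7394, 8000),
}

for activity, (matched, total) in per_activity.items():
    print(f"{activity}: {100 * matched / total:.2f}%")

# Overall accuracy from the summary table: 33773 / 43775.
print(f"Overall: {100 * 33773 / 43775:.2f}%")  # 77.15%
```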
Activity recognition of scene transition phases (1).
| Phase (activity) | ① → ② (Standing) | ① → ② (Walking) | ② (Bending) | ② → ③ (Standing) | ② → ③ (Walking) | ③ (Sitting) | ③ → ④ (Standing) | ③ → ④ (Walking) | ④ (Standing) | ④ (Bending) |
|---|---|---|---|---|---|---|---|---|---|---|
| Frame | 37 | 157 | 393 | 10 | 120 | 9872 | 38 | 173 | 80 | 477 |
| Matched frame | 7 | 86 | 224 | 0 | 65 | 6833 | 31 | 50 | 11 | 205 |
| Accuracy (%) | 18.91 | 54.77 | 56.99 | 0 | 54.16 | 69.21 | 81.57 | 28.90 | 13.75 | 42.97 |
Activity recognition of scene transition phases (2).
| Phase (activity) | ④ → ⑤ (Standing) | ④ → ⑤ (Walking) | ⑤ (Sitting) | ⑤ → ⑥ (Standing) | ⑤ → ⑥ (Walking) | ⑥ (Sitting) | ⑥ → ⑦ → ⑧ (Standing) | ⑥ → ⑦ → ⑧ (Walking) | ⑥ → ⑦ → ⑧ (Bending) | ⑥ → ⑦ → ⑧ (Bending) |
|---|---|---|---|---|---|---|---|---|---|---|
| Frame | 15 | 138 | 9691 | 400 | 266 | 8489 | 544 | 281 | 58 | 26 |
| Matched frame | 6 | 34 | 9056 | 306 | 162 | 6903 | 227 | 185 | 9 | 8 |
| Accuracy (%) | 40.00 | 24.63 | 93.44 | 76.50 | 60.90 | 81.31 | 41.72 | 65.83 | 15.51 | 30.76 |
Activity recognition of scene transition phases (3).
| Phase (activity) | ⑧ (Standing) | ⑧ (Walking) | ⑧ → ⑥ → ⑨ (Standing) | ⑧ → ⑥ → ⑨ (Walking) | ⑧ → ⑥ → ⑨ (Bending) | ⑧ → ⑥ → ⑨ (Bending) | ⑨ (Lying down) |
|---|---|---|---|---|---|---|---|
| Frame | 756 | 2631 | 486 | 596 | 16 | 25 | 8000 |
| Matched frame | 744 | 707 | 152 | 366 | 2 | 0 | 7394 |
| Accuracy (%) | 98.41 | 26.87 | 31.27 | 61.40 | 12.50 | 0 | 92.42 |
Figure 5: Distance-measuring sensor data for the 1-hour experiment. The scene IDs in the graph correspond to those in Table 1.
Activity recognition for the standing trial of test 2.
| Distance | 0.5 m | 1.0 m | 1.5 m | 2.0 m |
|---|---|---|---|---|
| Frame | 1400 | 1400 | 1400 | 1400 |
| Matched frame | 0 | 947 | 1354 | 416 |
| Accuracy (%) | 0.00 | 67.64 | 96.71 | 29.71 |
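The same arithmetic makes the effect of the observation distance explicit; this small sketch (data taken from the table above) recomputes the per-distance accuracy and selects the best observation distance, 1.5 m, consistent with the finding that the robot's observation position strongly affects recognition.

```python
# Matched/total frames per observation distance (standing trial, test 2).
by_distance = {0.5: (0, 1400), 1.0: (947, 1400), 1.5: (1354, 1400), 2.0: (416, 1400)}

accuracy = {d: 100 * m / n for d, (m, n) in by_distance.items()}
for d, a in accuracy.items():
    print(f"{d} m: {a:.2f}%")
print(f"Best observation distance: {max(accuracy, key=accuracy.get)} m")  # 1.5 m
```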
Figure 6: Global ratio of the activities performed by the subject and recognized by the robot.
Figure 7: The box and the wall are at the same depth as the person. Owing to limitations of the activity recognition algorithm, these two objects are included as part of the body contour.