Yusuke Kajiwara, Haruhiko Kimura.
Abstract
It is difficult for visually impaired people to move around indoors and outdoors. In 2018, the World Health Organization (WHO) reported that about 253 million people worldwide had moderate visual impairment in distance vision. Navigation systems that combine positioning and obstacle detection have been actively researched and developed. However, when these obstacle detection methods are used in high-traffic passages, their accuracy drops significantly because the many pedestrians cause an occlusion problem that hides the shape and color of obstacles. To solve this problem, we developed an application, "Follow me!", which recommends a safe route by applying machine learning to the gait and walking routes of many pedestrians captured by a smartphone's monocular camera. In our experiments, pedestrians walking in the same direction as the visually impaired person, oncoming pedestrians, and steps were identified with an average accuracy of 0.92 from the gait and walking routes acquired from the monocular camera images. Furthermore, safe routes recommended from these identification results guided visually impaired people to a safe route with 100% accuracy. In addition, by walking along the recommended route, visually impaired people avoided obstacles that required a detour, such as construction sites and signage.
Keywords: human flow; object identification; safe route recommendation; visually impaired
Year: 2019 PMID: 31817152 PMCID: PMC6960507 DOI: 10.3390/s19245343
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
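The gait features the classifier relies on (see Table 2) are speed statistics of the trunk along the camera's x- and z-axes. A minimal sketch of how such features could be computed, assuming trunk positions have already been extracted per frame (e.g., with OpenPose) at a known frame rate; the function name, frame rate, and toy trajectory are illustrative, not the paper's implementation:

```python
import numpy as np

def trunk_speed_features(trunk_xyz, fps=30.0):
    """Compute speed statistics of a trunk trajectory.

    trunk_xyz: (n_frames, 3) array of trunk coordinates per frame.
    Returns max/mean/min/std speeds along the x- and z-axes,
    mirroring the variable names reported in Table 2 (illustrative).
    """
    v = np.diff(trunk_xyz, axis=0) * fps  # per-axis speed between frames
    feats = {}
    for axis, name in ((0, "x"), (2, "z")):
        s = v[:, axis]
        feats[f"max_speed_{name}"] = float(np.max(s))
        feats[f"avg_speed_{name}"] = float(np.mean(s))
        feats[f"min_speed_{name}"] = float(np.min(s))
        feats[f"std_speed_{name}"] = float(np.std(s))
    return feats

# Toy trajectory: constant forward motion along z, no lateral motion.
traj = np.zeros((10, 3))
traj[:, 2] = np.arange(10) * 0.05  # 0.05 m per frame along z
f = trunk_speed_features(traj, fps=30.0)
print(f["avg_speed_z"])  # 0.05 m/frame * 30 fps = 1.5 m/s
```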
Figure 1. Overview of the proposed system "Follow me!".
Figure 2. Coordinate system of visually impaired people, smartphones, and pedestrians.
Figure 3. Joint parts, depth map, and skeleton acquired with OpenPose.
Figure 4. Recommending a safe route by a brute-force search based on the average score of the target area and surrounding areas.
Figure 5. Experimental environment for evaluating the accuracy of identification and recommendation.
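The brute-force search in Figure 4 evaluates every candidate area and scores it together with its surrounding areas. A minimal sketch under the assumption that the walkway is discretized into a grid of per-area safety scores (higher = safer); the grid layout and scoring values are assumptions, not the paper's data:

```python
import numpy as np

def recommend_area(scores):
    """Pick the area whose score, averaged with its in-bounds
    neighbours, is highest (brute-force over all areas).

    scores: 2D array of per-area safety scores (higher = safer).
    Returns the (row, col) of the recommended area.
    """
    h, w = scores.shape
    best, best_avg = None, -np.inf
    for r in range(h):
        for c in range(w):
            # Average over the target area and its 8-neighbourhood,
            # clipped at the grid boundary.
            r0, r1 = max(r - 1, 0), min(r + 2, h)
            c0, c1 = max(c - 1, 0), min(c + 2, w)
            avg = scores[r0:r1, c0:c1].mean()
            if avg > best_avg:
                best, best_avg = (r, c), avg
    return best

grid = np.array([[1.0, 0.2, 0.1],
                 [0.9, 0.8, 0.1],
                 [0.9, 0.7, 0.2]])
print(recommend_area(grid))  # (2, 0): left column is safest overall
```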
Table 1. Identification results for pedestrians walking in the same direction as the visually impaired person, oncoming pedestrians, and steps.

| Identification Results | Observed: Same | Observed: Oncoming | Observed: Step | Accuracy |
|---|---|---|---|---|
| Same | 181 | 0 | 3 | 0.93 |
| Oncoming | 7 | 45 | 0 | 1.00 |
| Step | 7 | 0 | 15 | 0.83 |
| Average | | | | 0.92 |
Table 2. Top 10 important variables for identification.
| Variable | Mean Decrease Gini |
|---|---|
| Maximum speed in the z-axis of trunk | 50.9 |
| Average speed in the z-axis of trunk | 47.9 |
| Minimum speed in the z-axis of trunk | 32.2 |
| Standard deviation speed in the z-axis of trunk | 15.7 |
| Standard deviation speed in the x-axis of trunk | 13.3 |
| Minimum speed in the x-axis of trunk | 12.5 |
| Maximum speed in the x-axis of trunk | 10.0 |
| Average of y-axis trunk coordinate | 5.46 |
| Average of x-axis eye relative coordinate | 5.13 |
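Mean Decrease Gini is the impurity-based importance measure of a random forest, so Table 2 implies the identification model is a random forest classifier. A minimal sketch of how such a ranking is obtained, using scikit-learn's `feature_importances_` (which is exactly the mean decrease in Gini impurity); the synthetic data and feature names are illustrative, not the paper's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: only the first feature carries class information,
# so it should dominate the importance ranking (illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)  # label depends on feature 0 only

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
for name, imp in zip(["speed_z", "speed_x", "pos_y"], clf.feature_importances_):
    print(f"{name}: {imp:.3f}")  # speed_z gets by far the largest share
```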
Table 3. Results of multiple comparison tests (p-values) on the variables in Table 2. Blank cells were not recoverable from the source record.

| Variable | Same vs. Oncoming | Same vs. Step | Oncoming vs. Step |
|---|---|---|---|
| Maximum speed in the z-axis of trunk | | | |
| Average speed in the z-axis of trunk | 0.02 | | |
| Minimum speed in the z-axis of trunk | 0.54 | | |
| Standard deviation speed in the z-axis of trunk | | | |
| Standard deviation speed in the x-axis of trunk | 0.33 | | |
| Minimum speed in the x-axis of trunk | 0.73 | | |
| Maximum speed in the x-axis of trunk | 0.20 | | |
| Average of y-axis trunk coordinate | 0.94 | | |
| Average of x-axis eye relative coordinate | 0.09 | 0.05 | 0.02 |
Figure 6. Identification results by area.
Figure 7. Identification results by area.