Vision-Based Mobile Indoor Assistive Navigation Aid for Blind People
Bing Li, J Pablo Muñoz, Xuejian Rong, Qingtian Chen, Jizhong Xiao, Yingli Tian, Aries Arditi, Mohammed Yousuf.
Abstract
This paper presents a new holistic vision-based mobile assistive navigation system to help blind and visually impaired people with indoor independent travel. The system detects dynamic obstacles and adjusts path planning in real time to improve navigation safety. First, we develop an indoor map editor to parse geometric information from architectural models and generate a semantic map consisting of a global 2D traversable grid map layer and context-aware layers. By leveraging the visual positioning service (VPS) within the Google Tango device, we design a map alignment algorithm to bridge the visual area description file (ADF) and semantic map to achieve semantic localization. Using the on-board RGB-D camera, we develop an efficient obstacle detection and avoidance approach based on a time-stamped map Kalman filter (TSM-KF) algorithm. A multi-modal human-machine interface (HMI) is designed with speech-audio interaction and robust haptic interaction through an electronic SmartCane. Finally, field experiments by blindfolded and blind subjects demonstrate that the proposed system provides an effective tool to help blind individuals with indoor navigation and wayfinding.
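The abstract names a time-stamped map Kalman filter (TSM-KF) for obstacle tracking but does not detail it. The Python sketch below is a generic, hypothetical illustration of the idea of Kalman-filtering per-cell occupancy on a time-stamped grid and pruning stale cells; the class name TimeStampedMap and the noise parameters q, r, and max_age are assumptions for illustration, not the authors' implementation.

    # Sketch of Kalman-filtered occupancy on a time-stamped grid map,
    # loosely inspired by the TSM-KF idea in the abstract. All names and
    # parameter values are illustrative assumptions.
    import time

    class TimeStampedMap:
        def __init__(self, q=0.05, r=0.2, max_age=2.0):
            self.q = q              # process noise: how fast occupancy may change
            self.r = r              # measurement noise of the RGB-D observation
            self.max_age = max_age  # seconds before a stale cell is dropped
            self.cells = {}         # (ix, iy) -> (estimate, variance, stamp)

        def update(self, cell, z, stamp=None):
            """Fuse a new occupancy measurement z in [0, 1] for one grid cell."""
            stamp = time.time() if stamp is None else stamp
            x, p, _ = self.cells.get(cell, (0.5, 1.0, stamp))
            p += self.q                  # predict: variance grows between updates
            k = p / (p + self.r)         # Kalman gain
            x += k * (z - x)             # correct with the measurement
            p *= (1.0 - k)
            self.cells[cell] = (x, p, stamp)

        def occupied(self, now=None, threshold=0.6):
            """Return cells currently believed occupied, pruning stale ones."""
            now = time.time() if now is None else now
            self.cells = {c: v for c, v in self.cells.items()
                          if now - v[2] <= self.max_age}
            return [c for c, (x, _, _) in self.cells.items() if x >= threshold]

    tsm = TimeStampedMap()
    tsm.update((3, 4), 0.9)   # obstacle seen by the depth camera
    tsm.update((3, 4), 0.8)   # repeated detection raises confidence
    print(tsm.occupied())     # -> [(3, 4)]

The timestamp-based pruning is one plausible way a map can "forget" dynamic obstacles that have moved on, while the filter smooths noisy per-frame depth detections.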
Keywords: Google Tango device; indoor assistive navigation; blind and visually impaired people; obstacle avoidance; semantic maps
Year: 2018 | PMID: 30774566 | PMCID: PMC6371975 | DOI: 10.1109/TMC.2018.2842751
Source DB: PubMed | Journal: IEEE Trans Mob Comput | ISSN: 1536-1233 | Impact factor: 5.577