Joram J van Rheede1, Iain R Wilson1, Rose I Qian1, Susan M Downes2, Christopher Kennard3, Stephen L Hicks1. 1. Division of Clinical Neurology, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom. 2. Nuffield Laboratory of Ophthalmology, Nuffield Department of Clinical Neurosciences, University of Oxford, and Oxford Eye Hospital, Oxford University Hospitals National Health Service (NHS) Trust, Oxford, United Kingdom. 3. National Institute for Health Research (NIHR) Biomedical Research Centre, Oxford, United Kingdom.
Abstract
PURPOSE: Severe visual impairment can have a profound impact on personal independence through its effect on mobility. We investigated whether the mobility of people with vision low enough to be registered as blind could be improved by presenting the visual environment in a distance-based manner for easier detection of obstacles.

METHODS: We accomplished this by developing a pair of "residual vision glasses" (RVGs) that use a head-mounted depth camera and displays to present information about the distance of obstacles to the wearer as brightness, such that obstacles closer to the wearer are represented more brightly. We assessed the impact of the RVGs on the mobility performance of visually impaired participants as they completed a set of obstacle courses. Participant position was monitored continuously, which enabled us to capture the temporal dynamics of mobility performance. This allowed us to identify correlates of obstacle detection and hesitation in walking behavior, in addition to the more commonly used measures of trial completion time and number of collisions.

RESULTS: All participants were able to use the smart glasses to navigate the course, and mobility performance improved for those visually impaired participants with the worst prior mobility performance. However, walking speed was slower and hesitations increased with the altered visual representation.

CONCLUSIONS: A depth-based representation of the visual environment may offer low-vision patients improvements in independent mobility. Further work should explore whether practice can overcome the reductions in speed and increased hesitation observed in our trial.
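The core transformation described in the METHODS can be illustrated with a minimal sketch: a depth image from the head-mounted camera is mapped to a brightness image in which nearer obstacles appear brighter. The linear inverse mapping and the 4 m cutoff below are illustrative assumptions, not the authors' published parameters.

```python
import numpy as np

def depth_to_brightness(depth_m, max_range_m=4.0):
    """Map a depth image (in metres) to an 8-bit brightness image where
    nearer obstacles appear brighter, in the spirit of the RVG display.

    Pixels beyond max_range_m, or with invalid (non-positive) depth,
    are rendered dark. The linear mapping is an assumption for
    illustration only.
    """
    depth = np.asarray(depth_m, dtype=float)
    valid = (depth > 0) & (depth <= max_range_m)
    brightness = np.zeros_like(depth)
    # Linear inverse mapping: 0 m -> 255 (brightest), max_range_m -> 0.
    brightness[valid] = 255.0 * (1.0 - depth[valid] / max_range_m)
    return brightness.astype(np.uint8)

# Example 2x2 depth frame: near obstacle, mid-range obstacle,
# obstacle at the range limit, and an invalid depth reading.
frame = np.array([[0.5, 2.0],
                  [4.0, -1.0]])
out = depth_to_brightness(frame)
```

In this sketch the 0.5 m pixel maps to a high brightness value while the pixel at the range limit and the invalid reading both map to zero, mirroring the paper's description that closer obstacles are represented more brightly.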