
Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.

Victor Vergnieux, Marc J-M Macé, Christophe Jouffrais.

Abstract

Visual neuroprostheses are still limited, and simulated prosthetic vision (SPV) is used to evaluate the potential functionality of forthcoming implants. SPV has been used to evaluate the minimum requirements on visual neuroprosthesis characteristics needed to restore various functions such as reading, object and face recognition, and object grasping. Some of these studies focused on obstacle avoidance, but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current electrode arrays is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low-resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when the visual input was processed with various computer vision algorithms to enhance the rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array, each electrode displaying the average brightness of the underlying pixels. In the first rendering strategy, the viewing distance was limited to 3, 6, or 9 m. In the second strategy, the rendering was based not on the brightness of the image pixels but on the distance between the user and the elements in the field of view. In the last strategy, only the edges of the environment were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved cognitive mapping of the unknown environment.
These results show that low-resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate information regarding the environment.
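The control rendering described in the abstract (resizing the image onto the 15 × 18 electrode array according to average pixel brightness) amounts to block-averaged downsampling. A minimal sketch in Python with NumPy, assuming a grayscale input image (the function name and the edge-cropping behavior are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def control_rendering(image, rows=15, cols=18):
    """Downsample a grayscale image to the electrode-array resolution
    by averaging pixel brightness within each electrode's block.
    Illustrative sketch of the control rendering, not the authors' code."""
    h, w = image.shape
    # Crop so the image divides evenly into rows x cols blocks
    # (assumption: excess border pixels are simply discarded).
    h_crop, w_crop = (h // rows) * rows, (w // cols) * cols
    img = image[:h_crop, :w_crop]
    # Reshape into (rows, block_h, cols, block_w) and average each block.
    blocks = img.reshape(rows, h_crop // rows, cols, w_crop // cols)
    return blocks.mean(axis=(1, 3))
```

Each output value would then drive one electrode's stimulation level; the distance-based and wireframe strategies replace the brightness input with depth or edge maps before this same downsampling step.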
© 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

Keywords:  Blind; Computer vision; Navigation; Retinal implant; Spatial cognition; Visual neuroprostheses; Wayfinding

Year:  2017        PMID: 28321887     DOI: 10.1111/aor.12868

Source DB:  PubMed          Journal:  Artif Organs        ISSN: 0160-564X            Impact factor:   3.094


  5 in total

1.  Perceptual brightness scales in a White's effect stimulus are not captured by multiscale spatial filtering models of brightness perception.

Authors:  Joris Vincent; Technische Universität Berlin, Germany
Journal:  J Vis       Date:  2022-02-01       Impact factor: 2.240

2.  Beyond the Cane: Describing Urban Scenes to Blind People for Mobility Tasks.

Authors:  Karst M P Hoogsteen; Sarit Szpiro; Gabriel Kreiman; Eli Peli
Journal:  ACM Trans Access Comput       Date:  2022-08-19

3.  End-to-end optimization of prosthetic vision.

Authors:  Jaap de Ruyter van Steveninck; Umut Güçlü; Richard van Wezel; Marcel van Gerven
Journal:  J Vis       Date:  2022-02-01       Impact factor: 2.004

4.  Semantic and structural image segmentation for prosthetic vision.

Authors:  Melani Sanchez-Garcia; Ruben Martinez-Cantin; Jose J Guerrero
Journal:  PLoS One       Date:  2020-01-29       Impact factor: 3.240

5.  Real-world indoor mobility with simulated prosthetic vision: The benefits and feasibility of contour-based scene simplification at different phosphene resolutions.

Authors:  Jaap de Ruyter van Steveninck; Tom van Gestel; Paula Koenders; Guus van der Ham; Floris Vereecken; Umut Güçlü; Marcel van Gerven; Yagmur Güçlütürk; Richard van Wezel
Journal:  J Vis       Date:  2022-02-01       Impact factor: 2.240


Coyote Bioscience (Beijing) Co., Ltd. © 2022-2023.