| Literature DB >> 27019586 |
Chloé Stoll^1, Richard Palluel-Germain^1, Vincent Fristot^2, Denis Pellerin^2, David Alleysson^1, Christian Graff^1.
Abstract
Background. Common manufactured depth sensors generate depth images of the kind humans normally obtain through their eyes and hands. Various designs converting spatial data into sound have recently been proposed, speculating on their applicability as sensory substitution devices (SSDs). Objective. We tested such a design as a travel aid in a navigation task. Methods. Our portable device (MeloSee) converted the 2D array of a depth image into a melody in real time. Distance from the sensor was translated into sound intensity, lateral position was stereo-modulated, and pitch represented verticality. Twenty-one blindfolded young adults navigated along four different paths during two sessions separated by a one-week interval. In some trials, a dual task required them to recognize a temporal pattern applied through a tactile vibrator while they navigated. Results. Participants learnt how to use the system both on new paths and on paths they had already navigated. Based on travel time and errors, performance improved from one week to the next. The dual task was achieved successfully, slightly affecting but not preventing effective navigation. Conclusions. The use of Kinect-type sensors to implement SSDs is promising, but it is restricted to indoor use and inefficient at too short a range.
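The abstract's sound mapping (distance → intensity, horizontal position → stereo pan, vertical position → pitch) can be sketched in code. This is an illustrative reconstruction, not the authors' implementation: the distance limits, frequency range, and function name are assumptions.

```python
import numpy as np

def depth_to_sound_params(depth, d_min=0.5, d_max=4.0,
                          f_low=200.0, f_high=2000.0):
    """Map a 2D depth image (meters) to per-pixel sound parameters.

    Hypothetical parameters: d_min/d_max bound the usable sensor range,
    f_low/f_high bound the pitch scale.
    """
    rows, cols = depth.shape
    # Closer objects -> louder: clip to the usable range, then invert
    # and normalize distance into an amplitude in [0, 1].
    d = np.clip(depth, d_min, d_max)
    amplitude = (d_max - d) / (d_max - d_min)
    # Vertical position -> pitch: top rows map to higher frequencies,
    # spaced on a logarithmic (musically even) scale.
    row_frac = 1.0 - np.arange(rows) / max(rows - 1, 1)
    freqs = f_low * (f_high / f_low) ** row_frac
    # Horizontal position -> stereo pan: 0.0 = full left, 1.0 = full right.
    pan = np.arange(cols) / max(cols - 1, 1)
    return amplitude, freqs, pan
```

Summing sinusoids at these frequencies, weighted by amplitude and panned per column, would yield the kind of real-time "melody" the abstract describes.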
Year: 2015 PMID: 27019586 PMCID: PMC4745448 DOI: 10.1155/2015/543492
Source DB: PubMed Journal: Appl Bionics Biomech ISSN: 1176-2322 Impact factor: 1.781
Figure 1. A blindfolded participant equipped with the set-up called MeloSee. The participant holds the distraction apparatus in her right hand.
Figure 2. Visual-auditory sensory substitution flowchart. The high-input information throughput is significantly reduced before being converted into sound.
Figure 3. Retinal depth encoder. (a) Grayscale depth map. (b) Activity computation. (c) RF activities: the closer the object, the lighter the disc (Figure 3(b)) and the louder the sound; the farther the object, the darker the disc and the softer the sound.
Figure 4. Paths used in the experiment. Path A (left) was 22 m long and path B (right) was 19.6 m long. Squares indicate either the start or finish, depending on the route direction.
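The "retinal" encoder of Figure 3 pools the depth map into a coarse grid of receptive-field (RF) activities before sonification. A minimal block-averaging sketch, assuming a regular grid (the grid size and function name are hypothetical, not from the paper):

```python
import numpy as np

def rf_activities(depth_gray, n_rows=8, n_cols=8):
    """Pool a grayscale depth map into an n_rows x n_cols grid of RF
    activities by block-averaging. Higher gray value = closer object =
    lighter disc = louder sound, per the Figure 3 convention."""
    h, w = depth_gray.shape
    # Trim edges so the image divides evenly into blocks.
    trimmed = depth_gray[:h - h % n_rows, :w - w % n_cols]
    # Average each block to get one activity per receptive field.
    return trimmed.reshape(n_rows, trimmed.shape[0] // n_rows,
                           n_cols, trimmed.shape[1] // n_cols).mean(axis=(1, 3))
```

This drastic reduction of the input throughput (Figure 2) is what makes real-time conversion to a small set of tones tractable.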
Experimental block design for the two sessions.
| Series | Session | 11 | 12 | 13 | 21 | 22 | 14 |
|---|---|---|---|---|---|---|---|
| #1 | Week 1 | A+ | A+ | A+ | B+ | **B+** | **A+** |
| #1 | Week 2 | A− | A− | A− | B− | **B−** | **A−** |
| #2 | Week 1 | A− | A− | A− | B− | **B−** | **A−** |
| #2 | Week 2 | A+ | A+ | A+ | B+ | **B+** | **A+** |
| #3 | Week 1 | B+ | B+ | B+ | A+ | **A+** | **B+** |
| #3 | Week 2 | B− | B− | B− | A− | **A−** | **B−** |
| #4 | Week 1 | B− | B− | B− | A− | **A−** | **B−** |
| #4 | Week 2 | B+ | B+ | B+ | A+ | **A+** | **B+** |
Each series (#1 to #4) started with a different route and used the four routes (A+ and A−, B+ and B−) in a different order. In each session, the second run of the second route (22) and the fourth run of the first route (14) were navigated while performing a distractive task (bold routes). For each participant, same-order trials between the two sessions were run on the same path (either A or B) but in the other direction (either + or −).
Figure 5. Navigation performance in the six experimental trials during the first and the second week sessions. Trials 11, 12, 13, and 14 were conducted on the same path and trials 21 and 22 on another one. Cognitive load was added in trials 22 and 14. Top panel (a): travel time in seconds (note that the Y-axis starts from 100 s); middle panel (b): number of contacts with the walls; bottom panel (c): number of U-turns. Error bars represent standard errors of the means. Stars indicate significant differences with ** P < 0.01; *** P < 0.001.