| Literature DB >> 32770416 |
Tayfun Lloyd-Esenkaya, Vanessa Lloyd-Esenkaya, Eamonn O'Neill, Michael J Proulx.
Abstract
Sensory substitution techniques are perceptual and cognitive phenomena used to represent one sensory form with an alternative. Current applications of sensory substitution techniques are typically focused on the development of assistive technologies whereby visually impaired users can acquire visual information via auditory and tactile cross-modal feedback. Yet despite their evident success in scientific research and in furthering theory development in cognition, sensory substitution techniques have not gained widespread adoption within sensory-impaired populations. Here we argue that shifting the focus from assistive to mainstream applications may resolve some of the current issues regarding the use of sensory substitution devices to improve outcomes for those with disabilities. This article provides a tutorial guide on how to use research into multisensory processing and sensory substitution techniques from the cognitive sciences to design new inclusive cross-modal displays. A greater focus on developing inclusive mainstream applications could lead to innovative technologies that could be enjoyed by every person.
Keywords: Cross-modal cognition; Cross-modal displays; Design for all; Human-computer interactions; Inclusion; Inclusive design; Multisensory perception; Sensory substitution; Universal design
Year: 2020 PMID: 32770416 PMCID: PMC7415050 DOI: 10.1186/s41235-020-00240-7
Source DB: PubMed Journal: Cogn Res Princ Implic ISSN: 2365-7464
Definitions of key terms from cognitive neuroscience studies describing concepts relating to multisensory processing, as outlined by Stein et al. (2010)
| Term | Definition | Property the concept is attributed to |
|---|---|---|
| Unisensory | Any neural or behavioural process associated with a single sense | Neural or behavioural responses |
| Multisensory | Any neural or behavioural process associated with multiple senses | Neural or behavioural responses |
| Cross-modal display mode | A display with multiple display modes to channel sensory information of different origins | Display type |
| Multisensory integration | A specific multisensory process in which redundant sensory information is optimally integrated, producing a multisensory response significantly different from the corresponding unisensory responses | Neural or behavioural response |
| Multisensory combination | A specific multisensory process where complementary sensory information is combined to result in a more accurate estimate of the sensory source | Neural or behavioural response |
| Modality-specific | The sensory information from a source that results in a unisensory response | The sensory source |
| Cross-modal | The sensory information from a source that results in a multisensory response | The sensory source |
| Spatial coincidence | The spatial overlap between two or more cross-modal stimuli of the same sensory source | The sensory source |
| Temporal coincidence | The temporal overlap between two or more cross-modal stimuli of the same sensory source | The sensory source |
| Redundancy | The reliability of spatially and temporally overlapping cross-modal stimuli of the same sensory source | The sensory source, and the neural and behavioural response |
| Complementary | Sensory information from the same sensory source that is not spatially or temporally overlapping | The sensory source, and the neural and behavioural response |
| Inverse effectiveness | The principle that multisensory enhancement is greatest when the individual cross-modal cues are least reliable or effective on their own | Neural or behavioural response |
| Cross-modal correspondence | Associations between different sensory forms | Neural or behavioural response |
Fig. 1 Illustration of how unisensory and cross-modal display modes can engage different kinds of sensory processing
Fig. 2 Diagram showing how cross-modal associations can arise when two sensory forms overlap, in line with the metamodal hypothesis. For example, Sensory form A could be pitch (the perception of auditory frequency) and Sensory form B could be visuospatial elevation. The overlapping space would include forms that are “high”, such as a high-pitched sound and an object high in elevation (cf. Melara & O’Brien, 1987).
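The pitch–elevation correspondence described in Fig. 2 can be sketched as a simple mapping from spatial height to auditory frequency. The logarithmic scaling and the frequency range below are illustrative assumptions for the sketch, not parameters taken from the article:

```python
def elevation_to_pitch(elevation, f_low=220.0, f_high=3520.0):
    """Map a normalised visuospatial elevation (0 = bottom, 1 = top)
    to an auditory frequency in Hz, so that 'high' in space maps to
    'high' in pitch (a common cross-modal correspondence).

    Logarithmic interpolation is used so equal steps in elevation give
    equal musical intervals; this is an illustrative choice, not a
    mapping specified in the article.
    """
    if not 0.0 <= elevation <= 1.0:
        raise ValueError("elevation must be in [0, 1]")
    return f_low * (f_high / f_low) ** elevation

# An object at the bottom of the visual field maps to a low tone,
# one at the top to a tone four octaves higher.
print(elevation_to_pitch(0.0))  # 220.0 Hz
print(elevation_to_pitch(1.0))  # 3520.0 Hz
```

With this choice, the midpoint of the visual field lands exactly two octaves above the bottom (880 Hz), which is one way to make equal spatial steps sound perceptually equal.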
Table with key references for a variety of cognitive tasks successfully completed via visual-to-tactile or visual-to-auditory sensory substitution techniques
| Tasks | Sensory domains | References |
|---|---|---|
| Object recognition | Visual-to-auditory | Auvray, Hanneton, & O’Regan |
| Object recognition | Visual-to-tactile | Akita, Komatsu, Ito, Ono, & Okamoto |
| Localisation | Visual-to-auditory | Auvray et al. |
| Localisation | Visual-to-tactile | Akita et al. |
| Avoidance | Visual-to-auditory | Borenstein |
| Avoidance | Visual-to-tactile | Cardin, Thalmann, & Vexo |
| Navigation | Visual-to-auditory | Borenstein |
| Navigation | Visual-to-tactile | Chebat et al. |
| Emotion conveyance | Visual-to-auditory | Striem-Amit et al. |
Fig. 3 Image showing a user wearing an auditory-tactile cross-modal display prototype. The tactile information is created by BrainPort, a visual-to-tactile sensory substitution device. The auditory information is created by The vOICe, a visual-to-auditory sensory substitution device. The camera fixed within the box provides a real-time feed to BrainPort and The vOICe. As a result, the user can perceive the camera feed via auditory, tactile, and audio-tactile cross-modal feedback
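A visual-to-auditory substitution of the kind The vOICe performs can be sketched as a left-to-right column scan of the image, in which pixel row controls pitch and pixel brightness controls loudness. All parameter values below (sample rate, frequency range, scan duration) are illustrative assumptions for the sketch, not The vOICe's actual settings:

```python
import math

def image_to_sound(image, sample_rate=8000, scan_time=1.0,
                   f_low=500.0, f_high=5000.0):
    """Convert a greyscale image (a list of rows with values in [0, 1],
    row 0 = top) into a list of audio samples via a left-to-right
    column scan: each column becomes a short sound segment in which
    every bright pixel contributes a sinusoid whose frequency rises
    with visual elevation. Assumes the image has at least two rows."""
    n_rows, n_cols = len(image), len(image[0])
    samples_per_col = int(sample_rate * scan_time / n_cols)
    # One frequency per row, log-spaced so top rows get higher pitch.
    freqs = [f_low * (f_high / f_low) ** ((n_rows - 1 - r) / (n_rows - 1))
             for r in range(n_rows)]
    audio = []
    for c in range(n_cols):
        for n in range(samples_per_col):
            t = n / sample_rate
            s = sum(image[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(n_rows))
            audio.append(s / n_rows)  # normalise to roughly [-1, 1]
    return audio

# A 4x4 image with a single bright pixel in the top-left corner
# produces a high-pitched tone during the first quarter of the scan
# and silence afterwards.
img = [[0.0] * 4 for _ in range(4)]
img[0][0] = 1.0
audio = image_to_sound(img)
```

A listener hears time as horizontal position, pitch as vertical position, and loudness as brightness, which is the core idea behind column-scan visual-to-auditory sensory substitution.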