Paméla Trudeau-Fisette1,2, Takayuki Ito3,4, Lucie Ménard1,2.
Abstract
Multisensory integration (MSI) allows us to link sensory cues from multiple sources and plays a crucial role in speech development. However, it is not clear whether humans have an innate ability to integrate sensory information efficiently in speech, or whether this ability arises from repeated sensory input while the brain is maturing. We investigated the integration of auditory and somatosensory information in speech processing in a bimodal perceptual task in 15 young adults (age 19-30) and 14 children (age 5-6). The participants were asked to identify whether the perceived target was the sound /e/ or /ø/. Half of the stimuli were presented under a unimodal condition with only auditory input. The other stimuli were presented under a bimodal condition with both auditory input and somatosensory input consisting of facial skin stretches provided by a robotic device, mimicking the articulation of the vowel /e/. The results indicate that the effect of somatosensory information on sound categorization was larger in adults than in children. This suggests that integration of auditory and somatosensory information evolves throughout the course of development.
Keywords: adults; auditory and somatosensory feedback; categorization; children; maturation; multisensory integration; speech perception
Year: 2019 PMID: 31636554 PMCID: PMC6788346 DOI: 10.3389/fnhum.2019.00344
Source DB: PubMed Journal: Front Hum Neurosci ISSN: 1662-5161 Impact factor: 3.169
Formant and bandwidth values (in Hz) of the synthesized auditory stimuli used in the perceptual task.

| Stimulus | F1 | F2 | F3 | F4 | F5 | B1 | B2 | B3 | B4 | B5 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 364 | 1,922 | 2,509 | 3,550 | 4,000 | 48 | 55 | 60 | 50 | 100 |
| 2 | 364 | 1,892 | 2,469 | 3,500 | 4,000 | 48 | 55 | 60 | 50 | 100 |
| 3 | 364 | 1,862 | 2,429 | 3,450 | 4,000 | 48 | 55 | 60 | 50 | 100 |
| 4 | 364 | 1,832 | 2,389 | 3,400 | 4,000 | 48 | 55 | 60 | 50 | 100 |
| 5 | 364 | 1,802 | 2,349 | 3,350 | 4,000 | 48 | 55 | 60 | 50 | 100 |
| 6 | 364 | 1,772 | 2,309 | 3,300 | 4,000 | 48 | 55 | 60 | 50 | 100 |
| 7 | 364 | 1,742 | 2,269 | 3,250 | 4,000 | 48 | 55 | 60 | 50 | 100 |
| 8 | 364 | 1,712 | 2,229 | 3,200 | 4,000 | 48 | 55 | 60 | 50 | 100 |
| 9 | 364 | 1,682 | 2,189 | 3,150 | 4,000 | 48 | 55 | 60 | 50 | 100 |
| 10 | 364 | 1,652 | 2,149 | 3,100 | 4,000 | 48 | 55 | 60 | 50 | 100 |
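The table follows a simple linear pattern across the ten continuum steps: F1 and F5 (and all bandwidths) are held constant, while F2, F3, and F4 decrease by 30, 40, and 50 Hz per step, respectively. A minimal sketch of how such a continuum could be generated (illustrative only; `formant_continuum` is not from the authors' materials):

```python
# Illustrative reconstruction of the 10-step [e-ø] formant continuum.
# F1 and F5 are fixed; F2, F3, and F4 fall linearly by 30, 40, and 50 Hz
# per step, matching the values in the table above.
def formant_continuum(n_steps=10):
    stimuli = []
    for i in range(n_steps):
        stimuli.append({
            "F1": 364,            # constant across the continuum
            "F2": 1922 - 30 * i,  # 1,922 Hz down to 1,652 Hz
            "F3": 2509 - 40 * i,  # 2,509 Hz down to 2,149 Hz
            "F4": 3550 - 50 * i,  # 3,550 Hz down to 3,100 Hz
            "F5": 4000,           # constant across the continuum
        })
    return stimuli

continuum = formant_continuum()
print(continuum[0]["F2"], continuum[-1]["F2"])  # 1922 1652
```

Such parameter dictionaries would then be passed to a formant synthesizer (e.g. a Klatt-style synthesizer) together with the fixed bandwidth values B1-B5.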
Figure 1. Experimental setup for facial skin stretch perturbations (reproduced with permission from Ito and Ostry, 2010).
Figure 2. Percent identification of the vowel [e] for stimuli on the [e–ø] continuum, in both experimental conditions, for both groups. Error bars indicate standard errors.
Figure 3. Psychometric functions of labeling slope and 50% crossover boundary, in both experimental conditions, for both groups. Error bars indicate standard errors. *p < 0.05; **p < 0.01; ***p < 0.001.