Jeanne A Guiraud, Przemyslaw Tomalski, Elena Kushnerenko, Helena Ribeiro, Kim Davies, Tony Charman, Mayada Elsabbagh, Mark H Johnson.
Abstract
The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, one articulating /ba/ and the other /ga/, with one face congruent with the syllable sound presented simultaneously and the other face incongruent. This method was successful in showing that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/–audio /ba/ and the congruent visual /ba/–audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/–audio /ga/ display than in the congruent visual /ga/–audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated-measures ANOVA, displays × fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated-measures ANOVA, displays × conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated-measures ANOVA, displays × conditions × low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.
Year: 2012 PMID: 22615768 PMCID: PMC3352915 DOI: 10.1371/journal.pone.0036428
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
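The key statistic in the abstract is the displays × fusion/mismatch conditions interaction from a 2 × 2 within-subject (repeated-measures) ANOVA on mouth looking times. The sketch below illustrates how that interaction contrast is computed per infant; the looking times are hypothetical placeholder values, not the study's data, and for a single-df 2 × 2 within design the interaction F(1, n−1) equals the square of a one-sample t on the per-infant contrast scores.

```python
# Sketch of the 2 (display) x 2 (condition) within-subject interaction
# reported in the abstract. All looking times below are hypothetical,
# chosen only to illustrate the contrast structure.
from math import sqrt

# Per-infant mouth looking times (seconds) for each cell (hypothetical):
fusion_congruent     = [7.1, 6.8, 7.5, 7.0, 6.9, 7.3]  # visual /ba/ - audio /ba/
fusion_incongruent   = [7.0, 6.9, 7.4, 7.1, 6.8, 7.2]  # visual /ga/ - audio /ba/ (fusible)
mismatch_congruent   = [6.5, 6.3, 6.8, 6.4, 6.6, 6.7]  # visual /ga/ - audio /ga/
mismatch_incongruent = [7.9, 7.6, 8.1, 7.8, 7.7, 8.0]  # visual /ba/ - audio /ga/ (non-fusible)

def interaction_scores(fc, fi, mc, mi):
    """Per-infant interaction contrast: (mismatch diff) - (fusion diff)."""
    return [(d - c) - (b - a) for a, b, c, d in zip(fc, fi, mc, mi)]

def one_sample_t(xs):
    """t statistic of the scores against 0; F(1, n-1) = t**2 here."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m / sqrt(var / n)

scores = interaction_scores(fusion_congruent, fusion_incongruent,
                            mismatch_congruent, mismatch_incongruent)
t = one_sample_t(scores)
print(t ** 2)  # F(1, n-1) for the displays x conditions interaction
```

A positive contrast score means the infant looked longer at the incongruent than the congruent mouth specifically in the mismatch condition, which is the pattern the abstract reports for low-risk infants.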
Figure 1. Stimuli and Areas of Interest (AOIs) in a mismatch display.
The face on the left side is incongruent with the sound (/ga/): it articulates /ba/, which is known to create a non-fused percept ‘bga’ in children and adults. The face on the right side is congruent with the sound (visual /ga/–audio /ga/).
Average looking times to faces, eyes, and mouths across displays in infants at low and high risk.
| Groups | Faces | Eyes | Mouths |
| --- | --- | --- | --- |
| Low-risk infants | 10.5 s (±1.4 s) | 1.4 s (±2.7 s) | 7.2 s (±3.5 s) |
| High-risk infants | 9.5 s (±2.4 s) | 0.9 s (±1 s) | 6.5 s (±2.5 s) |
Figure 2. Looking time of infants at low versus high risk for autism in a McGurk paradigm.
Low-risk infants looked as long at the incongruent mouth as at the congruent mouth in the fusion condition, demonstrating that they can integrate AV speech information, and they looked longer at the incongruent mouth than at the congruent mouth in the mismatch condition, indicating that they perceive incongruent, non-fusible AV speech information. In contrast, high-risk infants showed the same looking behaviour in both the mismatch and fusion conditions, reflecting poor AV integration and poor detection of incongruence between auditory and visual information. Error bars are standard errors of the mean. *p<0.05.