Yuta Ujiie, Tomohisa Asai, Akio Wakabayashi.
Abstract
The McGurk effect is a well-known demonstration of the influence of visual information on hearing in the context of speech perception. Some studies have reported that individuals with autism spectrum disorder (ASD) display abnormal processing of audio-visual speech integration, while other studies have shown contradictory results. Based on the dimensional model of ASD, we conducted two analog studies to examine the link between the level of autistic traits, as assessed by the Autism-Spectrum Quotient (AQ), and the McGurk effect among a sample of university students. In the first experiment, we found that autistic traits correlated negatively with fused (McGurk) responses. We then manipulated the presentation type of the visual stimuli to examine whether a local bias toward visual speech cues modulated individual differences in the McGurk effect. The presentation included four types of visual images: no image, mouth only, mouth and eyes, and full face. The results revealed that global facial information facilitates the influence of visual speech cues on McGurk stimuli. Moreover, individual differences between groups with low and high levels of autistic traits appeared when the full-face visual speech cue was presented with an incongruent voice. These results suggest that individual differences in the McGurk effect might be due to a weak ability to process global facial information in individuals with high levels of autistic traits.
Keywords: Autism Spectrum Quotient; autism spectrum disorder; individual differences; local bias; the McGurk effect
Year: 2015 PMID: 26175705 PMCID: PMC4484977 DOI: 10.3389/fpsyg.2015.00891
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
Means and standard deviations (SD) of response rates in all conditions.

| Response type | Mean | SD | r (AQ) |
| --- | --- | --- | --- |
| Correct response (/pa/, /ta/, /ka/) | 0.97 | 0.03 | 0.00 |
| Correct response (/pa/, /ta/, /ka/) | 0.99 | 0.03 | −0.10 |
| Audio response (/pa/) | 0.35 | 0.29 | 0.29 |
| Fused response (/ta/) | 0.61 | 0.28 | −0.30 |
| Visual response (/ka/) | 0.04 | 0.08 | −0.19 |

Also displayed are correlations (r) between AQ scores and response rates for Experiment 1. N = 46. AQ, Autism-Spectrum Quotient. Correct response (/pa/, /ta/, /ka/): mean correct responses for all stimuli in the audio-visual-congruent condition or the audio-only condition.
* p < 0.05,
** p < 0.01.
Figure 1. The response rate for each audio-visual-incongruent stimulus in the low-AQ group and the high-AQ group. Possible responses to the stimuli were the audio response (/pa/ response), the fused response (/ta/ response), and the visual response (/ka/ response).
Figure 2. Examples of the four types of visual stimuli used in Experiment 2. (A) No image (audio only). (B) Mouth-only presentation. (C) Eyes-and-mouth presentation. (D) Full-face image presentation. All of these images were presented with a congruent or incongruent voice in the experiment.
Figure 3. Mean accuracy for each condition across all participants. In the audio-visual-congruent condition, mean accuracy is the mean rate of correct responses for the three syllables. In the audio-visual-incongruent condition, mean accuracy is the rate of audio responses. Error bars indicate standard errors.
Figure 4. Mean accuracy for the audio-visual-congruent stimuli in the low-AQ and high-AQ groups. Mean accuracy is the mean rate of correct responses for the three syllables. Error bars indicate standard errors.
Figure 5. Mean audio response to the audio-visual-incongruent stimuli in the low-AQ and high-AQ groups. Error bars indicate standard errors.