| Literature DB >> 28966930 |
Antoine J Shahin, Stanley Shen, Jess R Kerlin.
Abstract
We examined the relationship between tolerance for audiovisual onset asynchrony (AVOA) and the spectrotemporal fidelity of spoken words and the speaker's mouth movements. In two experiments that varied only in the temporal order of the sensory modalities, with visual speech leading (exp1) or lagging (exp2) acoustic speech, participants watched intact and blurred videos of a speaker uttering trisyllabic words and nonwords that were noise vocoded with 4, 8, 16, and 32 channels. They judged whether the speaker's mouth movements and the speech sounds were in-sync or out-of-sync. Individuals perceived synchrony (tolerated AVOA) on more trials when the acoustic speech was more speech-like (8 channels and higher vs. 4 channels), and when visual speech was intact rather than blurred (exp1 only). These findings suggest that enhanced spectrotemporal fidelity of the audiovisual (AV) signal prompts the brain to widen the window of integration, promoting the fusion of temporally distant AV percepts.
Keywords: Audiovisual integration; Audiovisual onset asynchrony; Degraded speech; Spectrotemporal fidelity
Year: 2017 PMID: 28966930 PMCID: PMC5617130 DOI: 10.1080/23273798.2017.1283428
Source DB: PubMed Journal: Lang Cogn Neurosci ISSN: 2327-3798 Impact factor: 2.331