| Literature DB >> 15053696 |
Abstract
In a cross-modal matching task, participants were asked to match visual and auditory displays of speech based on the identity of the speaker. The present investigation used this task with acoustically transformed speech to examine the properties of sound that can convey cross-modal information. Word recognition performance was also measured under the same transformations. The authors found that cross-modal matching was possible only under transformations that preserved the relative spectral and temporal patterns of formant frequencies. In addition, cross-modal matching was possible only under the same conditions that yielded robust word recognition performance. The results are consistent with the hypothesis that acoustic and optical displays of speech simultaneously carry articulatory information about both the underlying linguistic message and indexical properties of the talker. ((c) 2004 APA, all rights reserved)
Year: 2004 PMID: 15053696 PMCID: PMC3432944 DOI: 10.1037/0096-1523.30.2.378
Source DB: PubMed Journal: J Exp Psychol Hum Percept Perform ISSN: 0096-1523 Impact factor: 3.332