Prosodic temporal alignment of co-speech gestures to speech facilitates referent resolution.

Alexandra Jesse, Elizabeth K. Johnson.

Abstract

Using a referent detection paradigm, we examined whether listeners can determine the object speakers are referring to by using the temporal alignment between the motion speakers impose on objects and their labeling utterances. Stimuli were created by videotaping speakers labeling a novel creature. Without being explicitly instructed to do so, speakers moved the creature during labeling. Trajectories of these motions were used to animate photographs of the creature. Participants in subsequent perception studies heard these labeling utterances while seeing side-by-side animations of two identical creatures in which only the target creature moved as originally intended by the speaker. Using the cross-modal temporal relationship between speech and referent motion, participants identified which creature the speaker was labeling, even when the labeling utterances were low-pass filtered to remove their semantic content or replaced by tone analogues. However, when the prosodic structure was eliminated by reversing the speech signal, participants no longer detected the referent as readily. These results provide strong support for a prosodic cross-modal alignment hypothesis. Speakers produce a perceptible link between the motion they impose upon a referent and the prosodic structure of their speech, and listeners readily use this prosodic cross-modal relationship to resolve referential ambiguity in word-learning situations.

Year:  2012        PMID: 22545598     DOI: 10.1037/a0027921

Source DB:  PubMed          Journal:  J Exp Psychol Hum Percept Perform        ISSN: 0096-1523            Impact factor:   3.332


Related articles: 4 in total

1.  Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

Authors:  David J Lewkowicz; Nicholas J Minar; Amy H Tift; Melissa Brandon
Journal:  J Exp Child Psychol       Date:  2014-11-11

2.  Cross-cultural evidence for multimodal motherese: Asian Indian mothers' adaptive use of synchronous words and gestures.

Authors:  Lakshmi Gogate; Madhavilatha Maganti; Lorraine E Bahrick
Journal:  J Exp Child Psychol       Date:  2014-10-04

3.  Tone of voice guides word learning in informative referential contexts.

Authors:  Eva Reinisch; Alexandra Jesse; Lynne C Nygaard
Journal:  Q J Exp Psychol (Hove)       Date:  2012-11-08       Impact factor: 2.143

4.  Beat gestures influence which speech sounds you hear.

Authors:  Hans Rutger Bosker; David Peeters
Journal:  Proc Biol Sci       Date:  2021-01-27       Impact factor: 5.530
