
Specification of cross-modal source information in isolated kinematic displays of speech.

Lorin Lachs, David B Pisoni.

Abstract

Information about the acoustic properties of a talker's voice is available in optical displays of speech, and vice versa, as evidenced by perceivers' ability to match faces and voices based on vocal identity. The present investigation used point-light displays (PLDs) of visual speech and sinewave replicas of auditory speech in a cross-modal matching task to assess perceivers' ability to match faces and voices under conditions when only isolated kinematic information about vocal tract articulation was available. These stimuli were also used in a word recognition experiment under auditory-alone and audiovisual conditions. The results showed that isolated kinematic displays provide enough information to match the source of an utterance across sensory modalities. Furthermore, isolated kinematic displays can be integrated to yield better word recognition performance under audiovisual conditions than under auditory-alone conditions. The results are discussed in terms of their implications for describing the nature of speech information and current theories of speech perception and spoken word recognition.


Year:  2004        PMID: 15296010      PMCID: PMC3429945          DOI: 10.1121/1.1757454

Source DB:  PubMed          Journal:  J Acoust Soc Am        ISSN: 0001-4966            Impact factor:   1.840


References:  27 in total

1.  Chimaeric sounds reveal dichotomies in auditory perception.

Authors:  Zachary M Smith; Bertrand Delgutte; Andrew J Oxenham
Journal:  Nature       Date:  2002-03-07       Impact factor: 49.962

2.  Cross-modal source information and spoken word recognition.

Authors:  Lorin Lachs; David B Pisoni
Journal:  J Exp Psychol Hum Percept Perform       Date:  2004-04       Impact factor: 3.332

3.  Hearing a face: cross-modal speaker matching using isolated visible speech.

Authors:  Lawrence D Rosenblum; Nicolas M Smith; Sarah M Nichols; Steven Hale; Joanne Lee
Journal:  Percept Psychophys       Date:  2006-01

4.  Listening with eye and hand: cross-modal contributions to speech perception.

Authors:  C A Fowler; D J Dekle
Journal:  J Exp Psychol Hum Percept Perform       Date:  1991-08       Impact factor: 3.332

5.  Crossmodal Source Identification in Speech Perception.

Authors:  Lorin Lachs; David B Pisoni
Journal:  Ecol Psychol       Date:  2004

6.  Multimodal perceptual organization of speech: Evidence from tone analogs of spoken utterances.

Authors:  Robert E Remez; Jennifer M Fellowes; David B Pisoni; Winston D Goh; Philip E Rubin
Journal:  Speech Commun       Date:  1998-10-01       Impact factor: 2.017

7.  Point-light facial displays enhance comprehension of speech in noise.

Authors:  L D Rosenblum; J A Johnson; H M Saldaña
Journal:  J Speech Hear Res       Date:  1996-12

8.  The motor theory of speech perception revised.

Authors:  A M Liberman; I G Mattingly
Journal:  Cognition       Date:  1985-10

9.  Visual speech information for face recognition.

Authors:  Lawrence D Rosenblum; Deborah A Yakel; Naser Baseer; Anjani Panchal; Brynn C Nodarse; Ryan P Niehus
Journal:  Percept Psychophys       Date:  2002-02

10.  Face recognition and lipreading. A neurological dissociation.

Authors:  R Campbell; T Landis; M Regard
Journal:  Brain       Date:  1986-06       Impact factor: 13.501

Cited by:  8 in total

1.  Crossmodal Source Identification in Speech Perception.

Authors:  Lorin Lachs; David B Pisoni
Journal:  Ecol Psychol       Date:  2004

2.  Audiovisual speech perception: A new approach and implications for clinical populations.

Authors:  Julia Irwin; Lori DiBlasi
Journal:  Lang Linguist Compass       Date:  2017-03-26

3.  Experience with a talker can transfer across modalities to facilitate lipreading.

Authors:  Kauyumari Sanchez; James W Dias; Lawrence D Rosenblum
Journal:  Atten Percept Psychophys       Date:  2013-10       Impact factor: 2.199

4.  Implicit multisensory associations influence voice recognition.

Authors:  Katharina von Kriegstein; Anne-Lise Giraud
Journal:  PLoS Biol       Date:  2006-10       Impact factor: 8.029

5.  Seeing to hear? Patterns of gaze to speaking faces in children with autism spectrum disorders.

Authors:  Julia R Irwin; Lawrence Brancazio
Journal:  Front Psychol       Date:  2014-05-08

6.  Matching novel face and voice identity using static and dynamic facial images.

Authors:  Harriet M J Smith; Andrew K Dunn; Thom Baguley; Paula C Stacey
Journal:  Atten Percept Psychophys       Date:  2016-04       Impact factor: 2.199

7.  Temporal voice areas exist in autism spectrum disorder but are dysfunctional for voice identity recognition.

Authors:  Stefanie Schelinski; Kamila Borowiak; Katharina von Kriegstein
Journal:  Soc Cogn Affect Neurosci       Date:  2016-06-30       Impact factor: 3.436

8.  Matching Unfamiliar Voices to Static and Dynamic Faces: No Evidence for a Dynamic Face Advantage in a Simultaneous Presentation Paradigm.

Authors:  Sujata M Huestegge
Journal:  Front Psychol       Date:  2019-08-23
