
Point-light facial displays enhance comprehension of speech in noise.

L D Rosenblum, J A Johnson, H M Saldaña.

Abstract

Seeing a talker's face can improve the perception of speech in noise, but little is known about which characteristics of the face are useful for enhancing the degraded signal. In this study, a point-light technique was employed to help isolate the salient kinematic aspects of a visible articulating face. In this technique, fluorescent dots were arranged on the lips, teeth, tongue, cheeks, and jaw of an actor. The actor was videotaped speaking in the dark, so that when shown to observers, only the moving dots were seen. To test whether these reduced images could contribute to the perception of degraded speech, noise-embedded sentences were dubbed with the point-light images at various signal-to-noise ratios. It was found that these images could significantly improve comprehension for adults with normal hearing and that the images became more effective as participants gained experience with the stimuli. These results have implications for uncovering salient visual speech information as well as for the development of telecommunication systems for listeners who are hearing impaired.
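The study presented sentences mixed with noise at various signal-to-noise ratios. The paper does not describe its mixing procedure, but as a minimal illustrative sketch, a target SNR in dB can be achieved by scaling the noise before mixing (all names here are hypothetical, not from the paper):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so the speech-to-noise power ratio equals snr_db, then mix.

    Uses SNR_dB = 10 * log10(P_speech / (gain**2 * P_noise)) to solve for gain.
    """
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + gain * noise

# Stand-in signals: a pure tone as "speech", white noise as the masker.
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)
noise = rng.standard_normal(16000)
mixed = mix_at_snr(speech, noise, snr_db=0.0)  # equal speech and noise power
```

Lowering `snr_db` (e.g. to -6 or -12) makes the noise progressively dominate, which is how intelligibility is typically degraded in such experiments.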


Year:  1996        PMID: 8959601     DOI: 10.1044/jshr.3906.1159

Source DB:  PubMed          Journal:  J Speech Hear Res        ISSN: 0022-4685


Related articles: 36 in total

1.  Use of audiovisual information in speech perception by prelingually deaf children with cochlear implants: a first report.

Authors:  L Lachs; D B Pisoni; K I Kirk
Journal:  Ear Hear       Date:  2001-06       Impact factor: 3.570

2.  Specification of cross-modal source information in isolated kinematic displays of speech.

Authors:  Lorin Lachs; David B Pisoni
Journal:  J Acoust Soc Am       Date:  2004-07       Impact factor: 1.840

3.  Cross-modal source information and spoken word recognition.

Authors:  Lorin Lachs; David B Pisoni
Journal:  J Exp Psychol Hum Percept Perform       Date:  2004-04       Impact factor: 3.332

4.  Infants deploy selective attention to the mouth of a talking face when learning speech.

Authors:  David J Lewkowicz; Amy M Hansen-Tift
Journal:  Proc Natl Acad Sci U S A       Date:  2012-01-17       Impact factor: 11.205

Review 5.  The processing of audio-visual speech: empirical and neural bases.

Authors:  Ruth Campbell
Journal:  Philos Trans R Soc Lond B Biol Sci       Date:  2008-03-12       Impact factor: 6.237

6.  Dynamic faces speed up the onset of auditory cortical spiking responses during vocal detection.

Authors:  Chandramouli Chandrasekaran; Luis Lemus; Asif A Ghazanfar
Journal:  Proc Natl Acad Sci U S A       Date:  2013-11-11       Impact factor: 11.205

Review 7.  A model for production, perception, and acquisition of actions in face-to-face communication.

Authors:  Bernd J Kröger; Stefan Kopp; Anja Lowit
Journal:  Cogn Process       Date:  2009-12-10

8.  Talking points: A modulating circle reduces listening effort without improving speech recognition.

Authors:  Julia F Strand; Violet A Brown; Dennis L Barbour
Journal:  Psychon Bull Rev       Date:  2019-02

Review 9.  Early experience and multisensory perceptual narrowing.

Authors:  David J Lewkowicz
Journal:  Dev Psychobiol       Date:  2014-01-16       Impact factor: 3.038

10.  The natural statistics of audiovisual speech.

Authors:  Chandramouli Chandrasekaran; Andrea Trubanova; Sébastien Stillittano; Alice Caplier; Asif A Ghazanfar
Journal:  PLoS Comput Biol       Date:  2009-07-17       Impact factor: 4.475

