
Lipreading and audio-visual speech perception.

Q Summerfield

Abstract

This paper reviews progress in understanding the psychology of lipreading and audio-visual speech perception. It considers four questions. What distinguishes better from poorer lipreaders? What are the effects of introducing a delay between the acoustical and optical speech signals? What have attempts to produce computer animations of talking faces contributed to our understanding of the visual cues that distinguish consonants and vowels? Finally, how should the process of audio-visual integration in speech perception be described; that is, how are the sights and sounds of talking faces represented at their conflux?

Year: 1992        PMID: 1348140        DOI: 10.1098/rstb.1992.0009

Source DB: PubMed        Journal: Philos Trans R Soc Lond B Biol Sci        ISSN: 0962-8436        Impact factor: 6.237


Related articles: 54 in total

1.  The perception of visible speech: estimation of speech rate and detection of time reversals.

Authors:  Paolo Viviani; Francesca Figliozzi; Francesco Lacquaniti
Journal:  Exp Brain Res       Date:  2011-10-11       Impact factor: 1.972

2.  The temporal window of audio-tactile integration in speech perception.

Authors:  Bryan Gick; Yoko Ikegami; Donald Derrick
Journal:  J Acoust Soc Am       Date:  2010-11       Impact factor: 1.840

3.  [Review] Temporal context in speech processing and attentional stream selection: a behavioral and neural perspective.

Authors:  Elana M Zion Golumbic; David Poeppel; Charles E Schroeder
Journal:  Brain Lang       Date:  2012-01-29       Impact factor: 2.381

4.  Test-retest reliability in fMRI of language: group and task effects.

Authors:  E Elinor Chen; Steven L Small
Journal:  Brain Lang       Date:  2006-06-05       Impact factor: 2.381

5.  Automatic audiovisual integration in speech perception.

Authors:  Maurizio Gentilucci; Luigi Cattaneo
Journal:  Exp Brain Res       Date:  2005-10-29       Impact factor: 1.972

6.  Performance enhancement for audio-visual speaker identification using dynamic facial muscle model.

Authors:  Vahid Asadpour; Farzad Towhidkhah; Mohammad Mehdi Homayounpour
Journal:  Med Biol Eng Comput       Date:  2006-09-26       Impact factor: 2.602

7.  Evidence that cochlear-implanted deaf patients are better multisensory integrators.

Authors:  J Rouger; S Lagleyre; B Fraysse; S Deneve; O Deguine; P Barone
Journal:  Proc Natl Acad Sci U S A       Date:  2007-04-02       Impact factor: 11.205

8.  Dynamic faces speed up the onset of auditory cortical spiking responses during vocal detection.

Authors:  Chandramouli Chandrasekaran; Luis Lemus; Asif A Ghazanfar
Journal:  Proc Natl Acad Sci U S A       Date:  2013-11-11       Impact factor: 11.205

9.  The redeployment of attention to the mouth of a talking face during the second year of life.

Authors:  Anne Hillairet de Boisferon; Amy H Tift; Nicholas J Minar; David J Lewkowicz
Journal:  J Exp Child Psychol       Date:  2018-04-05

10.  Shared and modality-specific brain regions that mediate auditory and visual word comprehension.

Authors:  Anne Keitel; Joachim Gross; Christoph Kayser
Journal:  Elife       Date:  2020-08-24       Impact factor: 8.140

