
Simulation of talking faces in the human brain improves auditory speech recognition.

Katharina von Kriegstein, Özgür Dogan, Martina Grüter, Anne-Lise Giraud, Christian A Kell, Thomas Grüter, Andreas Kleinschmidt, Stefan J Kiebel.

Abstract

Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and controls, behavioral improvement in auditory-only speech recognition was based on an area typically involved in face-movement processing. Improvement in speaker recognition was only present in controls and was based on an area involved in face-identity processing. These findings challenge current unisensory models of speech processing, because they show that, in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication. We suggest that this optimization is based on speaker-specific audiovisual internal models, which are used to simulate a talking face.


Year:  2008        PMID: 18436648      PMCID: PMC2365564          DOI: 10.1073/pnas.0710826105

Source DB:  PubMed          Journal:  Proc Natl Acad Sci U S A        ISSN: 0027-8424            Impact factor:   11.205


References:  34 in total

1.  Recognizing moving faces: a psychological and neural synthesis.

Authors:  Alice J. O'Toole; Dana A. Roark; Hervé Abdi
Journal:  Trends Cogn Sci       Date:  2002-06-01       Impact factor: 20.229

2.  Visual speech speeds up the neural processing of auditory speech.

Authors:  Virginie van Wassenhove; Ken W Grant; David Poeppel
Journal:  Proc Natl Acad Sci U S A       Date:  2005-01-12       Impact factor: 11.205

3.  First report of prevalence of non-syndromic hereditary prosopagnosia (HPA).

Authors:  Ingo Kennerknecht; Thomas Grueter; Brigitte Welling; Sebastian Wentzek; Jürgen Horst; Steve Edwards; Martina Grueter
Journal:  Am J Med Genet A       Date:  2006-08-01       Impact factor: 2.802

4.  Exploring the role of characteristic motion when learning new faces.

Authors:  Karen Lander; Rebecca Davies
Journal:  Q J Exp Psychol (Hove)       Date:  2007-04       Impact factor: 2.143

5.  Crossmodal integration in the identification of consonant segments.

Authors:  L D Braida
Journal:  Q J Exp Psychol A       Date:  1991-08

6.  A detailed investigation of facial expression processing in congenital prosopagnosia as compared to acquired prosopagnosia.

Authors:  Kate Humphreys; Galia Avidan; Marlene Behrmann
Journal:  Exp Brain Res       Date:  2007-01       Impact factor: 1.972

7.  An internal model for sensorimotor integration.

Authors:  D M Wolpert; Z Ghahramani; M I Jordan
Journal:  Science       Date:  1995-09-29       Impact factor: 47.728

8.  Parallel visual computation.

Authors:  D H Ballard; G E Hinton; T J Sejnowski
Journal:  Nature       Date:  1983-11-03       Impact factor: 49.962

9.  Hereditary prosopagnosia: the first case series.

Authors:  Martina Grueter; Thomas Grueter; Vaughan Bell; Juergen Horst; Wolfgang Laskowski; Karl Sperling; Peter W Halligan; Hadyn D Ellis; Ingo Kennerknecht
Journal:  Cortex       Date:  2007-08       Impact factor: 4.027

10.  Faces as objects of non-expertise: processing of thatcherised faces in congenital prosopagnosia.

Authors:  Claus-Christian Carbon; Thomas Grüter; Joachim E Weber; Andreas Lueschow
Journal:  Perception       Date:  2007       Impact factor: 1.490

Cited by:  37 in total

1.  Can you McGurk yourself? Self-face and self-voice in audiovisual speech.

Authors:  Christopher Aruffo; David I Shore
Journal:  Psychon Bull Rev       Date:  2012-02

2.  Dynamic changes in superior temporal sulcus connectivity during perception of noisy audiovisual speech.

Authors:  Audrey R Nath; Michael S Beauchamp
Journal:  J Neurosci       Date:  2011-02-02       Impact factor: 6.167

3.  Activation in the angular gyrus and in the pSTS is modulated by face primes during voice recognition.

Authors:  Cordula Hölig; Julia Föcker; Anna Best; Brigitte Röder; Christian Büchel
Journal:  Hum Brain Mapp       Date:  2017-02-20       Impact factor: 5.038

4.  Facial expressions and the evolution of the speech rhythm. (Review)

Authors:  Asif A Ghazanfar; Daniel Y Takahashi
Journal:  J Cogn Neurosci       Date:  2014-01-23       Impact factor: 3.225

5.  Brain systems mediating voice identity processing in blind humans.

Authors:  Cordula Hölig; Julia Föcker; Anna Best; Brigitte Röder; Christian Büchel
Journal:  Hum Brain Mapp       Date:  2014-03-17       Impact factor: 5.038

6.  On the same wavelength: predictable language enhances speaker-listener brain-to-brain synchrony in posterior superior temporal gyrus.

Authors:  Suzanne Dikker; Lauren J Silbert; Uri Hasson; Jason D Zevin
Journal:  J Neurosci       Date:  2014-04-30       Impact factor: 6.167

7.  Fusiform gyrus face selectivity relates to individual differences in facial recognition ability.

Authors:  Nicholas Furl; Lúcia Garrido; Raymond J Dolan; Jon Driver; Bradley Duchaine
Journal:  J Cogn Neurosci       Date:  2010-07-09       Impact factor: 3.225

8.  The natural statistics of audiovisual speech.

Authors:  Chandramouli Chandrasekaran; Andrea Trubanova; Sébastien Stillittano; Alice Caplier; Asif A Ghazanfar
Journal:  PLoS Comput Biol       Date:  2009-07-17       Impact factor: 4.475

9.  Voxel-based morphometry reveals reduced grey matter volume in the temporal cortex of developmental prosopagnosics.

Authors:  Lúcia Garrido; Nicholas Furl; Bogdan Draganski; Nikolaus Weiskopf; John Stevens; Geoffrey Chern-Yee Tan; Jon Driver; Ray J Dolan; Bradley Duchaine
Journal:  Brain       Date:  2009-12       Impact factor: 13.501

10.  Consistency and variability in functional localisers.

Authors:  Keith J Duncan; Chotiga Pattamadilok; Iris Knierim; Joseph T Devlin
Journal:  Neuroimage       Date:  2009-03-14       Impact factor: 6.556
