Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

Philipp Riedel, Patrick Ragert, Stefanie Schelinski, Stefan J Kiebel, Katharina von Kriegstein.

Abstract

It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as the active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Second, they show that the visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models.
Copyright © 2015 Elsevier Ltd. All rights reserved.

Keywords:  Auditory; Lip-reading; Prediction; Speech; pSTS; tDCS

Year:  2015        PMID: 25650106     DOI: 10.1016/j.cortex.2014.11.016

Source DB:  PubMed          Journal:  Cortex        ISSN: 0010-9452            Impact factor:   4.027


Related articles: 9 in total

1.  Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

Authors:  Lin L Zhu; Michael S Beauchamp
Journal:  J Neurosci       Date:  2017-02-08       Impact factor: 6.167

2.  Different neural processes underlie visual speech perception in school-age children and adults: An event-related potentials study.

Authors:  Natalya Kaganovich; Elizabeth Ancel
Journal:  J Exp Child Psychol       Date:  2019-04-20

3.  The phase of cortical oscillations determines the perceptual fate of visual cues in naturalistic audiovisual speech.

Authors:  Raphaël Thézé; Anne-Lise Giraud; Pierre Mégevand
Journal:  Sci Adv       Date:  2020-11-04       Impact factor: 14.136

4.  Contributions of local speech encoding and functional connectivity to audio-visual speech perception.

Authors:  Bruno L Giordano; Robin A A Ince; Joachim Gross; Philippe G Schyns; Stefano Panzeri; Christoph Kayser
Journal:  Elife       Date:  2017-06-07       Impact factor: 8.140

5.  The Efficacy of Short-term Gated Audiovisual Speech Training for Improving Auditory Sentence Identification in Noise in Elderly Hearing Aid Users.

Authors:  Shahram Moradi; Anna Wahlin; Mathias Hällgren; Jerker Rönnberg; Björn Lidestam
Journal:  Front Psychol       Date:  2017-03-13

6.  Perceptual Doping: An Audiovisual Facilitation Effect on Auditory Speech Processing, From Phonetic Feature Extraction to Sentence Identification in Noise.

Authors:  Shahram Moradi; Björn Lidestam; Elaine Hoi Ning Ng; Henrik Danielsson; Jerker Rönnberg
Journal:  Ear Hear       Date:  2019 Mar/Apr       Impact factor: 3.570

7.  A flexible workflow for simulating transcranial electric stimulation in healthy and lesioned brains.

Authors:  Benjamin Kalloch; Pierre-Louis Bazin; Arno Villringer; Bernhard Sehm; Mario Hlawitschka
Journal:  PLoS One       Date:  2020-05-14       Impact factor: 3.240

8.  Transcranial electric stimulation for the investigation of speech perception and comprehension.

Authors:  Benedikt Zoefel; Matthew H Davis
Journal:  Lang Cogn Neurosci       Date:  2016-11-01       Impact factor: 2.331

9.  Visual mechanisms for voice-identity recognition flexibly adjust to auditory noise level.

Authors:  Corrina Maguinness; Katharina von Kriegstein
Journal:  Hum Brain Mapp       Date:  2021-05-27       Impact factor: 5.038
