A neural mechanism for recognizing speech spoken by different speakers.

Jens Kreitewolf, Etienne Gaudrain, Katharina von Kriegstein.

Abstract

Understanding speech from different speakers is a sophisticated process, particularly because the same acoustic parameters convey important information about both the speech message and the person speaking. How the human brain accomplishes speech recognition under such conditions is unknown. One view is that speaker information is discarded at early processing stages and not used for understanding the speech message. An alternative view is that speaker information is exploited to improve speech recognition. Consistent with the latter view, previous research identified functional interactions between the left- and the right-hemispheric superior temporal sulcus/gyrus, which process speech- and speaker-specific vocal tract parameters, respectively. Vocal tract parameters are one of the two major acoustic features that determine both speaker identity and speech message (phonemes). Here, using functional magnetic resonance imaging (fMRI), we show that a similar interaction exists for glottal fold parameters between the left and right Heschl's gyri. Glottal fold parameters are the other main acoustic feature that determines speaker identity and speech message (linguistic prosody). The findings suggest that interactions between left- and right-hemispheric areas are specific to the processing of different acoustic features of speech and speaker, and that they represent a general neural mechanism for understanding speech from different speakers.
Copyright © 2014 Elsevier Inc. All rights reserved.

Keywords:  Glottal fold; Heschl's gyrus; Linguistic prosody; Voice; fMRI

Year:  2014        PMID: 24434677     DOI: 10.1016/j.neuroimage.2014.01.005

Source DB:  PubMed          Journal:  Neuroimage        ISSN: 1053-8119            Impact factor:   6.556


Related articles (12 in total)

1.  Voice-sensitive brain networks encode talker-specific phonetic detail.

Authors:  Emily B Myers; Rachel M Theodore
Journal:  Brain Lang       Date:  2016-11-27       Impact factor: 2.381

2.  Training-induced brain activation and functional connectivity differentiate multi-talker and single-talker speech training.

Authors:  Zhizhou Deng; Bharath Chandrasekaran; Suiping Wang; Patrick C M Wong
Journal:  Neurobiol Learn Mem       Date:  2018-03-10       Impact factor: 2.877

3.  Functionally integrated neural processing of linguistic and talker information: An event-related fMRI and ERP study.

Authors:  Caicai Zhang; Kenneth R Pugh; W Einar Mencl; Peter J Molfese; Stephen J Frost; James S Magnuson; Gang Peng; William S-Y Wang
Journal:  Neuroimage       Date:  2015-09-04       Impact factor: 6.556

4.  Surface-Based Morphometry of Cortical Thickness and Surface Area Associated with Heschl's Gyri Duplications in 430 Healthy Volunteers.

Authors:  Damien Marie; Sophie Maingault; Fabrice Crivello; Bernard Mazoyer; Nathalie Tzourio-Mazoyer
Journal:  Front Hum Neurosci       Date:  2016-03-07       Impact factor: 3.169

5.  Implicit Talker Training Improves Comprehension of Auditory Speech in Noise.

Authors:  Jens Kreitewolf; Samuel R Mathias; Katharina von Kriegstein
Journal:  Front Psychol       Date:  2017-09-14

6.  Speaker-normalized sound representations in the human auditory cortex.

Authors:  Matthias J Sjerps; Neal P Fox; Keith Johnson; Edward F Chang
Journal:  Nat Commun       Date:  2019-06-05       Impact factor: 14.919

7.  The Relation Between Vocal Pitch and Vocal Emotion Recognition Abilities in People with Autism Spectrum Disorder and Typical Development.

Authors:  Stefanie Schelinski; Katharina von Kriegstein
Journal:  J Autism Dev Disord       Date:  2019-01

8.  Modulation of the Primary Auditory Thalamus When Recognizing Speech with Background Noise.

Authors:  Paul Glad Mihai; Nadja Tschentscher; Katharina von Kriegstein
Journal:  J Neurosci       Date:  2021-07-09       Impact factor: 6.167

9.  Temporal voice areas exist in autism spectrum disorder but are dysfunctional for voice identity recognition.

Authors:  Stefanie Schelinski; Kamila Borowiak; Katharina von Kriegstein
Journal:  Soc Cogn Affect Neurosci       Date:  2016-06-30       Impact factor: 3.436

10.  Development of voice perception is dissociated across gender cues in school-age children.

Authors:  Leanne Nagels; Etienne Gaudrain; Deborah Vickers; Petra Hendriks; Deniz Başkent
Journal:  Sci Rep       Date:  2020-03-19       Impact factor: 4.379

