
Neural Tuning to Low-Level Features of Speech throughout the Perisylvian Cortex.

Julia Berezutskaya1,2, Zachary V Freudenburg3, Umut Güçlü2, Marcel A J van Gerven2, Nick F Ramsey3.   

Abstract

Despite a large body of research, we continue to lack a detailed account of how auditory processing of continuous speech unfolds in the human brain. Previous research showed the propagation of low-level acoustic features of speech from posterior superior temporal gyrus toward anterior superior temporal gyrus in the human brain (Hullett et al., 2016). In this study, we investigate what happens to these neural representations past the superior temporal gyrus and how they engage higher-level language processing areas such as inferior frontal gyrus. We used low-level sound features to model neural responses to speech outside of the primary auditory cortex. Two complementary imaging techniques were used with human participants (both males and females): electrocorticography (ECoG) and fMRI. Both imaging techniques showed tuning of the perisylvian cortex to low-level speech features. With ECoG, we found evidence of propagation of the temporal features of speech sounds along the ventral pathway of language processing in the brain toward inferior frontal gyrus. Increasingly coarse temporal features of speech spreading from posterior superior temporal cortex toward inferior frontal gyrus were associated with linguistic features such as voice onset time, duration of the formant transitions, and phoneme, syllable, and word boundaries. The present findings provide the groundwork for a comprehensive bottom-up account of speech comprehension in the human brain.

SIGNIFICANCE STATEMENT: We know that, during natural speech comprehension, a broad network of perisylvian cortical regions is involved in sound and language processing. Here, we investigated the tuning to low-level sound features within these regions using neural responses to a short feature film. We also looked at whether the tuning organization along these brain regions showed any parallel to the hierarchy of language structures in continuous speech. Our results show that low-level speech features propagate throughout the perisylvian cortex and potentially contribute to the emergence of "coarse" speech representations in inferior frontal gyrus typically associated with high-level language processing. These findings add to the previous work on auditory processing and underline a distinctive role of inferior frontal gyrus in natural speech comprehension.
Copyright © 2017 the authors 0270-6474/17/377906-15$15.00/0.

Keywords:  inferior frontal gyrus; language; modeling; neural encoding; speech comprehension

Year:  2017        PMID: 28716965      PMCID: PMC6596904          DOI: 10.1523/JNEUROSCI.0238-17.2017

Source DB:  PubMed          Journal:  J Neurosci        ISSN: 0270-6474            Impact factor:   6.167


Related articles:  12 in total

1.  Brain-optimized extraction of complex sound features that drive continuous auditory perception.

Authors:  Julia Berezutskaya; Zachary V Freudenburg; Umut Güçlü; Marcel A J van Gerven; Nick F Ramsey
Journal:  PLoS Comput Biol       Date:  2020-07-02       Impact factor: 4.475

2.  Parallel and distributed encoding of speech across human auditory cortex.

Authors:  Liberty S Hamilton; Yulia Oganian; Jeffery Hall; Edward F Chang
Journal:  Cell       Date:  2021-08-18       Impact factor: 66.850

3.  The Encoding of Speech Sounds in the Superior Temporal Gyrus. [Review]

Authors:  Han Gyol Yi; Matthew K Leonard; Edward F Chang
Journal:  Neuron       Date:  2019-06-19       Impact factor: 17.173

4.  Electrophysiology of the Human Superior Temporal Sulcus during Speech Processing.

Authors:  Kirill V Nourski; Mitchell Steinschneider; Ariane E Rhone; Christopher K Kovach; Matthew I Banks; Bryan M Krause; Hiroto Kawasaki; Matthew A Howard
Journal:  Cereb Cortex       Date:  2021-01-05       Impact factor: 4.861

5.  Estimating and interpreting nonlinear receptive field of sensory neural responses with deep neural network models.

Authors:  Menoua Keshishian; Hassan Akbari; Bahar Khalighinejad; Jose L Herrero; Ashesh D Mehta; Nima Mesgarani
Journal:  Elife       Date:  2020-06-26       Impact factor: 8.140

6.  Simple Acoustic Features Can Explain Phoneme-Based Predictions of Cortical Responses to Speech.

Authors:  Christoph Daube; Robin A A Ince; Joachim Gross
Journal:  Curr Biol       Date:  2019-05-23       Impact factor: 10.834

7.  Open multimodal iEEG-fMRI dataset from naturalistic stimulation with a short audiovisual film.

Authors:  Julia Berezutskaya; Mariska J Vansteensel; Erik J Aarnoutse; Zachary V Freudenburg; Giovanni Piantoni; Mariana P Branco; Nick F Ramsey
Journal:  Sci Data       Date:  2022-03-21       Impact factor: 6.444

8.  Cortical network responses map onto data-driven features that capture visual semantics of movie fragments.

Authors:  Julia Berezutskaya; Zachary V Freudenburg; Luca Ambrogioni; Umut Güçlü; Marcel A J van Gerven; Nick F Ramsey
Journal:  Sci Rep       Date:  2020-07-21       Impact factor: 4.379

9.  Spontaneous Neural Activity in the Superior Temporal Gyrus Recapitulates Tuning for Speech Features.

Authors:  Jonathan D Breshears; Liberty S Hamilton; Edward F Chang
Journal:  Front Hum Neurosci       Date:  2018-09-18       Impact factor: 3.169

10.  High-density intracranial recordings reveal a distinct site in anterior dorsal precentral cortex that tracks perceived speech.

Authors:  Julia Berezutskaya; Clarissa Baratin; Zachary V Freudenburg; Nick F Ramsey
Journal:  Hum Brain Mapp       Date:  2020-08-03       Impact factor: 5.038

