
Low-Frequency Cortical Entrainment to Speech Reflects Phoneme-Level Processing.

Authors:  Giovanni M Di Liberto; James A O'Sullivan; Edmund C Lalor

Abstract

The human ability to understand speech is underpinned by a hierarchical auditory system whose successive stages process increasingly complex attributes of the acoustic input. It has been suggested that to produce categorical speech perception, this system must elicit consistent neural responses to speech tokens (e.g., phonemes) despite variations in their acoustics. Here, using electroencephalography (EEG), we provide evidence for this categorical phoneme-level speech processing by showing that the relationship between continuous speech and neural activity is best described when that speech is represented using both low-level spectrotemporal information and categorical labeling of phonetic features. Furthermore, the mapping between phonemes and EEG becomes more discriminative for phonetic features at longer latencies, in line with what one might expect from a hierarchical system. Importantly, these effects are not seen for time-reversed speech. These findings may form the basis for future research on natural language processing in specific cohorts of interest and for broader insights into how brains transform acoustic input into meaning.
Copyright © 2015 Elsevier Ltd. All rights reserved.
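The abstract's central analysis — regressing EEG onto time-lagged representations of continuous speech and comparing predictive accuracy across representations — can be illustrated with a toy forward encoding (temporal response function) model. This is a minimal sketch on synthetic data, not the authors' actual implementation; the stimulus channels (`spec`, `phon`) and all parameter values are placeholders for illustration only.

```python
# Hedged sketch of a forward TRF analysis: time-lagged ridge regression
# from a stimulus representation to a neural signal. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)

def lag_matrix(stim, n_lags):
    """Stack time-lagged copies of stim (T x F) into a (T, F*n_lags) design matrix."""
    T, F = stim.shape
    X = np.zeros((T, F * n_lags))
    for k in range(n_lags):
        X[k:, k * F:(k + 1) * F] = stim[:T - k]
    return X

def fit_trf(stim, eeg, n_lags, lam=1.0):
    """Ridge-regression fit of TRF weights mapping lagged stimulus to EEG."""
    X = lag_matrix(stim, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)

def predict(stim, w, n_lags):
    return lag_matrix(stim, n_lags) @ w

# Synthetic "EEG" driven by both a spectrogram-like channel and a
# phonetic-feature-like channel at different latencies; a combined
# representation should predict the signal better than either alone,
# mirroring the comparison logic described in the abstract.
T = 2000
spec = rng.standard_normal((T, 1))
phon = rng.standard_normal((T, 1))
eeg = (0.6 * np.roll(spec[:, 0], 3)
       + 0.4 * np.roll(phon[:, 0], 5)
       + 0.1 * rng.standard_normal(T))

for name, rep in [("spectrogram only", spec),
                  ("spectrogram + phonetic features", np.hstack([spec, phon]))]:
    w = fit_trf(rep, eeg, n_lags=8)
    r = np.corrcoef(predict(rep, w, 8), eeg)[0, 1]
    print(f"{name}: r = {r:.2f}")
```

In this toy setup the combined representation yields a higher stimulus-to-EEG prediction correlation than the spectrogram alone, which is the form of evidence the paper uses to argue for phoneme-level processing.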


Year:  2015        PMID: 26412129     DOI: 10.1016/j.cub.2015.08.030

Source DB:  PubMed          Journal:  Curr Biol        ISSN: 0960-9822            Impact factor:   10.834


Citing articles:  94 in total

1.  Rapid Transformation from Auditory to Linguistic Representations of Continuous Speech.

Authors:  Christian Brodbeck; L Elliot Hong; Jonathan Z Simon
Journal:  Curr Biol       Date:  2018-11-29       Impact factor: 10.834

2.  Neural responses to natural and model-matched stimuli reveal distinct computations in primary and nonprimary auditory cortex.

Authors:  Sam V Norman-Haignere; Josh H McDermott
Journal:  PLoS Biol       Date:  2018-12-03       Impact factor: 8.029

3.  [Review] Machine Learning Approaches to Analyze Speech-Evoked Neurophysiological Responses.

Authors:  Zilong Xie; Rachel Reetzke; Bharath Chandrasekaran
Journal:  J Speech Lang Hear Res       Date:  2019-03-25       Impact factor: 2.297

4.  Cortical encoding of acoustic and linguistic rhythms in spoken narratives.

Authors:  Cheng Luo; Nai Ding
Journal:  Elife       Date:  2020-12-21       Impact factor: 8.140

5.  MEG Intersubject Phase Locking of Stimulus-Driven Activity during Naturalistic Speech Listening Correlates with Musical Training.

Authors:  Sebastian Puschmann; Mor Regev; Sylvain Baillet; Robert J Zatorre
Journal:  J Neurosci       Date:  2021-02-03       Impact factor: 6.167

6.  Neural tracking of attended versus ignored speech is differentially affected by hearing loss.

Authors:  Eline Borch Petersen; Malte Wöstmann; Jonas Obleser; Thomas Lunner
Journal:  J Neurophysiol       Date:  2016-10-05       Impact factor: 2.714

7.  High-frequency neural activity predicts word parsing in ambiguous speech streams.

Authors:  Anne Kösem; Anahita Basirat; Leila Azizi; Virginie van Wassenhove
Journal:  J Neurophysiol       Date:  2016-09-07       Impact factor: 2.714

8.  Semantic Context Enhances the Early Auditory Encoding of Natural Speech.

Authors:  Michael P Broderick; Andrew J Anderson; Edmund C Lalor
Journal:  J Neurosci       Date:  2019-08-01       Impact factor: 6.167

9.  An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions.

Authors:  Sanne Ten Oever; Andrea E Martin
Journal:  Elife       Date:  2021-08-02       Impact factor: 8.140

10.  Attention selectively modulates cortical entrainment in different regions of the speech spectrum.

Authors:  Lucas S Baltzell; Cort Horton; Yi Shen; Virginia M Richards; Michael D'Zmura; Ramesh Srinivasan
Journal:  Brain Res       Date:  2016-05-16       Impact factor: 3.252

