
Linguistic Structure and Meaning Organize Neural Oscillations into a Content-Specific Hierarchy.

Greta Kaufeld, Hans Rutger Bosker, Sanne Ten Oever, Phillip M Alday, Antje S Meyer, Andrea E Martin.

Abstract

Neural oscillations track linguistic information during speech comprehension (Ding et al., 2016; Keitel et al., 2018), and are known to be modulated by acoustic landmarks and speech intelligibility (Doelling et al., 2014; Zoefel and VanRullen, 2015). However, studies investigating linguistic tracking have either relied on non-naturalistic isochronous stimuli or failed to fully control for prosody. Therefore, it is still unclear whether low-frequency activity tracks linguistic structure during natural speech, where linguistic structure does not follow such a palpable temporal pattern. Here, we measured electroencephalography (EEG) and manipulated the presence of semantic and syntactic information apart from the timescale of their occurrence, while carefully controlling for the acoustic-prosodic and lexical-semantic information in the signal. EEG was recorded while 29 adult native speakers (22 women, 7 men) listened to naturally spoken Dutch sentences, jabberwocky controls with morphemes and sentential prosody, word lists with lexical content but no phrase structure, and backward acoustically matched controls. Mutual information (MI) analysis revealed sensitivity to linguistic content: MI was highest for sentences at the phrasal (0.8-1.1 Hz) and lexical (1.9-2.8 Hz) timescales, suggesting that the delta-band is modulated by lexically driven combinatorial processing beyond prosody, and that linguistic content (i.e., structure and meaning) organizes neural oscillations beyond the timescale and rhythmicity of the stimulus. 
This pattern is consistent with neurophysiologically inspired models of language comprehension (Martin, 2016, 2020; Martin and Doumas, 2017) in which oscillations encode endogenously generated linguistic content over and above exogenous, stimulus-driven timing and rhythm information.

Significance Statement: Biological systems like the brain encode their environment not only by reacting in a series of stimulus-driven responses, but by combining stimulus-driven information with endogenous, internally generated, inferential knowledge and meaning. Understanding language from speech is the human benchmark for this. Much research focuses on the purely stimulus-driven response, but here we focus on the goal of language behavior: conveying structure and meaning. To that end, we use naturalistic stimuli that contrast acoustic-prosodic and lexical-semantic information to show that, during spoken language comprehension, oscillatory modulations reflect computations related to inferring structure and meaning from the acoustic signal. Our experiment provides the first evidence to date that compositional structure and meaning organize the oscillatory response, above and beyond prosodic and lexical controls.
Copyright © 2020 the authors.
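The core analysis contrasted mutual information (MI) between the speech signal and the EEG within the phrasal (0.8-1.1 Hz) and lexical (1.9-2.8 Hz) bands named in the abstract. A minimal sketch of that idea, on synthetic data: a simple histogram-based estimator stands in for the study's MI measure, the 1 Hz shared component is an invented stand-in for phrase-timescale structure, and all signal parameters are assumptions for illustration only, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=2):
    """Zero-phase band-pass filter (second-order sections for numerical stability)."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def mutual_info(x, y, bins=16):
    """Crude histogram-based mutual information estimate, in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])))

fs = 250                          # Hz; assumed sampling rate
t = np.arange(0, 60, 1 / fs)      # 60 s of synthetic data
rng = np.random.default_rng(0)

# A shared 1 Hz component stands in for phrase-timescale structure that
# both the speech envelope and the EEG are assumed to carry.
shared = np.sin(2 * np.pi * 1.0 * t)
envelope = shared + 0.5 * rng.standard_normal(t.size)
eeg = shared + 1.0 * rng.standard_normal(t.size)

# Band edges taken from the abstract: phrasal 0.8-1.1 Hz, lexical 1.9-2.8 Hz.
mi = {}
for name, lo, hi in [("phrasal", 0.8, 1.1), ("lexical", 1.9, 2.8)]:
    phase_env = np.angle(hilbert(bandpass(envelope, lo, hi, fs)))
    phase_eeg = np.angle(hilbert(bandpass(eeg, lo, hi, fs)))
    mi[name] = mutual_info(phase_env, phase_eeg)
    print(f"{name}: MI = {mi[name]:.3f} bits")
```

Because the shared component sits in the phrasal band, MI comes out high there and near chance in the lexical band; in the study itself, the comparison of interest was between stimulus conditions within each band.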

Keywords:  combinatorial processing; lexical semantics; mutual information; neural oscillations; prosody; sentence comprehension

Year:  2020        PMID: 33097640      PMCID: PMC7724143          DOI: 10.1523/JNEUROSCI.0302-20.2020

Source DB:  PubMed          Journal:  J Neurosci        ISSN: 0270-6474            Impact factor:   6.167


References (10 of 47 shown):

1.  Wuggy: a multilingual pseudoword generator.

Authors:  Emmanuel Keuleers; Marc Brysbaert
Journal:  Behav Res Methods       Date:  2010-08

2.  A hierarchy of temporal receptive windows in human cortex.

Authors:  Uri Hasson; Eunice Yang; Ignacio Vallines; David J Heeger; Nava Rubin
Journal:  J Neurosci       Date:  2008-03-05       Impact factor: 6.167

3.  The spectrotemporal filter mechanism of auditory selective attention.

Authors:  Peter Lakatos; Gabriella Musacchia; Monica N O'Connell; Arnaud Y Falchier; Daniel C Javitt; Charles E Schroeder
Journal:  Neuron       Date:  2013-02-20       Impact factor: 17.173

4.  Cue integration with categories: Weighting acoustic cues in speech using unsupervised learning and distributional statistics.

Authors:  Joseph C Toscano; Bob McMurray
Journal:  Cogn Sci       Date:  2010-04

5.  Mechanisms underlying selective neuronal tracking of attended speech at a "cocktail party".

Authors:  Elana M Zion Golumbic; Nai Ding; Stephan Bickel; Peter Lakatos; Catherine A Schevon; Guy M McKhann; Robert R Goodman; Ronald Emerson; Ashesh D Mehta; Jonathan Z Simon; David Poeppel; Charles E Schroeder
Journal:  Neuron       Date:  2013-03-06       Impact factor: 17.173

6.  [Review] Normalization as a canonical neural computation.

Authors:  Matteo Carandini; David J Heeger
Journal:  Nat Rev Neurosci       Date:  2011-11-23       Impact factor: 34.870

7.  FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data.

Authors:  Robert Oostenveld; Pascal Fries; Eric Maris; Jan-Mathijs Schoffelen
Journal:  Comput Intell Neurosci       Date:  2010-12-23

8.  [Review] The Role of High-Level Processes for Oscillatory Phase Entrainment to Speech Sound.

Authors:  Benedikt Zoefel; Rufin VanRullen
Journal:  Front Hum Neurosci       Date:  2015-12-02       Impact factor: 3.169

9.  Auditory cortical delta-entrainment interacts with oscillatory power in multiple fronto-parietal networks.

Authors:  Anne Keitel; Robin A A Ince; Joachim Gross; Christoph Kayser
Journal:  Neuroimage       Date:  2016-11-27       Impact factor: 6.556

10.  Irregular Speech Rate Dissociates Auditory Cortical Entrainment, Evoked Responses, and Frontal Alpha.

Authors:  Stephanie J Kayser; Robin A A Ince; Joachim Gross; Christoph Kayser
Journal:  J Neurosci       Date:  2015-11-04       Impact factor: 6.167

Cited by (10):

1.  Cortical Processing of Arithmetic and Simple Sentences in an Auditory Attention Task.

Authors:  Joshua P Kulasingham; Neha H Joshi; Mohsen Rezaeizadeh; Jonathan Z Simon
Journal:  J Neurosci       Date:  2021-08-16       Impact factor: 6.167

2.  Neural dynamics differentially encode phrases and sentences during spoken language comprehension.

Authors:  Fan Bai; Antje S Meyer; Andrea E Martin
Journal:  PLoS Biol       Date:  2022-07-14       Impact factor: 9.593

3.  Neural tracking of phrases in spoken language comprehension is automatic and task-dependent.

Authors:  Sanne Ten Oever; Sara Carta; Greta Kaufeld; Andrea E Martin
Journal:  Elife       Date:  2022-07-14       Impact factor: 8.713

4.  An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions.

Authors:  Sanne Ten Oever; Andrea E Martin
Journal:  Elife       Date:  2021-08-02       Impact factor: 8.140

5.  Implementation of an Online Auditory Attention Detection Model with Electroencephalography in a Dichotomous Listening Experiment.

Authors:  Seung-Cheol Baek; Jae Ho Chung; Yoonseob Lim
Journal:  Sensors (Basel)       Date:  2021-01-13       Impact factor: 3.576

6.  Modulation Spectra Capture EEG Responses to Speech Signals and Drive Distinct Temporal Response Functions.

Authors:  Xiangbin Teng; Qinglin Meng; David Poeppel
Journal:  eNeuro       Date:  2021-01-14

7.  Left posterior temporal cortex is sensitive to syntax within conceptually matched Arabic expressions.

Authors:  Suhail Matar; Julien Dirani; Alec Marantz; Liina Pylkkänen
Journal:  Sci Rep       Date:  2021-03-30       Impact factor: 4.379

8.  Inferring the nature of linguistic computations in the brain.

Authors:  Sanne Ten Oever; Karthikeya Kaushik; Andrea E Martin
Journal:  PLoS Comput Biol       Date:  2022-07-28       Impact factor: 4.779

9.  Overt and implicit prosody contribute to neurophysiological responses previously attributed to grammatical processing.

Authors:  Anastasia Glushko; David Poeppel; Karsten Steinhauer
Journal:  Sci Rep       Date:  2022-08-30       Impact factor: 4.996

10.  Using fuzzy string matching for automated assessment of listener transcripts in speech intelligibility studies.

Authors:  Hans Rutger Bosker
Journal:  Behav Res Methods       Date:  2021-03-10
