
Envelope reconstruction of speech and music highlights stronger tracking of speech at low frequencies.

Nathaniel J Zuk; Jeremy W Murphy; Richard B Reilly; Edmund C Lalor

Abstract

The human brain tracks amplitude fluctuations of both speech and music, reflecting acoustic processing as well as the encoding of higher-order features and the listener's cognitive state. Comparing neural tracking of speech and music envelopes can elucidate stimulus-general mechanisms, but direct comparisons are confounded by differences in their envelope spectra. Here, we used a novel method of frequency-constrained reconstruction of stimulus envelopes from EEG recorded during passive listening. We expected music reconstruction to match speech within a narrow range of frequencies, but instead found that speech was reconstructed better than music at all frequencies we examined. Additionally, models trained on all stimulus types performed as well as or better than stimulus-specific models at higher modulation frequencies, suggesting a common neural mechanism for tracking speech and music. However, speech envelope tracking at low frequencies (below 1 Hz) was associated with increased weighting over parietal channels, which was not present for the other stimuli. Our results highlight the importance of low-frequency speech tracking and suggest that it originates from speech-specific processing in the brain.
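The abstract describes reconstructing band-limited stimulus envelopes from EEG. The paper's exact pipeline is not given here, so the following is only a minimal sketch of the general approach used in this literature: extract the amplitude envelope of the audio (via the Hilbert transform), restrict it to a modulation band, and fit a backward (decoding) model that maps time-lagged EEG channels to the envelope with ridge regression, scoring reconstruction by correlation. All function names, lag choices, and the regularization value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def amplitude_envelope(audio, fs_audio, fs_eeg=128):
    """Broadband amplitude envelope via the Hilbert transform,
    naively decimated to roughly the EEG sampling rate (sketch only;
    a proper pipeline would low-pass filter before downsampling)."""
    env = np.abs(hilbert(audio))
    step = int(fs_audio // fs_eeg)
    return env[::step]

def bandpass(x, lo_hz, hi_hz, fs, order=2):
    """Zero-phase band-pass filter constraining the envelope to one
    modulation-frequency band (e.g. below 1 Hz)."""
    b, a = butter(order, [lo_hz / (fs / 2), hi_hz / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def reconstruct_envelope(eeg, env, lags, lam=1.0):
    """Backward model: ridge regression from time-lagged EEG to the envelope.
    eeg: (n_samples, n_channels); env: (n_samples,); lags: sample offsets.
    Returns decoder weights, the reconstructed envelope, and Pearson's r."""
    n, c = eeg.shape
    X = np.zeros((n, c * len(lags)))
    for i, lag in enumerate(lags):
        # Shift each channel by the lag so the decoder can integrate
        # over a window of EEG history.
        X[:, i * c:(i + 1) * c] = np.roll(eeg, lag, axis=0)
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ env)
    pred = X @ w
    r = np.corrcoef(pred, env)[0, 1]
    return w, pred, r
```

In this framing, comparing speech and music within the same modulation band amounts to band-passing both envelopes with the same filter before fitting and scoring the decoders, which removes the confound of their differing envelope spectra.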


Year:  2021        PMID: 34534211      PMCID: PMC8480853          DOI: 10.1371/journal.pcbi.1009358

Source DB:  PubMed          Journal:  PLoS Comput Biol        ISSN: 1553-734X            Impact factor:   4.475


References:  74 in total

1.  Low-Frequency Cortical Entrainment to Speech Reflects Phoneme-Level Processing.

Authors:  Giovanni M Di Liberto; James A O'Sullivan; Edmund C Lalor
Journal:  Curr Biol       Date:  2015-09-24       Impact factor: 10.834

2.  Music as a scaffold for listening to speech: Better neural phase-locking to song than speech.

Authors:  Christina M Vanden Bosch der Nederlanden; Marc F Joanisse; Jessica A Grahn
Journal:  Neuroimage       Date:  2020-03-23       Impact factor: 6.556

3.  A corticostriatal neural system enhances auditory perception through temporal context processing.

Authors:  Eveline Geiser; Michael Notter; John D E Gabrieli
Journal:  J Neurosci       Date:  2012-05-02       Impact factor: 6.167

4.  Auditory steady-state responses as neural correlates of loudness growth.

Authors:  Maaike Van Eeckhoutte; Jan Wouters; Tom Francart
Journal:  Hear Res       Date:  2016-09-29       Impact factor: 3.208

5.  Mechanisms underlying selective neuronal tracking of attended speech at a "cocktail party".

Authors:  Elana M Zion Golumbic; Nai Ding; Stephan Bickel; Peter Lakatos; Catherine A Schevon; Guy M McKhann; Robert R Goodman; Ronald Emerson; Ashesh D Mehta; Jonathan Z Simon; David Poeppel; Charles E Schroeder
Journal:  Neuron       Date:  2013-03-06       Impact factor: 17.173

6.  Human auditory steady state responses: effects of intensity and frequency.

Authors:  R Rodriguez; T Picton; D Linden; G Hamel; G Laframboise
Journal:  Ear Hear       Date:  1986-10       Impact factor: 3.570

7.  EEG-based classification of natural sounds reveals specialized responses to speech and music.

Authors:  Nathaniel J Zuk; Emily S Teoh; Edmund C Lalor
Journal:  Neuroimage       Date:  2020-01-18       Impact factor: 6.556

8.  Cortical entrainment to continuous speech: functional roles and interpretations. [Review]

Authors:  Nai Ding; Jonathan Z Simon
Journal:  Front Hum Neurosci       Date:  2014-05-28       Impact factor: 3.169

9.  Auditory Brainstem Responses to Continuous Natural Speech in Human Listeners.

Authors:  Ross K Maddox; Adrian K C Lee
Journal:  eNeuro       Date:  2018-02-09

10.  An oscillator model better predicts cortical entrainment to music.

Authors:  Keith B Doelling; M Florencia Assaneo; Dana Bevilacqua; Bijan Pesaran; David Poeppel
Journal:  Proc Natl Acad Sci U S A       Date:  2019-04-24       Impact factor: 11.205

Cited by:  3 in total

1.  Neural synchronization is strongest to the spectral flux of slow music and depends on familiarity and beat salience.

Authors:  Kristin Weineck; Olivia Xin Wen; Molly J Henry
Journal:  Elife       Date:  2022-09-12       Impact factor: 8.713

2.  Editorial: Neural Tracking: Closing the Gap Between Neurophysiology and Translational Medicine.

Authors:  Giovanni M Di Liberto; Jens Hjortkjær; Nima Mesgarani
Journal:  Front Neurosci       Date:  2022-03-16       Impact factor: 5.152

3.  MEG Activity in Visual and Auditory Cortices Represents Acoustic Speech-Related Information during Silent Lip Reading.

Authors:  Felix Bröhl; Anne Keitel; Christoph Kayser
Journal:  eNeuro       Date:  2022-06-27
