
Brain-optimized extraction of complex sound features that drive continuous auditory perception.

Julia Berezutskaya, Zachary V Freudenburg, Umut Güçlü, Marcel A J van Gerven, Nick F Ramsey.

Abstract

Understanding how the human brain processes auditory input remains a challenge. Traditionally, a distinction between lower- and higher-level sound features is made, but their definition depends on a specific theoretical framework and might not match the neural representation of sound. Here, we postulate that constructing a data-driven neural model of auditory perception, with a minimum of theoretical assumptions about the relevant sound features, could provide an alternative approach and possibly a better match to the neural responses. We collected electrocorticography recordings from six patients who watched a long-duration feature film. The raw movie soundtrack was used to train an artificial neural network model to predict the associated neural responses. The model achieved high prediction accuracy and generalized well to a second dataset, in which new participants watched a different film. The extracted bottom-up features captured acoustic properties that were specific to the type of sound and were associated with various response latency profiles and distinct cortical distributions. Specifically, several features encoded speech-related acoustic properties, with some features exhibiting shorter latency profiles (associated with responses in posterior perisylvian cortex) and others exhibiting longer latency profiles (associated with responses in anterior perisylvian cortex). Our results support and extend the current view on speech perception by demonstrating the presence of temporal hierarchies in the perisylvian cortex and the involvement of cortical sites outside this region during audiovisual speech perception.
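The modeling approach summarized above (predicting neural responses directly from sound, then reading latency profiles off the fitted model) can be illustrated with a deliberately simplified sketch. This is not the authors' network: it substitutes ridge regression on time-lagged features for the artificial neural network, and uses synthetic stand-in data; all names and parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: audio-derived features and neural responses
# at several electrodes, sampled on a common time axis.
T, n_feat, n_elec = 500, 8, 4
X = rng.standard_normal((T, n_feat))                      # features over time
true_W = rng.standard_normal((n_feat, n_elec))
Y = X @ true_W + 0.1 * rng.standard_normal((T, n_elec))   # noisy responses

def make_lagged(X, lags):
    """Stack time-lagged copies of X so the model can capture
    response latencies (columns = features x lags)."""
    return np.concatenate([np.roll(X, lag, axis=0) for lag in lags], axis=1)

lags = [0, 1, 2]                  # hypothetical latency range, in samples
Xl = make_lagged(X, lags)

# Ridge regression fit on the first half, evaluated on the second half.
split = T // 2
A, B = Xl[:split], Y[:split]
W = np.linalg.solve(A.T @ A + 1.0 * np.eye(A.shape[1]), A.T @ B)
pred = Xl[split:] @ W

# Prediction accuracy as per-electrode correlation between predicted
# and observed held-out responses, a common encoding-model metric.
r = [np.corrcoef(pred[:, e], Y[split:, e])[0, 1] for e in range(n_elec)]
print(np.round(r, 2))
```

In this toy setting the held-out correlations are high because the synthetic responses are linear in the lag-0 features; in the actual study, prediction accuracy and the relative weight on different lags are what distinguish shorter- from longer-latency cortical sites.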

Year: 2020    PMID: 32614826    PMCID: PMC7363106    DOI: 10.1371/journal.pcbi.1007992

Source DB: PubMed    Journal: PLoS Comput Biol    ISSN: 1553-734X    Impact factor: 4.475


References: 43 in total

1.  Discovering Event Structure in Continuous Narrative Perception and Memory.

Authors:  Christopher Baldassano; Janice Chen; Asieh Zadbood; Jonathan W Pillow; Uri Hasson; Kenneth A Norman
Journal:  Neuron       Date:  2017-08-02       Impact factor: 17.173

2.  Multiresolution spectrotemporal analysis of complex sounds.

Authors:  Taishih Chi; Powen Ru; Shihab A Shamma
Journal:  J Acoust Soc Am       Date:  2005-08       Impact factor: 1.840

3.  Neural Tuning to Low-Level Features of Speech throughout the Perisylvian Cortex.

Authors:  Julia Berezutskaya; Zachary V Freudenburg; Umut Güçlü; Marcel A J van Gerven; Nick F Ramsey
Journal:  J Neurosci       Date:  2017-07-17       Impact factor: 6.167

4.  An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest.

Authors:  Rahul S Desikan; Florent Ségonne; Bruce Fischl; Brian T Quinn; Bradford C Dickerson; Deborah Blacker; Randy L Buckner; Anders M Dale; R Paul Maguire; Bradley T Hyman; Marilyn S Albert; Ronald J Killiany
Journal:  Neuroimage       Date:  2006-03-10       Impact factor: 6.556

5. (Review) Evidence for a distributed hierarchy of action representation in the brain.

Authors:  Scott T Grafton; Antonia F de C Hamilton
Journal:  Hum Mov Sci       Date:  2007-08-13       Impact factor: 2.161

6. (Review) Cortical processing of complex sounds.

Authors:  J P Rauschecker
Journal:  Curr Opin Neurobiol       Date:  1998-08       Impact factor: 6.627

7.  Intermodal attention modulates visual processing in dorsal and ventral streams.

Authors:  A D Cate; T J Herron; X Kang; E W Yund; D L Woods
Journal:  Neuroimage       Date:  2012-08-16       Impact factor: 6.556

8.  Hearing silences: human auditory processing relies on preactivation of sound-specific brain activity patterns.

Authors:  Iria SanMiguel; Andreas Widmann; Alexandra Bendixen; Nelson Trujillo-Barreto; Erich Schröger
Journal:  J Neurosci       Date:  2013-05-15       Impact factor: 6.167

9.  Human Superior Temporal Gyrus Organization of Spectrotemporal Modulation Tuning Derived from Speech Stimuli.

Authors:  Patrick W Hullett; Liberty S Hamilton; Nima Mesgarani; Christoph E Schreiner; Edward F Chang
Journal:  J Neurosci       Date:  2016-02-10       Impact factor: 6.167

10.  Recurrence is required to capture the representational dynamics of the human visual system.

Authors:  Tim C Kietzmann; Courtney J Spoerer; Lynn K A Sörensen; Radoslaw M Cichy; Olaf Hauk; Nikolaus Kriegeskorte
Journal:  Proc Natl Acad Sci U S A       Date:  2019-10-07       Impact factor: 11.205

Cited by: 3 in total

1.  Multifractal test for nonlinearity of interactions across scales in time series.

Authors:  Damian G Kelty-Stephen; Elizabeth Lane; Lauren Bloomfield; Madhur Mangalam
Journal:  Behav Res Methods       Date:  2022-07-19

2.  Perceiving and remembering speech depend on multifractal nonlinearity in movements producing and exploring speech.

Authors:  Lauren Bloomfield; Elizabeth Lane; Madhur Mangalam; Damian G Kelty-Stephen
Journal:  J R Soc Interface       Date:  2021-08-04       Impact factor: 4.293

3.  Open multimodal iEEG-fMRI dataset from naturalistic stimulation with a short audiovisual film.

Authors:  Julia Berezutskaya; Mariska J Vansteensel; Erik J Aarnoutse; Zachary V Freudenburg; Giovanni Piantoni; Mariana P Branco; Nick F Ramsey
Journal:  Sci Data       Date:  2022-03-21       Impact factor: 6.444


北京卡尤迪生物科技股份有限公司 (Beijing Coyote Bioscience Co., Ltd.) © 2022-2023.