
Cortical tracking of multiple streams outside the focus of attention in naturalistic auditory scenes.

Lars Hausfeld; Lars Riecke; Giancarlo Valente; Elia Formisano

Abstract

In everyday life, we process mixtures of a variety of sounds. This processing involves the segregation of auditory input and the attentive selection of the stream that is most relevant to current goals. For natural scenes with multiple irrelevant sounds, however, it is unclear how the human auditory system represents all the unattended sounds. In particular, it remains elusive whether the sensory input to the human auditory cortex of unattended sounds biases the cortical integration/segregation of these sounds in a similar way as for attended sounds. In this study, we tested this by asking participants to selectively listen to one of two speakers or music in an ongoing 1-min sound mixture while their cortical neural activity was measured with EEG. Using a stimulus reconstruction approach, we find better reconstruction of mixed unattended sounds compared to individual unattended sounds at two early cortical stages (70 ms and 150 ms) of the auditory processing hierarchy. Crucially, at the earlier processing stage (70 ms), this cortical bias to represent unattended sounds as integrated rather than segregated increases with increasing similarity of the unattended sounds. Our results reveal an important role of acoustical properties for the cortical segregation of unattended auditory streams in natural listening situations. They further corroborate the notion that selective attention contributes functionally to cortical stream segregation. These findings highlight that a common, acoustics-based grouping principle governs the cortical representation of auditory streams not only inside but also outside the listener's focus of attention.
Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
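The "stimulus reconstruction approach" mentioned in the abstract is commonly implemented as a linear backward model: a ridge-regularized decoder maps time-lagged multichannel EEG onto the sound envelope, and reconstruction accuracy is the correlation between the decoded and true envelopes. The following is a minimal illustrative sketch of that general idea, not the authors' actual pipeline; all function names, lag ranges, and parameters are assumptions for demonstration.

```python
# Illustrative sketch of envelope reconstruction from EEG with a linear
# backward (decoding) model. Hypothetical example, not the study's code.
import numpy as np

def lagged_design(eeg, lags):
    """Stack time-lagged copies of each EEG channel: (T, C) -> (T, C*len(lags))."""
    T, C = eeg.shape
    X = np.zeros((T, C * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(eeg, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0.0      # zero out wrapped-around samples
        elif lag < 0:
            shifted[lag:] = 0.0
        X[:, i * C:(i + 1) * C] = shifted
    return X

def fit_decoder(eeg, envelope, lags, alpha=1.0):
    """Ridge regression: w = (X'X + alpha*I)^-1 X'y."""
    X = lagged_design(eeg, lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ envelope)

def reconstruct(eeg, w, lags):
    return lagged_design(eeg, lags) @ w

# Toy demo: each EEG channel carries the envelope delayed by 5 samples
# plus noise, so the decoder must look "forward" in the EEG (negative lags).
rng = np.random.default_rng(0)
T, C = 2000, 8
envelope = rng.standard_normal(T)
eeg = np.stack([np.roll(envelope, 5) + 0.5 * rng.standard_normal(T)
                for _ in range(C)], axis=1)
lags = list(range(-10, 1))
w = fit_decoder(eeg, envelope, lags)
r = np.corrcoef(reconstruct(eeg, w, lags), envelope)[0, 1]
```

Reconstruction accuracy `r` (a Pearson correlation) is the quantity compared across attended versus unattended streams and across integrated (mixed) versus segregated (individual) sound representations.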

Keywords:  Audition; EEG; Selective attention; Sound reconstruction; Unattended processing

Year:  2018        PMID: 30048749     DOI: 10.1016/j.neuroimage.2018.07.052

Source DB:  PubMed          Journal:  Neuroimage        ISSN: 1053-8119            Impact factor:   6.556


  7 in total

1.  Generalizable EEG Encoding Models with Naturalistic Audiovisual Stimuli.

Authors:  Maansi Desai; Jade Holder; Cassandra Villarreal; Nat Clark; Brittany Hoang; Liberty S Hamilton
Journal:  J Neurosci       Date:  2021-09-09       Impact factor: 6.167

2.  Paying attention to speech: The role of working memory capacity and professional experience.

Authors:  Bar Lambez; Galit Agmon; Paz Har-Shai Yahav; Yuri Rassovsky; Elana Zion Golumbic
Journal:  Atten Percept Psychophys       Date:  2020-10       Impact factor: 2.199

3.  Selective auditory attention within naturalistic scenes modulates reactivity to speech sounds.

Authors:  Hanna Renvall; Jaeho Seol; Riku Tuominen; Bettina Sorger; Lars Riecke; Riitta Salmelin
Journal:  Eur J Neurosci       Date:  2021-11-03       Impact factor: 3.698

4.  Listening in complex acoustic scenes.

Authors:  Andrew J King; Kerry MM Walker
Journal:  Curr Opin Physiol       Date:  2020-09-08

5.  Neural Representation Enhanced for Speech and Reduced for Background Noise With a Hearing Aid Noise Reduction Scheme During a Selective Attention Task.

Authors:  Emina Alickovic; Thomas Lunner; Dorothea Wendt; Lorenz Fiedler; Renskje Hietkamp; Elaine Hoi Ning Ng; Carina Graversen
Journal:  Front Neurosci       Date:  2020-09-10       Impact factor: 4.677

6.  Neural speech restoration at the cocktail party: Auditory cortex recovers masked speech of both attended and ignored speakers.

Authors:  Christian Brodbeck; Alex Jiao; L Elliot Hong; Jonathan Z Simon
Journal:  PLoS Biol       Date:  2020-10-22       Impact factor: 8.029

7.  Categorizing human vocal signals depends on an integrated auditory-frontal cortical network.

Authors:  Claudia Roswandowitz; Huw Swanborough; Sascha Frühholz
Journal:  Hum Brain Mapp       Date:  2020-12-08       Impact factor: 5.038

