Noise-robust cortical tracking of attended speech in real-world acoustic scenes.

Søren Asp Fuglsang, Torsten Dau, Jens Hjortkjær.

Abstract

Selectively attending to one speaker in a multi-speaker scenario is thought to synchronize low-frequency cortical activity to the attended speech signal. In recent studies, reconstruction of speech from single-trial electroencephalogram (EEG) data has been used to decode which talker a listener is attending to in a two-talker situation. It is currently unclear how this generalizes to more complex sound environments. Behaviorally, speech perception is robust to the acoustic distortions that listeners typically encounter in everyday life, but it is unknown whether this is mirrored by noise-robust neural tracking of attended speech. Here we used advanced acoustic simulations to recreate real-world acoustic scenes in the laboratory. In virtual acoustic realities with varying amounts of reverberation and numbers of interfering talkers, listeners selectively attended to the speech stream of a particular talker. Across the different listening environments, we found that the attended talker could be accurately decoded from single-trial EEG data irrespective of the distortions in the acoustic input. For highly reverberant environments, speech envelopes reconstructed from neural responses to the distorted stimuli resembled the original clean signal more than the distorted input. With reverberant speech, we observed a late cortical response to the attended speech stream that encoded temporal modulations in the speech signal without its reverberant distortion. Single-trial attention decoding accuracies based on 40-50 s blocks of data from 64 scalp electrodes were equally high (80-90% correct) in all considered listening environments and remained statistically significant with as few as 10 scalp electrodes and short (<30 s) unaveraged EEG segments. In contrast to the robust decoding of the attended talker, we found that decoding of the unattended talker deteriorated with the acoustic distortions.
These results suggest that cortical activity tracks an attended speech signal in a way that is invariant to acoustic distortions encountered in real-life sound environments. Noise-robust attention decoding additionally suggests the potential utility of stimulus-reconstruction techniques in attention-controlled brain-computer interfaces.
Copyright © 2017 Elsevier Inc. All rights reserved.

Keywords:  Acoustic simulations; Auditory attention; Cortical entrainment; Decoding; Delta rhythms; EEG; Speech; Theta rhythms

Year:  2017        PMID: 28412441     DOI: 10.1016/j.neuroimage.2017.04.026

Source DB:  PubMed          Journal:  Neuroimage        ISSN: 1053-8119            Impact factor:   6.556


Citing articles: 23 in total

Review 1.  Machine Learning Approaches to Analyze Speech-Evoked Neurophysiological Responses.

Authors:  Zilong Xie; Rachel Reetzke; Bharath Chandrasekaran
Journal:  J Speech Lang Hear Res       Date:  2019-03-25       Impact factor: 2.297

2.  Pitch, Timbre and Intensity Interdependently Modulate Neural Responses to Salient Sounds.

Authors:  Emine Merve Kaya; Nicolas Huang; Mounya Elhilali
Journal:  Neuroscience       Date:  2020-05-21       Impact factor: 3.590

3.  Generalizable EEG Encoding Models with Naturalistic Audiovisual Stimuli.

Authors:  Maansi Desai; Jade Holder; Cassandra Villarreal; Nat Clark; Brittany Hoang; Liberty S Hamilton
Journal:  J Neurosci       Date:  2021-09-09       Impact factor: 6.167

4.  Cortical adaptation to sound reverberation.

Authors:  Ben D B Willmore; Kerry M M Walker; Nicol S Harper; Aleksandar Z Ivanov; Andrew J King
Journal:  Elife       Date:  2022-05-26       Impact factor: 8.713

5.  Comparing In-ear EOG for Eye-Movement Estimation With Eye-Tracking: Accuracy, Calibration, and Speech Comprehension.

Authors:  Martin A Skoglund; Martin Andersen; Martha M Shiell; Gitte Keidser; Mike Lind Rank; Sergi Rotger-Griful
Journal:  Front Neurosci       Date:  2022-06-30       Impact factor: 5.152

6.  Benefits of triple acoustic beamforming during speech-on-speech masking and sound localization for bilateral cochlear-implant users.

Authors:  David Yun; Todd R Jennings; Gerald Kidd; Matthew J Goupell
Journal:  J Acoust Soc Am       Date:  2021-05       Impact factor: 1.840

7.  Neural tracking of the speech envelope is differentially modulated by attention and language experience.

Authors:  Rachel Reetzke; G Nike Gnanateja; Bharath Chandrasekaran
Journal:  Brain Lang       Date:  2020-12-05       Impact factor: 2.381

8.  Decoding the Attended Speaker From EEG Using Adaptive Evaluation Intervals Captures Fluctuations in Attentional Listening.

Authors:  Manuela Jaeger; Bojana Mirkovic; Martin G Bleichner; Stefan Debener
Journal:  Front Neurosci       Date:  2020-06-16       Impact factor: 4.677

9.  Sensorineural hearing loss degrades behavioral and physiological measures of human spatial selective auditory attention.

Authors:  Lengshi Dai; Virginia Best; Barbara G Shinn-Cunningham
Journal:  Proc Natl Acad Sci U S A       Date:  2018-03-19       Impact factor: 11.205

Review 10.  Recent advances in understanding the auditory cortex.

Authors:  Andrew J King; Sundeep Teki; Ben D B Willmore
Journal:  F1000Res       Date:  2018-09-26
