
Hearing-impaired listeners show increased audiovisual benefit when listening to speech in noise.

Sebastian Puschmann, Mareike Daeglau, Maren Stropahl, Bojana Mirkovic, Stephanie Rosemann, Christiane M Thiel, Stefan Debener

Abstract

Recent studies provide evidence for changes in audiovisual perception as well as for adaptive cross-modal auditory cortex plasticity in older individuals with high-frequency hearing impairments (presbycusis). We here investigated whether these changes facilitate the use of visual information, leading to an increased audiovisual benefit of hearing-impaired individuals when listening to speech in noise. We used a naturalistic design in which older participants with a varying degree of high-frequency hearing loss attended to running auditory or audiovisual speech in noise and detected rare target words. Passages containing only visual speech served as a control condition. Simultaneously acquired scalp electroencephalography (EEG) data were used to study cortical speech tracking. Target word detection accuracy was significantly increased in the audiovisual as compared to the auditory listening condition. The degree of this audiovisual enhancement was positively related to individual high-frequency hearing loss and subjectively reported listening effort in challenging daily life situations, which served as a subjective marker of hearing problems. On the neural level, the early cortical tracking of the speech envelope was enhanced in the audiovisual condition. Similar to the behavioral findings, individual differences in the magnitude of the enhancement were positively associated with listening effort ratings. Our results therefore suggest that hearing-impaired older individuals make increased use of congruent visual information to compensate for the degraded auditory input.
Copyright © 2019 Elsevier Inc. All rights reserved.

Keywords:  Audiovisual speech; Cross-modal plasticity; EEG; Multisensory processing; Presbycusis

Year:  2019        PMID: 30978494     DOI: 10.1016/j.neuroimage.2019.04.017

Source DB:  PubMed          Journal:  Neuroimage        ISSN: 1053-8119            Impact factor:   6.556


Related articles: 8 in total

1.  Vision perceptually restores auditory spectral dynamics in speech.

Authors:  John Plass; David Brang; Satoru Suzuki; Marcia Grabowecky
Journal:  Proc Natl Acad Sci U S A       Date:  2020-07-06       Impact factor: 11.205

2.  MEG Intersubject Phase Locking of Stimulus-Driven Activity during Naturalistic Speech Listening Correlates with Musical Training.

Authors:  Sebastian Puschmann; Mor Regev; Sylvain Baillet; Robert J Zatorre
Journal:  J Neurosci       Date:  2021-02-03       Impact factor: 6.167

3.  Generalizable EEG Encoding Models with Naturalistic Audiovisual Stimuli.

Authors:  Maansi Desai; Jade Holder; Cassandra Villarreal; Nat Clark; Brittany Hoang; Liberty S Hamilton
Journal:  J Neurosci       Date:  2021-09-09       Impact factor: 6.167

4.  [Review] Hearing and speech processing in midlife.

Authors:  Karen S Helfer; Alexandra Jesse
Journal:  Hear Res       Date:  2020-10-17       Impact factor: 3.208

5.  Decoding the Attended Speaker From EEG Using Adaptive Evaluation Intervals Captures Fluctuations in Attentional Listening.

Authors:  Manuela Jaeger; Bojana Mirkovic; Martin G Bleichner; Stefan Debener
Journal:  Front Neurosci       Date:  2020-06-16       Impact factor: 4.677

6.  Treatment of Age-Related Hearing Loss Alters Audiovisual Integration and Resting-State Functional Connectivity: A Randomized Controlled Pilot Trial.

Authors:  Stephanie Rosemann; Anja Gieseler; Maike Tahden; Hans Colonius; Christiane M Thiel
Journal:  eNeuro       Date:  2021-12-08

7.  fNIRS Assessment of Speech Comprehension in Children with Normal Hearing and Children with Hearing Aids in Virtual Acoustic Environments: Pilot Data and Practical Recommendations.

Authors:  Laura Bell; Z Ellen Peng; Florian Pausch; Vanessa Reindl; Christiane Neuschaefer-Rube; Janina Fels; Kerstin Konrad
Journal:  Children (Basel)       Date:  2020-11-07

8.  Speech-Driven Facial Animations Improve Speech-in-Noise Comprehension of Humans.

Authors:  Enrico Varano; Konstantinos Vougioukas; Pingchuan Ma; Stavros Petridis; Maja Pantic; Tobias Reichenbach
Journal:  Front Neurosci       Date:  2022-01-05       Impact factor: 4.677

