
Microscopic prediction of speech recognition for listeners with normal hearing in noise using an auditory model.

Tim Jürgens, Thomas Brand.

Abstract

This study compares the phoneme recognition performance in speech-shaped noise of a microscopic model for speech recognition with that of normal-hearing listeners. "Microscopic" has a twofold meaning for this model. First, speech recognition rates are predicted on a phoneme-by-phoneme basis. Second, microscopic modeling means that the signal waveforms to be recognized are processed by mimicking elementary stages of human auditory processing. The model is based on an approach by Holube and Kollmeier [J. Acoust. Soc. Am. 100, 1703-1716 (1996)] and consists of a psychoacoustically and physiologically motivated preprocessing stage and a simple dynamic-time-warp speech recognizer. The model is evaluated by presenting nonsense speech in a closed-set paradigm. Averaged phoneme recognition rates, specific phoneme recognition rates, and phoneme confusions are analyzed. The influence of different perceptual distance measures and of the model's a priori knowledge is investigated. The results show that human performance can be predicted by this model using an optimal detector, i.e., identical speech waveforms for both training and testing of the recognizer. The best model performance is obtained with distance measures that focus mainly on small perceptual distances and neglect outliers.
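The dynamic-time-warp recognizer at the core of such a model can be sketched as follows. This is a minimal illustration only: the paper's model compares auditory-model internal representations with several perceptual distance measures, whereas the feature vectors and the plain Euclidean frame distance here are simplifying assumptions.

```python
import numpy as np

def dtw_distance(template, test):
    """Classic dynamic-time-warping distance between two feature
    sequences of shape (frames, features). Sketch only: the paper's
    model uses auditory-model internal representations as features
    and investigates different perceptual distance measures."""
    n, m = len(template), len(test)
    # Local frame-to-frame distances (Euclidean as a placeholder).
    d = np.linalg.norm(template[:, None, :] - test[None, :, :], axis=2)
    # Accumulated cost matrix with the standard step pattern.
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = d[i - 1, j - 1] + min(D[i - 1, j],      # insertion
                                            D[i, j - 1],      # deletion
                                            D[i - 1, j - 1])  # match
    return D[n, m]

def recognize(test, templates):
    """Closed-set recognition: return the label of the template with
    the smallest DTW distance to the test utterance."""
    return min(templates, key=lambda label: dtw_distance(templates[label], test))
```

With an "optimal detector" as described in the abstract (identical waveforms for training and testing), the template matching its own test sequence yields a DTW distance of zero and is therefore always selected.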


Year:  2009        PMID: 19894841     DOI: 10.1121/1.3224721

Source DB:  PubMed          Journal:  J Acoust Soc Am        ISSN: 0001-4966            Impact factor:   1.840


  9 in total

1.  [Speech perception with hearing aids in comparison to pure-tone hearing loss].

Authors:  U Hoppe; A Hast; T Hocke
Journal:  HNO       Date:  2014-06       Impact factor: 1.284

2.  Speech recognition error patterns for steady-state noise and interrupted speech.

Authors:  Kimberly G Smith; Daniel Fogerty
Journal:  J Acoust Soc Am       Date:  2017-09       Impact factor: 1.840

3.  Sentence Recognition Prediction for Hearing-impaired Listeners in Stationary and Fluctuation Noise With FADE: Empowering the Attenuation and Distortion Concept by Plomp With a Quantitative Processing Model.

Authors:  Birger Kollmeier; Marc René Schädler; Anna Warzybok; Bernd T Meyer; Thomas Brand
Journal:  Trends Hear       Date:  2016-09-07       Impact factor: 3.293

4.  The effects of electrical field spatial spread and some cognitive factors on speech-in-noise performance of individual cochlear implant users-A computer model study.

Authors:  Tim Jürgens; Volker Hohmann; Andreas Büchner; Waldo Nogueira
Journal:  PLoS One       Date:  2018-04-13       Impact factor: 3.240

5.  Spatial Speech-in-Noise Performance in Bimodal and Single-Sided Deaf Cochlear Implant Users.

Authors:  Ben Williges; Thomas Wesarg; Lorenz Jung; Leontien I Geven; Andreas Radeloff; Tim Jürgens
Journal:  Trends Hear       Date:  2019 Jan-Dec       Impact factor: 3.293

6.  Impact of depression on speech perception in noise.

Authors:  Zilong Xie; Benjamin D Zinszer; Meredith Riggs; Christopher G Beevers; Bharath Chandrasekaran
Journal:  PLoS One       Date:  2019-08-15       Impact factor: 3.240

7.  Auditory Nerve Fiber Discrimination and Representation of Naturally-Spoken Vowels in Noise.

Authors:  Amarins N Heeringa; Christine Köppl
Journal:  eNeuro       Date:  2022-02-14

8.  Modelling speech reception thresholds and their improvements due to spatial noise reduction algorithms in bimodal cochlear implant users.

Authors:  Ayham Zedan; Tim Jürgens; Ben Williges; David Hülsmeier; Birger Kollmeier
Journal:  Hear Res       Date:  2022-04-11       Impact factor: 3.672

9.  Objective Prediction of Hearing Aid Benefit Across Listener Groups Using Machine Learning: Speech Recognition Performance With Binaural Noise-Reduction Algorithms.

Authors:  Marc R Schädler; Anna Warzybok; Birger Kollmeier
Journal:  Trends Hear       Date:  2018 Jan-Dec       Impact factor: 3.293
