
A model for context effects in speech recognition.

A W Bronkhorst, A J Bosman, G F Smoorenburg.

Abstract

A model is presented that quantifies the effect of context on speech recognition. In this model, a speech stimulus is considered as a concatenation of a number of equivalent elements (e.g., phonemes constituting a word). The model employs probabilities that individual elements are recognized and chances that missed elements are guessed using contextual information. Predictions are given of the probability that the entire stimulus, or part of it, is reproduced correctly. The model can be applied to both speech recognition and visual recognition of printed text. It has been verified with data obtained with syllables of the consonant-vowel-consonant (CVC) type presented near the reception threshold in quiet and in noise, with the results of an experiment using orthographic presentation of incomplete CVC syllables and with results of word counts in a CVC lexicon. A remarkable outcome of the analysis is that the cues which occur only in spoken language (e.g., coarticulatory cues) seem to have a much greater influence on recognition performance when the stimuli are presented near the threshold in noise than when they are presented near the absolute threshold. Demonstrations are given of further predictions provided by the model: word recognition as a function of signal-to-noise ratio, closed-set word recognition, recognition of interrupted speech, and sentence recognition.
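The model sketched in the abstract can be illustrated with a deliberately simplified computation. In this sketch, a stimulus is treated as n equivalent elements, each recognized with probability p, and each missed element is guessed from context with a fixed probability c; the names p, c, and n, the independence assumption, and the constant context probability are all illustrative simplifications (the paper's actual model allows guessing probabilities to depend on how many other elements were recognized):

```python
def whole_stimulus_probability(p: float, c: float, n: int) -> float:
    """Probability that an entire n-element stimulus (e.g., a CVC syllable
    with n = 3 phonemes) is reproduced correctly, assuming elements are
    recognized independently with probability p and each missed element
    is guessed from context with a fixed probability c."""
    per_element = p + (1.0 - p) * c  # recognized, or missed but guessed
    return per_element ** n


# Example: with no context (c = 0), three phonemes at p = 0.8 yield
# 0.8 ** 3 = 0.512; adding context guessing at c = 0.5 raises the
# effective per-element probability to 0.9, giving 0.9 ** 3 = 0.729.
print(whole_stimulus_probability(0.8, 0.0, 3))
print(whole_stimulus_probability(0.8, 0.5, 3))
```

This kind of expression makes the qualitative claim of the abstract concrete: context converts a fraction of element-level misses into correct responses, so whole-word scores rise faster than element scores as c grows.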


Year:  1993        PMID: 8423265     DOI: 10.1121/1.406844

Source DB:  PubMed          Journal:  J Acoust Soc Am        ISSN: 0001-4966            Impact factor:   1.840


Related articles: 21 in total

1.  The redundancy of phonemes in sentential context.

Authors:  Christian E Stilp
Journal:  J Acoust Soc Am       Date:  2011-11       Impact factor: 1.840

2.  Psychometric functions for sentence recognition in sinusoidally amplitude-modulated noises.

Authors:  Yi Shen; Nicole K Manzano; Virginia M Richards
Journal:  J Acoust Soc Am       Date:  2015-12       Impact factor: 1.840

3.  Masking release for words in amplitude-modulated noise as a function of modulation rate and task.

Authors:  Emily Buss; Lisa N Whittle; John H Grose; Joseph W Hall
Journal:  J Acoust Soc Am       Date:  2009-07       Impact factor: 1.840

4.  The effect of speech material on the band importance function for Mandarin Chinese.

Authors:  Yufan Du; Yi Shen; Xihong Wu; Jing Chen
Journal:  J Acoust Soc Am       Date:  2019-07       Impact factor: 1.840

5.  Efficiency in glimpsing vowel sequences in fluctuating maskers: Effects of temporal fine structure and temporal regularity.

Authors:  Yi Shen; Dylan V Pearson
Journal:  J Acoust Soc Am       Date:  2019-04       Impact factor: 1.840

6.  Syllable-constituent perception by hearing-aid users: Common factors in quiet and noise.

Authors:  James D Miller; Charles S Watson; Marjorie R Leek; Judy R Dubno; David J Wark; Pamela E Souza; Sandra Gordon-Salant; Jayne B Ahlstrom
Journal:  J Acoust Soc Am       Date:  2017-04       Impact factor: 1.840

7.  Effects of linear and nonlinear speech rate changes on speech intelligibility in stationary and fluctuating maskers.

Authors:  Martin Cooke; Vincent Aubanel
Journal:  J Acoust Soc Am       Date:  2017-06       Impact factor: 1.840

8.  Evaluation of Speech-Perception Training for Hearing Aid Users: A Multisite Study in Progress.

Authors:  James D Miller; Charles S Watson; Judy R Dubno; Marjorie R Leek
Journal:  Semin Hear       Date:  2015-11

9.  The process of spoken word recognition in the face of signal degradation.

Authors:  Ashley Farris-Trimble; Bob McMurray; Nicole Cigrand; J Bruce Tomblin
Journal:  J Exp Psychol Hum Percept Perform       Date:  2013-09-16       Impact factor: 3.332

10.  On the number of auditory filter outputs needed to understand speech: further evidence for auditory channel independence.

Authors:  Frédéric Apoux; Eric W Healy
Journal:  Hear Res       Date:  2009-06-16       Impact factor: 3.208

