
Quantifying the contribution of vision to speech perception in noise.

A MacLeod, Q Summerfield.   

Abstract

The intelligibility of sentences presented in noise improves when the listener can view the talker's face. Our aims were to quantify this benefit, and to relate it to individual differences among subjects in lipreading ability and among sentences in lipreading difficulty. Auditory and audiovisual speech-reception thresholds (SRTs) were measured in 20 listeners with normal hearing. Sixty sentences, selected to range in the difficulty with which they could be lipread (with vision alone) from easy to hard, were presented for identification in white noise. Using the ascending method of limits, the SRT was defined as the lowest signal-to-noise ratio at which all three 'key words' in each sentence could be identified correctly. Measured as the difference in dB between auditory-alone and audiovisual SRTs, 'audiovisual benefit' averaged 11 dB, ranging from 6 to 15 dB among subjects, and from 3 to 22 dB among sentences. As predicted, audiovisual benefit is a measure of lipreading ability. It was highly correlated with visual-alone performance (n = 20, r = 0.86, P < 0.01). Likewise, those sentences which were easiest to lipread gave a higher measure of benefit from vision in audiovisual conditions than did sentences that were hard to lipread (n = 60, r = 0.92, P < 0.01). The results establish the basis of an efficient test of speech-reception disability in which measures are freed from the floor and ceiling effects encountered when percentage correct is used as the dependent variable.


Year:  1987        PMID: 3594015     DOI: 10.3109/03005368709077786

Source DB:  PubMed          Journal:  Br J Audiol        ISSN: 0300-5364


Related articles (93 in total):

1.  EEG gamma-band activity during audiovisual speech comprehension in different noise environments.

Authors:  Yanfei Lin; Baolin Liu; Zhiwen Liu; Xiaorong Gao
Journal:  Cogn Neurodyn       Date:  2015-02-22       Impact factor: 5.082

2.  Phonological Priming in Children with Hearing Loss: Effect of Speech Mode, Fidelity, and Lexical Status.

Authors:  Susan Jerger; Nancy Tye-Murray; Markus F Damian; Hervé Abdi
Journal:  Ear Hear       Date:  2016 Nov/Dec       Impact factor: 3.570

3.  Seeing speech affects acoustic information processing in the human brainstem.

Authors:  Gabriella Musacchia; Mikko Sams; Trent Nicol; Nina Kraus
Journal:  Exp Brain Res       Date:  2005-10-11       Impact factor: 1.972

4.  Perceptual fusion and stimulus coincidence in the cross-modal integration of speech.

Authors:  Lee M Miller; Mark D'Esposito
Journal:  J Neurosci       Date:  2005-06-22       Impact factor: 6.167

5.  Cross-language perception of Cantonese vowels spoken by native and non-native speakers.

Authors:  Connie K So; Virginie Attina
Journal:  J Psycholinguist Res       Date:  2014-10

6.  Brain repair after stroke: a novel neurological model. (Review)

Authors:  Steven L Small; Giovanni Buccino; Ana Solodkin
Journal:  Nat Rev Neurol       Date:  2013-11-12       Impact factor: 42.937

7.  Spoken word recognition by eye.

Authors:  Edward T Auer
Journal:  Scand J Psychol       Date:  2009-10

8.  Shared and modality-specific brain regions that mediate auditory and visual word comprehension.

Authors:  Anne Keitel; Joachim Gross; Christoph Kayser
Journal:  Elife       Date:  2020-08-24       Impact factor: 8.140

9.  Neural development of networks for audiovisual speech comprehension.

Authors:  Anthony Steven Dick; Ana Solodkin; Steven L Small
Journal:  Brain Lang       Date:  2009-09-24       Impact factor: 2.381

10.  A multisensory cortical network for understanding speech in noise.

Authors:  Christopher W Bishop; Lee M Miller
Journal:  J Cogn Neurosci       Date:  2009-09       Impact factor: 3.225

