
Evaluating the articulation index for auditory-visual consonant recognition.

K W Grant, B E Walden.

Abstract

Adequacy of the ANSI standard for calculating the articulation index (AI) [ANSI S3.5-1969 (R1986)] was evaluated by measuring auditory (A), visual (V), and auditory-visual (AV) consonant recognition under a variety of bandpass-filtered speech conditions. Contrary to ANSI predictions, filter conditions having the same auditory AI did not necessarily result in the same auditory-visual AI. Low-frequency bands of speech tended to provide more benefit to AV consonant recognition than high-frequency bands. Analyses of the auditory error patterns produced by the different filter conditions showed a strong negative correlation between the degree of A and V redundancy and the amount of benefit obtained when A and V cues were combined. These data indicate that the ANSI auditory-visual AI procedure is inadequate for predicting AV consonant recognition performance under conditions of severe spectral shaping.


Year:  1996        PMID: 8865647     DOI: 10.1121/1.417950

Source DB:  PubMed          Journal:  J Acoust Soc Am        ISSN: 0001-4966            Impact factor:   1.840


Related articles:  22 in total

1.  The relative phonetic contributions of a cochlear implant and residual acoustic hearing to bimodal speech perception.

Authors:  Benjamin M Sheffield; Fan-Gang Zeng
Journal:  J Acoust Soc Am       Date:  2012-01       Impact factor: 1.840

2.  Auditory-visual speech perception in normal-hearing and cochlear-implant listeners.

Authors:  Sheetal Desai; Ginger Stickney; Fan-Gang Zeng
Journal:  J Acoust Soc Am       Date:  2008-01       Impact factor: 1.840

3.  Factors affecting the benefits of high-frequency amplification.

Authors:  Amy R Horwitz; Jayne B Ahlstrom; Judy R Dubno
Journal:  J Speech Lang Hear Res       Date:  2008-06       Impact factor: 2.297

4.  Spatiotemporal dynamics of audiovisual speech processing.

Authors:  Lynne E Bernstein; Edward T Auer; Michael Wagner; Curtis W Ponton
Journal:  Neuroimage       Date:  2007-08-31       Impact factor: 6.556

5.  Interdependence of linguistic and indexical speech perception skills in school-age children with early cochlear implantation.

Authors:  Ann E Geers; Lisa S Davidson; Rosalie M Uchanski; Johanna G Nicholas
Journal:  Ear Hear       Date:  2013-09       Impact factor: 3.570

6.  Talking points: A modulating circle reduces listening effort without improving speech recognition.

Authors:  Julia F Strand; Violet A Brown; Dennis L Barbour
Journal:  Psychon Bull Rev       Date:  2019-02

7.  [Review] The construct of the multisensory temporal binding window and its dysregulation in developmental disabilities.

Authors:  Mark T Wallace; Ryan A Stevenson
Journal:  Neuropsychologia       Date:  2014-08-13       Impact factor: 3.139

8.  Spontaneous Otoacoustic Emissions Reveal an Efficient Auditory Efferent Network.

Authors:  Viorica Marian; Tuan Q Lam; Sayuri Hayakawa; Sumitrajit Dhar
Journal:  J Speech Lang Hear Res       Date:  2018-11-08       Impact factor: 2.297

9.  Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

Authors:  Wei Ji Ma; Xiang Zhou; Lars A Ross; John J Foxe; Lucas C Parra
Journal:  PLoS One       Date:  2009-03-04       Impact factor: 3.240

10.  Audiovisual non-verbal dynamic faces elicit converging fMRI and ERP responses.

Authors:  Julie Brefczynski-Lewis; Svenja Lowitszch; Michael Parsons; Susan Lemieux; Aina Puce
Journal:  Brain Topogr       Date:  2009-04-23       Impact factor: 3.020

