Auditory-visual speech recognition by hearing-impaired subjects: consonant recognition, sentence recognition, and auditory-visual integration.

K W Grant, B E Walden, P F Seitz.

Abstract

Factors leading to variability in auditory-visual (AV) speech recognition include the subject's ability to extract auditory (A) and visual (V) signal-related cues, the integration of A and V cues, and the use of phonological, syntactic, and semantic context. In this study, measures of A, V, and AV recognition of medial consonants in isolated nonsense syllables and of words in sentences were obtained in a group of 29 hearing-impaired subjects. The test materials were presented in a background of speech-shaped noise at 0-dB signal-to-noise ratio. Most subjects achieved substantial AV benefit for both sets of materials relative to A-alone recognition performance. However, there was considerable variability in AV speech recognition both in terms of the overall recognition score achieved and in the amount of audiovisual gain. To account for this variability, consonant confusions were analyzed in terms of phonetic features to determine the degree of redundancy between A and V sources of information. In addition, a measure of integration ability was derived for each subject using recently developed models of AV integration. The results indicated that (1) AV feature reception was determined primarily by visual place cues and auditory voicing + manner cues, (2) the ability to integrate A and V consonant cues varied significantly across subjects, with better integrators achieving more AV benefit, and (3) significant intra-modality correlations were found between consonant measures and sentence measures, with AV consonant scores accounting for approximately 54% of the variability observed for AV sentence recognition. Integration modeling results suggested that speechreading and AV integration training could be useful for some individuals, potentially providing as much as 26% improvement in AV consonant recognition.
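Two quantities in the abstract are simple to compute from per-subject scores: the AV benefit (AV recognition relative to A-alone) and the "approximately 54% of the variability" figure, which corresponds to a squared Pearson correlation (r²) between AV consonant and AV sentence scores. A minimal sketch, using fabricated scores for illustration (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Fabricated percent-correct scores, one entry per subject (illustration only).
a_alone      = [40, 55, 35, 60, 48, 52, 38, 65]   # auditory-alone consonant scores
av_consonant = [62, 70, 55, 80, 66, 74, 58, 85]   # auditory-visual consonant scores
av_sentence  = [48, 60, 40, 72, 55, 65, 45, 78]   # auditory-visual sentence scores

# AV benefit: per-subject improvement of AV over A-alone recognition.
av_benefit = [av - a for a, av in zip(a_alone, av_consonant)]
print("mean AV benefit (points):", sum(av_benefit) / len(av_benefit))

# Proportion of AV sentence variance explained by AV consonant scores.
r = pearson_r(av_consonant, av_sentence)
print("r^2 =", round(r ** 2, 2))
```

With the study's actual per-subject scores, r² would come out near the reported 0.54; the fabricated lists here are deliberately simple, so the value differs.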

Year: 1998    PMID: 9604361    DOI: 10.1121/1.422788

Source DB: PubMed    Journal: J Acoust Soc Am    ISSN: 0001-4966    Impact factor: 1.840


91 in total

1.  Use of audiovisual information in speech perception by prelingually deaf children with cochlear implants: a first report.

Authors:  L Lachs; D B Pisoni; K I Kirk
Journal:  Ear Hear       Date:  2001-06       Impact factor: 3.570

2.  Bayes factor of model selection validates FLMP.

Authors:  D W Massaro; M M Cohen; C S Campbell; T Rodriguez
Journal:  Psychon Bull Rev       Date:  2001-03

3.  Assessing spoken word recognition in children who are deaf or hard of hearing: a translational approach.

Authors:  Karen Iler Kirk; Lindsay Prusick; Brian French; Chad Gotch; Laurie S Eisenberg; Nancy Young
Journal:  J Am Acad Audiol       Date:  2012-06       Impact factor: 1.664

4.  A Method for Transcribing the Manual Components of Cued Speech.

Authors:  Jean C Krause; Katherine A Pelley-Lopez; Morgan P Tessler
Journal:  Speech Commun       Date:  2011-03-01       Impact factor: 2.017

5.  Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.

Authors:  Adam R Kaiser; Karen Iler Kirk; Lorin Lachs; David B Pisoni
Journal:  J Speech Lang Hear Res       Date:  2003-04       Impact factor: 2.297

6.  Specification of cross-modal source information in isolated kinematic displays of speech.

Authors:  Lorin Lachs; David B Pisoni
Journal:  J Acoust Soc Am       Date:  2004-07       Impact factor: 1.840

7.  Integration of Partial Information Within and Across Modalities: Contributions to Spoken and Written Sentence Recognition.

Authors:  Kimberly G Smith; Daniel Fogerty
Journal:  J Speech Lang Hear Res       Date:  2015-12       Impact factor: 2.297

8.  Perceptual fusion and stimulus coincidence in the cross-modal integration of speech.

Authors:  Lee M Miller; Mark D'Esposito
Journal:  J Neurosci       Date:  2005-06-22       Impact factor: 6.167

9.  Neural processing of asynchronous audiovisual speech perception.

Authors:  Ryan A Stevenson; Nicholas A Altieri; Sunah Kim; David B Pisoni; Thomas W James
Journal:  Neuroimage       Date:  2010-02-15       Impact factor: 6.556

10.  Voiced initial consonant perception deficits in older listeners with hearing loss and good and poor word recognition.

Authors:  Susan L Phillips; Scott J Richter; David McPherson
Journal:  J Speech Lang Hear Res       Date:  2008-07-29       Impact factor: 2.297
