
The role of visual speech information in supporting perceptual learning of degraded speech.

Rachel V Wayne; Ingrid S Johnsrude

Abstract

Following cochlear implantation, hearing-impaired listeners must adapt to speech as heard through their prosthesis. Visual speech information (VSI; the lip and facial movements of speech) is typically available in everyday conversation. Here, we investigate whether learning to understand a popular auditory simulation of speech as transduced by a cochlear implant (noise-vocoded [NV] speech) is enhanced by the provision of VSI. Experiment 1 demonstrates that provision of VSI concurrently with a clear auditory form of an utterance as feedback after each NV utterance during training does not enhance learning over clear auditory feedback alone, suggesting that VSI does not play a special role in retuning of perceptual representations of speech. Experiment 2 demonstrates that provision of VSI concurrently with NV speech (a simulation of typical real-world experience) facilitates perceptual learning of NV speech, but only when an NV-only repetition of each utterance is presented after the composite NV/VSI form during training. Experiment 3 shows that this more efficient learning of NV speech is probably due to the additional listening effort required to comprehend the utterance when clear feedback is never provided and is not specifically due to the provision of VSI. Our results suggest that rehabilitation after cochlear implantation does not necessarily require naturalistic audiovisual input, but may be most effective when (a) training utterances are relatively intelligible (approximately 85% of words reported correctly during effortful listening), and (b) the individual has the opportunity to map what they know of an utterance's linguistic content onto the degraded form.

Year:  2012        PMID: 23294284     DOI: 10.1037/a0031042

Source DB:  PubMed          Journal:  J Exp Psychol Appl        ISSN: 1076-898X


Related articles: 6 in total

1. (Review) Prediction and constraint in audiovisual speech perception.

Authors:  Jonathan E Peelle; Mitchell S Sommers
Journal:  Cortex       Date:  2015-03-20       Impact factor: 4.027

2.  Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.

Authors:  Silvio P Eberhardt; Edward T Auer; Lynne E Bernstein
Journal:  Front Hum Neurosci       Date:  2014-10-31       Impact factor: 3.169

3.  The Efficacy of Short-term Gated Audiovisual Speech Training for Improving Auditory Sentence Identification in Noise in Elderly Hearing Aid Users.

Authors:  Shahram Moradi; Anna Wahlin; Mathias Hällgren; Jerker Rönnberg; Björn Lidestam
Journal:  Front Psychol       Date:  2017-03-13

4.  Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation.

Authors:  Briony Banks; Emma Gowen; Kevin J Munro; Patti Adank
Journal:  Front Hum Neurosci       Date:  2015-08-03       Impact factor: 3.169

5.  Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults.

Authors:  Lynne E Bernstein; Silvio P Eberhardt; Edward T Auer
Journal:  Front Psychol       Date:  2014-08-26

6.  Working Memory Training and Speech in Noise Comprehension in Older Adults.

Authors:  Rachel V Wayne; Cheryl Hamilton; Julia Jones Huyck; Ingrid S Johnsrude
Journal:  Front Aging Neurosci       Date:  2016-03-22       Impact factor: 5.750

