
During Lipreading Training With Sentence Stimuli, Feedback Controls Learning and Generalization to Audiovisual Speech in Noise.

Lynne E Bernstein, Edward T Auer, Silvio P Eberhardt.

Abstract

PURPOSE: This study investigated the effects of external feedback on perceptual learning of visual speech during lipreading training with sentence stimuli. The goal was to improve visual-only (VO) speech recognition and increase accuracy of audiovisual (AV) speech recognition in noise. The rationale was that spoken word recognition depends on the accuracy of sublexical (phonemic/phonetic) speech perception; effective feedback during training must support sublexical perceptual learning.
METHOD: Normal-hearing (NH) adults were assigned to one of three types of feedback: Sentence feedback presented the entire sentence in print after the participant responded to the stimulus. Word feedback presented the correct response words along with perceptually near but incorrect response words. Consonant feedback presented the correct response words and the consonants in perceptually near but incorrect response words. Six training sessions were given. Pre- and posttraining testing included an untrained control group. Test stimuli were disyllabic nonsense words for forced-choice consonant identification, and isolated words and sentences for open-set identification. Words and sentences were presented VO, AV, and audio-only (AO), with the audio in speech-shaped noise.
RESULTS: Lipreading accuracy increased during training. Pre- and posttraining tests of consonant identification showed no improvement beyond test-retest increases obtained by untrained controls. Isolated word recognition with a talker not seen during training showed that the control group improved more than the sentence group. Tests of untrained sentences showed that the consonant group significantly improved in all of the stimulus conditions (VO, AO, and AV). Its mean words correct scores increased by 9.2 percentage points for VO, 3.4 percentage points for AO, and 9.8 percentage points for AV stimuli.
CONCLUSIONS: Consonant feedback during training with sentence stimuli significantly increased perceptual learning. The training generalized to untrained VO, AO, and AV sentence stimuli. Lipreading training has the potential to significantly improve adults' face-to-face communication in noisy settings in which the talker can be seen.

Year: 2021    PMID: 34965362    PMCID: PMC9128727    DOI: 10.1044/2021_AJA-21-00034

Source DB: PubMed    Journal: Am J Audiol    ISSN: 1059-0889    Impact factor: 1.636



