
Visual word recognition in two facial motion conditions: full-face versus lips-plus-mandible.

L K Marassa, C R Lansing.

Abstract

The present study used a new method to develop video sequences that limited the exposure of facial movement. A repeated-measures design was used to investigate the visual recognition of 60 monosyllabic spoken words, presented in an open-set format, under two face exposure conditions (full-face vs. lips-plus-mandible). Twenty-six normal-hearing college students and 4 adults with bilateral sensorineural hearing loss speechread a video laserdisc presentation of a male talker under the two face exposure conditions. Percent-phonemes-correct scores were similar in the lips-plus-mandible and full-face conditions. However, scores improved significantly with repeated testing, independent of the face exposure condition observed. The results suggested that speechreaders (a) can recognize monosyllabic words in video sequences that provide information only about movements of the lips-plus-mandible region and (b) are sensitive to practice effects.


Year:  1995        PMID: 8747830     DOI: 10.1044/jshr.3806.1387

Source DB:  PubMed          Journal:  J Speech Hear Res        ISSN: 0022-4685


Related articles: 4 in total

1.  "Look who's talking!" Gaze Patterns for Implicit and Explicit Audio-Visual Speech Synchrony Detection in Children With High-Functioning Autism.

Authors:  Ruth B Grossman; Erin Steinhart; Teresa Mitchell; William McIlvane
Journal:  Autism Res       Date:  2015-01-24       Impact factor: 5.216

2.  Fixating the eyes of a speaker provides sufficient visual information to modulate early auditory processing.

Authors:  Elina Kaplan; Alexandra Jesse
Journal:  Biol Psychol       Date:  2019-07-16       Impact factor: 3.251

3.  Face Masks Impact Auditory and Audiovisual Consonant Recognition in Children With and Without Hearing Loss.

Authors:  Kaylah Lalonde; Emily Buss; Margaret K Miller; Lori J Leibold
Journal:  Front Psychol       Date:  2022-05-13

4.  Dorsal-movement and ventral-form regions are functionally connected during visual-speech recognition.

Authors:  Kamila Borowiak; Corrina Maguinness; Katharina von Kriegstein
Journal:  Hum Brain Mapp       Date:  2019-11-20       Impact factor: 5.038

