
Contributions of semantic and facial information to perception of nonsibilant fricatives.

Allard Jongman, Yue Wang, Brian H. Kim.

Abstract

Most studies have been unable to identify reliable acoustic cues for the recognition of the English nonsibilant fricatives /f, v, θ, ð/. The present study was designed to test the extent to which the perception of these fricatives by normal-hearing adults is based on other sources of information, namely, linguistic context and visual information. In Experiment 1, target words beginning with /f/, /θ/, /s/, or /ʃ/ were preceded by either a semantically congruous or incongruous precursor sentence. Results showed an effect of linguistic context on the perception of the distinction between /f/ and /θ/ as well as on the acoustically more robust distinction between /s/ and /ʃ/. In Experiment 2, participants identified syllables consisting of the fricatives /f, v, θ, ð/ paired with the vowels /i, a, u/. Three conditions were contrasted: stimuli were presented with (a) both auditory and visual information, (b) auditory information alone, or (c) visual information alone. When voicing errors were ignored in all 3 conditions, results indicated that perception of these fricatives is as good with visual information alone as with auditory and visual information combined, and better than with auditory information alone. These findings suggest that accurate perception of nonsibilant fricatives derives from a combination of acoustic, linguistic, and visual information.


Year:  2003        PMID: 14700361     DOI: 10.1044/1092-4388(2003/106)

Source DB:  PubMed          Journal:  J Speech Lang Hear Res        ISSN: 1092-4388            Impact factor:   2.297


Related articles:  4 in total

1.  Cross-modal Association between Auditory and Visuospatial Information in Mandarin Tone Perception in Noise by Native and Non-native Perceivers.

Authors:  Beverly Hannah; Yue Wang; Allard Jongman; Joan A Sereno; Jiguo Cao; Yunlong Nie
Journal:  Front Psychol       Date:  2017-12-04

2.  Non-native Listeners Benefit Less from Gestures and Visible Speech than Native Listeners During Degraded Speech Comprehension.

Authors:  Linda Drijvers; Asli Özyürek
Journal:  Lang Speech       Date:  2019-02-22       Impact factor: 1.500

3.  Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation.

Authors:  Briony Banks; Emma Gowen; Kevin J Munro; Patti Adank
Journal:  Front Hum Neurosci       Date:  2015-08-03       Impact factor: 3.169

4.  Degree of Language Experience Modulates Visual Attention to Visible Speech and Iconic Gestures During Clear and Degraded Speech Comprehension.

Authors:  Linda Drijvers; Julija Vaitonytė; Asli Özyürek
Journal:  Cogn Sci       Date:  2019-10
