
Complementarity and synergy in bimodal speech: auditory, visual, and audio-visual identification of French oral vowels in noise.

J. Robert-Ribes, J.-L. Schwartz, T. Lallouache, P. Escudier.

Abstract

The efficacy of audio-visual interactions in speech perception comes from two kinds of factors. First, at the information level, there is some "complementarity" of audition and vision: It seems that some speech features, mainly concerned with manner of articulation, are best transmitted by the audio channel, while some other features, mostly describing place of articulation, are best transmitted by the video channel. Second, at the information processing level, there is some "synergy" between audition and vision: The audio-visual global identification scores in a number of different tasks involving acoustic noise are generally greater than both the auditory-alone and the visual-alone scores. Until now, however, these two properties have generally been demonstrated only in rather global terms. In the present work, audio-visual interactions at the feature level are studied for French oral vowels, which contrast three series, namely front unrounded, front rounded, and back rounded vowels. A set of experiments on the auditory, visual, and audio-visual identification of vowels embedded in various amounts of noise demonstrates that complementarity and synergy in bimodal speech appear to hold for a bundle of individual phonetic features describing place contrasts in oral vowels. At the information level (complementarity), in the audio channel the height feature is the most robust, backness the second most robust, and rounding the least robust, while in the video channel rounding is better transmitted than height, and backness is almost invisible. At the information processing (synergy) level, transmitted information scores show that all individual features are better transmitted with the ear and the eye together than with each sensor individually.
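The "transmitted information scores" mentioned above are a standard measure in feature-based confusion analysis: the mutual information between stimulus and response, computed from a confusion matrix and usually normalized by the stimulus entropy. A minimal sketch of that computation, with a purely hypothetical confusion matrix for a binary rounding contrast (the values are illustrative, not from the paper):

```python
import math

def transmitted_information(confusions):
    """Relative transmitted information from a stimulus-by-response
    confusion matrix: mutual information I(S;R) normalized by the
    stimulus entropy H(S). Returns a value in [0, 1]."""
    total = sum(sum(row) for row in confusions)
    joint = [[c / total for c in row] for row in confusions]
    p_s = [sum(row) for row in joint]        # stimulus marginals
    p_r = [sum(col) for col in zip(*joint)]  # response marginals
    mi = sum(p * math.log2(p / (ps * pr))
             for row, ps in zip(joint, p_s)
             for p, pr in zip(row, p_r) if p > 0)
    h_s = -sum(ps * math.log2(ps) for ps in p_s if ps > 0)
    return mi / h_s

# Hypothetical confusion matrices for a rounded/unrounded contrast
# (rows: stimulus category; columns: response category).
perfect = [[50, 0], [0, 50]]   # feature fully transmitted
noisy   = [[40, 10], [15, 35]] # feature partially transmitted
print(transmitted_information(perfect))  # 1.0
print(transmitted_information(noisy))    # well below 1.0
```

A score of 1 means the feature is perfectly recovered from the responses; noise in either channel drives the score toward 0, which is how per-feature robustness can be compared across the auditory, visual, and audio-visual conditions.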

Year:  1998        PMID: 9637049     DOI: 10.1121/1.423069

Source DB:  PubMed          Journal:  J Acoust Soc Am        ISSN: 0001-4966            Impact factor:   1.840


  9 in total

1. (Review) The construct of the multisensory temporal binding window and its dysregulation in developmental disabilities.

Authors:  Mark T Wallace; Ryan A Stevenson
Journal:  Neuropsychologia       Date:  2014-08-13       Impact factor: 3.139

2.  Speakers are able to categorize vowels based on tongue somatosensation.

Authors:  Jean-François Patri; David J Ostry; Julien Diard; Jean-Luc Schwartz; Pamela Trudeau-Fisette; Christophe Savariaux; Pascal Perrier
Journal:  Proc Natl Acad Sci U S A       Date:  2020-03-02       Impact factor: 11.205

3.  The Role of Auditory and Visual Speech in Word Learning at 18 Months and in Adulthood.

Authors:  Mélanie Havy; Afra Foroud; Laurel Fais; Janet F Werker
Journal:  Child Dev       Date:  2017-01-26

4.  Learning Spoken Words via the Ears and Eyes: Evidence from 30-Month-Old Children.

Authors:  Mélanie Havy; Pascal Zesiger
Journal:  Front Psychol       Date:  2017-12-08

5.  Auditory and Somatosensory Interaction in Speech Perception in Children and Adults.

Authors:  Paméla Trudeau-Fisette; Takayuki Ito; Lucie Ménard
Journal:  Front Hum Neurosci       Date:  2019-10-04       Impact factor: 3.169

6.  Visual Influence on Auditory Perception of Vowels by French-Speaking Children and Adults.

Authors:  Paméla Trudeau-Fisette; Laureline Arnaud; Lucie Ménard
Journal:  Front Psychol       Date:  2022-02-25

7.  Intelligibility of speech produced by sighted and blind adults.

Authors:  Lucie Ménard; Pamela Trudeau-Fisette; Mark Kenneth Tiede
Journal:  PLoS One       Date:  2022-09-15       Impact factor: 3.752

8.  Assessing the effect of physical differences in the articulation of consonants and vowels on audiovisual temporal perception.

Authors:  Argiro Vatakis; Petros Maragos; Isidoros Rodomagoulakis; Charles Spence
Journal:  Front Integr Neurosci       Date:  2012-10-01

9.  Compensations to auditory feedback perturbations in congenitally blind and sighted speakers: Acoustic and articulatory data.

Authors:  Pamela Trudeau-Fisette; Mark Tiede; Lucie Ménard
Journal:  PLoS One       Date:  2017-07-05       Impact factor: 3.240