
Neural Mechanisms Underlying Cross-Modal Phonetic Encoding.

Antoine J Shahin1, Kristina C Backer2, Lawrence D Rosenblum3, Jess R Kerlin2.   

Abstract

Audiovisual (AV) integration is essential for speech comprehension, especially in adverse listening situations. Divergent, but not mutually exclusive, theories have been proposed to explain the neural mechanisms underlying AV integration. One theory advocates that this process occurs via interactions between the auditory and visual cortices, as opposed to fusion of AV percepts in a multisensory integrator. Building on this idea, we proposed that AV integration in spoken language reflects visually induced weighting of phonetic representations at the auditory cortex. EEG was recorded while male and female human subjects watched and listened to videos of a speaker uttering the consonant-vowel (CV) syllables /ba/ and /fa/, presented in Auditory-only, AV-congruent, or AV-incongruent contexts. Subjects reported whether they heard /ba/ or /fa/. We hypothesized that vision alters phonetic encoding by dynamically weighting which phonetic representation in the auditory cortex is strengthened or weakened. That is, when subjects are presented with visual /fa/ and acoustic /ba/ and hear /fa/ (illusion-fa), the visual input strengthens the weighting of the phone /f/ representation; when subjects are presented with visual /ba/ and acoustic /fa/ and hear /ba/ (illusion-ba), the visual input weakens the weighting of the phone /f/ representation. Indeed, we found an enlarged N1 auditory evoked potential when subjects perceived illusion-ba, and a reduced N1 when they perceived illusion-fa, mirroring the N1 behavior for /ba/ and /fa/ in Auditory-only settings. These effects were especially pronounced in individuals with more robust illusory perception. These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex.

Significance Statement

The current study presents evidence that audiovisual integration in spoken language occurs when one modality (vision) acts on representations of a second modality (audition). Using the McGurk illusion, we show that visual context primes phonetic representations at the auditory cortex, altering the auditory percept, as evidenced by changes in the N1 auditory evoked potential. This finding reinforces the theory that audiovisual integration occurs via visual networks influencing phonetic representations in the auditory cortex. We believe it will lead to new hypotheses regarding cross-modal mapping, particularly whether it occurs via direct or indirect routes (e.g., via a multisensory mediator).
Copyright © 2018 the authors 0270-6474/18/381835-15$15.00/0.
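The abstract's central measure is the amplitude of the N1 auditory evoked potential (a negative deflection roughly 100 ms after sound onset), compared across perceptual conditions. Below is a minimal illustrative sketch of that measurement on synthetic single-channel data. It is not the authors' pipeline (per reference 7, their analysis used FieldTrip); the window bounds, sampling grid, and noise level here are assumptions chosen only for demonstration.

```python
import numpy as np

def n1_amplitude(epochs, times, window=(0.08, 0.12)):
    """Mean ERP amplitude in an assumed N1 window (~80-120 ms post-onset).

    epochs : (n_trials, n_samples) single-channel EEG segments, in microvolts
    times  : (n_samples,) times in seconds relative to sound onset
    """
    erp = epochs.mean(axis=0)                       # average trials -> ERP
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()                         # mean amplitude in window

# Synthetic demo: a Gaussian negative deflection peaking near 100 ms
times = np.linspace(-0.1, 0.4, 501)
n1 = -5.0 * np.exp(-((times - 0.1) ** 2) / (2 * 0.015 ** 2))
rng = np.random.default_rng(0)
epochs = n1 + rng.normal(0.0, 2.0, size=(100, times.size))  # 100 noisy trials

amp = n1_amplitude(epochs, times)   # clearly negative, as expected for N1
```

An "enlarged N1" in the study's sense corresponds to a more negative value of this measure in one condition than another (e.g., illusion-ba vs. illusion-fa).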

Keywords:  McGurk illusion; audiovisual integration; auditory evoked potentials; cross-modal perception; speech perception

Year:  2017        PMID: 29263241      PMCID: PMC5815461          DOI: 10.1523/JNEUROSCI.1566-17.2017

Source DB:  PubMed          Journal:  J Neurosci        ISSN: 0270-6474            Impact factor:   6.167


References: 60 in total

1.  Evidence from functional magnetic resonance imaging of crossmodal binding in the human heteromodal cortex.

Authors:  G A Calvert; R Campbell; M J Brammer
Journal:  Curr Biol       Date:  2000-06-01       Impact factor: 10.834

2.  Bimodal speech: early suppressive visual effects in human auditory cortex.

Authors:  Julien Besle; Alexandra Fort; Claude Delpuech; Marie-Hélène Giard
Journal:  Eur J Neurosci       Date:  2004-10       Impact factor: 3.386

3.  Perceptual fusion and stimulus coincidence in the cross-modal integration of speech.

Authors:  Lee M Miller; Mark D'Esposito
Journal:  J Neurosci       Date:  2005-06-22       Impact factor: 6.167

4.  Neural correlates of multisensory integration of ecologically valid audiovisual events.

Authors:  Jeroen J Stekelenburg; Jean Vroomen
Journal:  J Cogn Neurosci       Date:  2007-12       Impact factor: 3.225

5.  Cortical evoked response to acoustic change within a syllable.

Authors:  J M Ostroff; B A Martin; A Boothroyd
Journal:  Ear Hear       Date:  1998-08       Impact factor: 3.570

6.  Detection of audiovisual speech correspondences without visual awareness.

Authors:  Agnès Alsius; Kevin G Munhall
Journal:  Psychol Sci       Date:  2013-03-05

7.  FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data.

Authors:  Robert Oostenveld; Pascal Fries; Eric Maris; Jan-Mathijs Schoffelen
Journal:  Comput Intell Neurosci       Date:  2010-12-23

8.  Eluding the illusion? Schizophrenia, dopamine and the McGurk effect.

Authors:  Thomas P White; Rebekah L Wigton; Dan W Joyce; Tracy Bobin; Christian Ferragamo; Nisha Wasim; Stephen Lisk; Sukhwinder S Shergill
Journal:  Front Hum Neurosci       Date:  2014-08-05       Impact factor: 3.169

9.  Effect of attentional load on audiovisual speech perception: evidence from ERPs.

Authors:  Agnès Alsius; Riikka Möttönen; Mikko E Sams; Salvador Soto-Faraco; Kaisa Tiippana
Journal:  Front Psychol       Date:  2014-07-15

10.  Skilled musicians are not subject to the McGurk effect.

Authors:  Alice M Proverbio; Gemma Massetti; Ezia Rizzi; Alberto Zani
Journal:  Sci Rep       Date:  2016-07-25       Impact factor: 4.379

Cited by: 10 in total

1.  Cross-modal phonetic encoding facilitates the McGurk illusion and phonemic restoration.

Authors:  Noelle T Abbott; Antoine J Shahin
Journal:  J Neurophysiol       Date:  2018-10-10       Impact factor: 2.714

2.  Neural evidence accounting for interindividual variability of the McGurk illusion.

Authors:  Antoine J Shahin
Journal:  Neurosci Lett       Date:  2019-06-07       Impact factor: 3.046

3.  Audition controls the flow of visual time during multisensory perception.

Authors:  Mariel G Gonzales; Kristina C Backer; Yueqi Yan; Lee M Miller; Heather Bortfeld; Antoine J Shahin
Journal:  iScience       Date:  2022-06-26

4.  A Laboratory Study of the McGurk Effect in 324 Monozygotic and Dizygotic Twins.

Authors:  Guo Feng; Bin Zhou; Wen Zhou; Michael S Beauchamp; John F Magnotti
Journal:  Front Neurosci       Date:  2019-10-04       Impact factor: 4.677

5.  The visual speech head start improves perception and reduces superior temporal cortex responses to auditory speech.

Authors:  Patrick J Karas; John F Magnotti; Brian A Metzger; Lin L Zhu; Kristen B Smith; Daniel Yoshor; Michael S Beauchamp
Journal:  Elife       Date:  2019-08-08       Impact factor: 8.140

6.  Responses to Visual Speech in Human Posterior Superior Temporal Gyrus Examined with iEEG Deconvolution.

Authors:  Brian A Metzger; John F Magnotti; Zhengjia Wang; Elizabeth Nesbitt; Patrick J Karas; Daniel Yoshor; Michael S Beauchamp
Journal:  J Neurosci       Date:  2020-07-29       Impact factor: 6.167

7.  Rethinking the Mechanisms Underlying the McGurk Illusion.

Authors:  Mariel G Gonzales; Kristina C Backer; Brenna Mandujano; Antoine J Shahin
Journal:  Front Hum Neurosci       Date:  2021-04-01       Impact factor: 3.473

8.  A structured ICA-based process for removing auditory evoked potentials.

Authors:  Jessica M Ross; Recep A Ozdemir; Shu Jing Lian; Peter J Fried; Eva M Schmitt; Sharon K Inouye; Alvaro Pascual-Leone; Mouhsin M Shafi
Journal:  Sci Rep       Date:  2022-01-26       Impact factor: 4.996

9.  Functional localization of audiovisual speech using near infrared spectroscopy.

Authors:  Iliza M Butera; Eric D Larson; Andrea J DeFreese; Adrian Kc Lee; René H Gifford; Mark T Wallace
Journal:  Brain Topogr       Date:  2022-07-12       Impact factor: 4.275

10.  Electrophysiological Dynamics of Visual Speech Processing and the Role of Orofacial Effectors for Cross-Modal Predictions.

Authors:  Maëva Michon; Gonzalo Boncompte; Vladimir López
Journal:  Front Hum Neurosci       Date:  2020-10-27       Impact factor: 3.169

