Visual phonetic processing localized using speech and nonspeech face gestures in video and point-light displays.

Lynne E Bernstein, Jintao Jiang, Dimitrios Pantazis, Zhong-Lin Lu, Anand Joshi.

Abstract

The talking face affords multiple types of information. To isolate cortical sites with responsibility for integrating linguistically relevant visual speech cues, speech and nonspeech face gestures were presented in natural video and point-light displays during fMRI scanning at 3.0T. Participants with normal hearing viewed the stimuli and also viewed localizers for the fusiform face area (FFA), the lateral occipital complex (LOC), and the visual motion (V5/MT) regions of interest (ROIs). The FFA, the LOC, and V5/MT were significantly less activated for speech relative to nonspeech and control stimuli. Distinct activation of the posterior superior temporal sulcus and the adjacent middle temporal gyrus to speech, independent of media, was obtained in group analyses. Individual analyses showed that speech and nonspeech stimuli were associated with adjacent but different activations, with the speech activations more anterior. We suggest that the speech activation area is the temporal visual speech area (TVSA), and that it can be localized with the combination of stimuli used in this study.
Copyright © 2010 Wiley-Liss, Inc.

Year: 2010        PMID: 20853377        PMCID: PMC3120928        DOI: 10.1002/hbm.21139

Source DB: PubMed        Journal: Hum Brain Mapp        ISSN: 1065-9471        Impact factor: 5.038


References (87 in total)

1.  Brain areas involved in perception of biological motion.

Authors:  E Grossman; M Donnelly; R Price; D Pickens; V Morgan; G Neighbor; R Blake
Journal:  J Cogn Neurosci       Date:  2000-09       Impact factor: 3.225

2. (Review) On the neuronal basis for multisensory convergence: a brief overview.

Authors:  M Alex Meredith
Journal:  Brain Res Cogn Brain Res       Date:  2002-06

3.  Functional anatomy of biological motion perception in posterior temporal cortex: an FMRI study of eye, mouth and hand movements.

Authors:  Kevin A Pelphrey; James P Morris; Charles R Michelich; Truett Allison; Gregory McCarthy
Journal:  Cereb Cortex       Date:  2005-03-02       Impact factor: 5.357

4.  Statistical criteria in FMRI studies of multisensory integration.

Authors:  Michael S Beauchamp
Journal:  Neuroinformatics       Date:  2005

5.  Point-light facial displays enhance comprehension of speech in noise.

Authors:  L D Rosenblum; J A Johnson; H M Saldaña
Journal:  J Speech Hear Res       Date:  1996-12

6.  Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex.

Authors:  R Malach; J B Reppas; R R Benson; K K Kwong; H Jiang; W A Kennedy; P J Ledden; T J Brady; B R Rosen; R B Tootell
Journal:  Proc Natl Acad Sci U S A       Date:  1995-08-29       Impact factor: 11.205

7.  Cross-modal integration and plastic changes revealed by lip movement, random-dot motion and sign languages in the hearing and deaf.

Authors:  Norihiro Sadato; Tomohisa Okada; Manabu Honda; Ken-Ichi Matsuki; Masaki Yoshida; Ken-Ichi Kashikura; Wataru Takei; Tetsuhiro Sato; Takanori Kochiyama; Yoshiharu Yonekura
Journal:  Cereb Cortex       Date:  2004-11-24       Impact factor: 5.357

8.  Similarity structure in visual speech perception and optical phonetic signals.

Authors:  Jintao Jiang; Edward T Auer; Abeer Alwan; Patricia A Keating; Lynne E Bernstein
Journal:  Percept Psychophys       Date:  2007-10

9. (Review) From sensation to cognition.

Authors:  M M Mesulam
Journal:  Brain       Date:  1998-06       Impact factor: 13.501

10.  FMRI responses to video and point-light displays of moving humans and manipulable objects.

Authors:  Michael S Beauchamp; Kathryn E Lee; James V Haxby; Alex Martin
Journal:  J Cogn Neurosci       Date:  2003-10-01       Impact factor: 3.225

Cited by (27 in total)

1.  Dynamic changes in superior temporal sulcus connectivity during perception of noisy audiovisual speech.

Authors:  Audrey R Nath; Michael S Beauchamp
Journal:  J Neurosci       Date:  2011-02-02       Impact factor: 6.167

2.  Speech comprehension aided by multiple modalities: behavioural and neural interactions.

Authors:  Carolyn McGettigan; Andrew Faulkner; Irene Altarelli; Jonas Obleser; Harriet Baverstock; Sophie K Scott
Journal:  Neuropsychologia       Date:  2012-01-17       Impact factor: 3.139

3.  Free viewing of talking faces reveals mouth and eye preferring regions of the human superior temporal sulcus.

Authors:  Johannes Rennig; Michael S Beauchamp
Journal:  Neuroimage       Date:  2018-08-06       Impact factor: 6.556

4.  Neural networks supporting audiovisual integration for speech: A large-scale lesion study.

Authors:  Gregory Hickok; Corianne Rogalsky; William Matchin; Alexandra Basilakos; Julia Cai; Sara Pillay; Michelle Ferrill; Soren Mickelsen; Steven W Anderson; Tracy Love; Jeffrey Binder; Julius Fridriksson
Journal:  Cortex       Date:  2018-04-10       Impact factor: 4.027

5.  A neural basis for interindividual differences in the McGurk effect, a multisensory speech illusion.

Authors:  Audrey R Nath; Michael S Beauchamp
Journal:  Neuroimage       Date:  2011-07-20       Impact factor: 6.556

6.  Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus.

Authors:  Jonathan H Venezia; Kenneth I Vaden; Feng Rong; Dale Maddox; Kourosh Saberi; Gregory Hickok
Journal:  Front Hum Neurosci       Date:  2017-04-07       Impact factor: 3.169

7.  Perception drives production across sensory modalities: A network for sensorimotor integration of visual speech.

Authors:  Jonathan H Venezia; Paul Fillmore; William Matchin; A Lisette Isenberg; Gregory Hickok; Julius Fridriksson
Journal:  Neuroimage       Date:  2015-11-28       Impact factor: 6.556

8.  Comparison of landmark-based and automatic methods for cortical surface registration.

Authors:  Dimitrios Pantazis; Anand Joshi; Jintao Jiang; David W Shattuck; Lynne E Bernstein; Hanna Damasio; Richard M Leahy
Journal:  Neuroimage       Date:  2009-09-28       Impact factor: 6.556

9.  Lip-Reading Enables the Brain to Synthesize Auditory Features of Unknown Silent Speech.

Authors:  Mathieu Bourguignon; Martijn Baart; Efthymia C Kapnoula; Nicola Molinaro
Journal:  J Neurosci       Date:  2019-12-30       Impact factor: 6.167

10.  Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training.

Authors:  Lynne E Bernstein; Edward T Auer; Silvio P Eberhardt; Jintao Jiang
Journal:  Front Neurosci       Date:  2013-03-18       Impact factor: 4.677
