
The time course of auditory-visual processing of speech and body actions: evidence for the simultaneous activation of an extended neural network for semantic processing.

Georg F Meyer, Neil R Harrison, Sophie M Wuerger.

Abstract

An extensive network of cortical areas is involved in multisensory object and action recognition. This network draws on inferior frontal, posterior temporal, and parietal areas; its activity is modulated by familiarity and by the semantic congruency of the auditory and visual component signals, even when semantic incongruences are created by combining visual and auditory signals from very different signal categories, such as speech and whole-body actions. Here we present results from a high-density ERP study designed to examine the time course and source location of responses to semantically congruent and incongruent audiovisual speech and body actions, to explore whether the network involved in action recognition consists of a hierarchy of sequentially activated processing modules or a network of simultaneously active processing sites. We report two main results: 1) There are no significant early differences in the processing of congruent and incongruent audiovisual action sequences. The earliest difference between congruent and incongruent audiovisual stimuli occurs between 240 and 280 ms after stimulus onset in the left temporal region. Between 340 and 420 ms, semantic congruence modulates responses in central and right frontal areas. Late differences (after 460 ms) occur bilaterally in frontal areas. 2) Source localisation (dipole modelling and LORETA) reveals that an extended network encompassing inferior frontal, temporal, parasagittal, and superior parietal sites is simultaneously active between 180 and 420 ms to process auditory-visual action sequences. Early activation (before 120 ms) can be explained by activity mainly in sensory cortices. The simultaneous activation of an extended network between 180 and 420 ms is consistent with models that posit parallel processing of complex action sequences in frontal, temporal, and parietal areas, rather than models that postulate hierarchical processing in a sequence of brain regions.
Copyright © 2013 Elsevier Ltd. All rights reserved.


Keywords:  Biological motion; EEG; Multisensory; Semantic processing; Speech


Year:  2013        PMID: 23727570     DOI: 10.1016/j.neuropsychologia.2013.05.014

Source DB:  PubMed          Journal:  Neuropsychologia        ISSN: 0028-3932            Impact factor:   3.139


  8 in total

1. (Review) Hearing and seeing meaning in speech and gesture: insights from brain and behaviour.

Authors:  Aslı Özyürek
Journal:  Philos Trans R Soc Lond B Biol Sci       Date:  2014-09-19       Impact factor: 6.237

2.  Resolving the time course of visual and auditory object categorization.

Authors:  Polina Iamshchinina; Agnessa Karapetian; Daniel Kaiser; Radoslaw M Cichy
Journal:  J Neurophysiol       Date:  2022-05-18       Impact factor: 2.974

3.  Shared brain lateralization patterns in language and Acheulean stone tool production: a functional transcranial Doppler ultrasound study.

Authors:  Natalie Thaïs Uomini; Georg Friedrich Meyer
Journal:  PLoS One       Date:  2013-08-30       Impact factor: 3.240

4.  Inferring common cognitive mechanisms from brain blood-flow lateralization data: a new methodology for fTCD analysis.

Authors:  Georg F Meyer; Amy Spray; Jo E Fairlie; Natalie T Uomini
Journal:  Front Psychol       Date:  2014-06-16

5.  Content congruency and its interplay with temporal synchrony modulate integration between rhythmic audiovisual streams.

Authors:  Yi-Huang Su
Journal:  Front Integr Neurosci       Date:  2014-12-08

6.  Human infants detect other people's interactions based on complex patterns of kinematic information.

Authors:  Martyna A Galazka; Laëtitia Roché; Pär Nyström; Terje Falck-Ytter
Journal:  PLoS One       Date:  2014-11-19       Impact factor: 3.240

7.  The effect of synesthetic associations between the visual and auditory modalities on the Colavita effect.

Authors:  Jeroen J Stekelenburg; Mirjam Keetels
Journal:  Exp Brain Res       Date:  2015-07-01       Impact factor: 1.972

8.  Cross-modal social attention triggered by biological motion cues.

Authors:  Yiwen Yu; Haoyue Ji; Li Wang; Yi Jiang
Journal:  J Vis       Date:  2020-10-01       Impact factor: 2.240


Beijing Coyote Bioscience Co., Ltd. © 2022-2023.