Processing communicative facial and vocal cues in the superior temporal sulcus.

Ben Deen, Rebecca Saxe, Nancy Kanwisher

Abstract

Facial and vocal cues provide critical social information about other humans, including their emotional and attentional states and the content of their speech. Recent work has shown that the face-responsive region of the posterior superior temporal sulcus ("fSTS") also responds strongly to vocal sounds. Here, we investigate the functional role of this region and the broader STS by measuring responses to a range of face movements, vocal sounds, and hand movements using fMRI. We find that the fSTS responds broadly to different types of audio and visual face action, including both richly social communicative actions and minimally social noncommunicative actions, ruling out hypotheses of specialization for processing speech signals, or communicative signals more generally. Strikingly, however, responses to hand movements were very low, whether communicative or not, indicating a specific role in the analysis of face actions (facial and vocal), not a general role in the perception of any human action. Furthermore, spatial patterns of response in this region could discriminate communicative from noncommunicative face actions, both within and across modality (facial/vocal cues), indicating sensitivity to an abstract social dimension. These functional properties of the fSTS contrast with a region of the middle STS that has a selective, largely unimodal auditory response to speech sounds over both communicative and noncommunicative nonspeech vocal sounds, as well as nonvocal sounds. Region-of-interest analyses were corroborated by a data-driven independent component analysis, which identified face-voice and auditory speech responses as dominant sources of voxelwise variance across the STS. These results suggest that the STS contains separate processing streams for the audiovisual analysis of face actions and for auditory speech processing.
Copyright © 2020. Published by Elsevier Inc.

Year:  2020        PMID: 32711066     DOI: 10.1016/j.neuroimage.2020.117191

Source DB:  PubMed          Journal:  Neuroimage        ISSN: 1053-8119            Impact factor:   6.556


Related articles:  3 in total

1.  Learning from other minds: An optimistic critique of reinforcement learning models of social learning.

Authors:  Natalia Vélez; Hyowon Gweon
Journal:  Curr Opin Behav Sci       Date:  2021-03-23

2.  Emotional prosody recognition is impaired in Alzheimer's disease.

Authors:  Jana Amlerova; Jan Laczó; Zuzana Nedelska; Martina Laczó; Martin Vyhnálek; Bing Zhang; Kateřina Sheardova; Francesco Angelucci; Ross Andel; Jakub Hort
Journal:  Alzheimers Res Ther       Date:  2022-04-05       Impact factor: 8.823

3.  (Review) Faces and Voices Processing in Human and Primate Brains: Rhythmic and Multimodal Mechanisms Underlying the Evolution and Development of Speech.

Authors:  Maëva Michon; José Zamorano-Abramson; Francisco Aboitiz
Journal:  Front Psychol       Date:  2022-03-30
