Felipe Pegado, Michelle H A Hendriks, Steffie Amelynck, Nicky Daniels, Jessica Bulthé, Haemy Lee Masson, Bart Boets, Hans Op de Beeck.
Abstract
Humans are highly skilled in social reasoning, e.g., inferring the thoughts of others. This mentalizing ability systematically recruits brain regions such as the Temporo-Parietal Junction (TPJ), Precuneus (PC) and medial Prefrontal Cortex (mPFC). Further, posterior mPFC is associated with allocentric mentalizing and conflict monitoring, while anterior mPFC is associated with self-referential (egocentric) processing. Here we extend this work to how we reason not just about what one person thinks but about an abstract, shared social norm. We apply functional magnetic resonance imaging to investigate neural representations while participants judge the social congruency between emotional auditory utterances and visual scenes according to how 'most people' would perceive it. Behaviorally, judging according to a social norm increased the similarity of response patterns among participants. Multivoxel pattern analysis revealed that social congruency information was not represented in visual and auditory areas, but was clearly present in most parts of the mentalizing network: TPJ, PC and posterior (but not anterior) mPFC. Furthermore, interindividual variability in anterior mPFC representations was inversely related to the behavioral ability to adjust to the social norm. Our results suggest that social norm inference is associated with a distributed and partially individually specific representation of social congruency in the mentalizing network.
Year: 2018 PMID: 30154471 PMCID: PMC6113313 DOI: 10.1038/s41598-018-31260-5
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Behavioral responses in the scanner. For both the visual and auditory domains, half of the stimuli had a positive valence and the other half a negative valence. (a) Most frequent binary (congruent vs incongruent) response per audio-visual (A-V) stimulus combination (8 audio × 12 visual = 96) across the six runs, at the group level, thus representing the 'shared social norm' among participants. (b) Percentage of 'incongruent' responses at the group level. (c) Percentage of 'incongruent' responses at the individual level.
Figure 2. Behavioral responses outside the scanner using a fine-grained scale. (a) Same task as in the scanner (allocentric reference). Outside the scanner, subjects performed two runs of the same social-norm perspective task, but instead of binary responses they used a more fine-grained 9-level scale. (b) Control task (egocentric reference). A separate group of subjects performed a control task in which judgements were based on their own perspective of social congruency, again using a 9-level scale (see Methods for details). (c) Comparing egocentric versus allocentric reference judgements. Comparison of the results of the two tasks, such that each of the 45 participants in total is correlated with every other. Colorbar = Spearman's correlation coefficient. Run = one recording block with all audio-visual combinations presented once.
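The cross-participant comparison in Figure 2c amounts to a participants × participants rank-correlation matrix over response patterns. A minimal sketch of such a computation (not the authors' code; the data here are random placeholders, and the shapes assume 45 participants rating the 96 audio-visual combinations on a 9-level scale):

```python
# Illustrative sketch: cross-participant Spearman correlation matrix,
# as in Fig. 2c. Placeholder data; not the authors' analysis code.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_subjects, n_combinations = 45, 96  # assumed: 8 audio x 12 visual stimuli

# Placeholder ratings on the 9-level scale, one pattern per participant.
ratings = rng.integers(1, 10, size=(n_subjects, n_combinations))

# With a 2D input, spearmanr treats columns as variables, so transposing
# (combinations x subjects) yields a subjects x subjects matrix in one call.
corr_matrix, _ = spearmanr(ratings.T)

print(corr_matrix.shape)  # (45, 45)
```

Each off-diagonal cell then corresponds to one colored entry of the matrix shown in the figure panel.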
Figure 3. Similarity of social congruency representations. (a) Similarity of behavioral response patterns. Similarity of behavioral response patterns across runs and subjects (left panel), calculated for each pair of runs (Spearman correlations). Within- and between-subject correlations (right panel, upper): cells in white show within-subject (left) and between-subject (right) correlations; each line delineates the between-subject correlations for one subject. A behavioral index of the ability to infer what most people would answer (a social norm mentalizing 'performance') was given by each individual's agreement with their peers (between-subject correlation), normalized by the participant's internal noise, i.e., the consistency of responses across runs (within-subject correlation): a between/within ratio. (b) ROIs with social congruency information. Two conditions (congruent vs incongruent) were defined in a GLM model of the fMRI data, and 2 × 2 neural similarity matrices were created (inset). None of the sensory areas (EVC = Early Visual Cortex; LOC = Lateral Occipital Complex; EAC = Early Auditory Cortex; TVA = Temporal Voice Area) showed significant social congruency information, whereas three of the mentalizing-network ROIs did (PC = Precuneus, marginally significant; TPJ = Temporo-Parietal Junction; and mPFC = medial Prefrontal Cortex, in its posterior part). Inset (upper left): neural similarity matrix of congruency. P-values (Bonferroni-corrected): #p = 0.057; £p = 0.013; $p = 0.011. (c) Linking neural and behavioral data. To test whether social norm mentalizing performance (the behavioral index) could be explained by differences in social congruency neural data in subparts of mPFC, brain × behavior correlations were performed. Two subjects did not meet the ROI criteria in anterior mPFC (see ROIs section in Methods).
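The between/within ratio described in Figure 3a can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, data shapes, and the choice of averaging mean response patterns for the between-subject term are all assumptions.

```python
# Illustrative sketch of the 'social norm mentalizing performance' index
# from Fig. 3a: per-subject between-subject correlation normalized by the
# subject's own across-run (within-subject) correlation.
import numpy as np
from scipy.stats import spearmanr

def between_within_ratio(runs):
    """runs: array of shape (n_subjects, n_runs, n_items) holding
    congruency responses. Returns one between/within ratio per subject.
    Hypothetical helper; shapes and names are assumptions."""
    n_subjects, n_runs, _ = runs.shape
    means = runs.mean(axis=1)  # per-subject mean response pattern
    ratios = np.empty(n_subjects)
    for s in range(n_subjects):
        # Within: mean correlation over this subject's pairs of runs.
        within = np.mean([spearmanr(runs[s, i], runs[s, j])[0]
                          for i in range(n_runs)
                          for j in range(i + 1, n_runs)])
        # Between: mean correlation of this subject's mean pattern
        # with every other subject's mean pattern.
        between = np.mean([spearmanr(means[s], means[o])[0]
                           for o in range(n_subjects) if o != s])
        ratios[s] = between / within
    return ratios
```

A ratio near 1 would indicate that a participant agrees with the group about as well as their own run-to-run consistency allows; lower values indicate weaker adjustment to the shared norm relative to internal noise.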