Looking at the face and seeing the whole body. Neural basis of combined face and body expressions.

Marta Poyo Solanas1, Minye Zhan1, Maarten Vaessen1, Ruud Hortensius1, Tahnée Engelen1, Beatrice de Gelder1,2.   

Abstract

In the natural world, faces are not isolated objects but are rather encountered in the context of the whole body. Previous work has studied the perception of combined faces and bodies using behavioural and electrophysiological measurements, but the neural correlates of emotional face-body perception still remain unexplored. Here, we combined happy and fearful faces and bodies to investigate the influence of body expressions on the neural processing of the face, the effect of emotional ambiguity between the two and the role of the amygdala in this process. Our functional magnetic resonance imaging analyses showed that the activity in motor, prefrontal and visual areas increases when facial expressions are presented together with bodies rather than in isolation, consistent with the notion that seeing body expressions triggers both emotional and action-related processes. In contrast, psychophysiological interaction analyses revealed that amygdala modulatory activity increases after the presentation of isolated faces when compared to combined faces and bodies. Furthermore, a facial expression combined with a congruent body enhanced both cortical activity and amygdala functional connectivity when compared to an incongruent face-body compound. Finally, the results showed that emotional body postures influence the processing of facial expressions, especially when the emotion conveyed by the body implies danger.
© The Author (2017). Published by Oxford University Press.

Keywords:  amygdala; body; emotion; fMRI; face

Year:  2018        PMID: 29092076      PMCID: PMC5793719          DOI: 10.1093/scan/nsx130

Source DB:  PubMed          Journal:  Soc Cogn Affect Neurosci        ISSN: 1749-5016            Impact factor:   3.436


Introduction

Emotional signalling systems are important regulators of social and adaptive behaviour. In this respect, one of the most relevant and studied sources of emotional information is the facial expression (Haxby ; Adolphs, 2002; Posamentier and Abdi, 2003; Fusar-Poli ). However, although faces have been studied in isolation for decades, they are not isolated signals but are most often seen together with other sources of information. Previous studies have shown that the perception of facial expressions is influenced by various other signals, such as body expressions (Meeren ; Van den Stock ; Aviezer ), emotional voices (de Gelder and Vroomen, 2000) and background scenes (Van den Stock and de Gelder, 2012; Van den Stock ). Here, we specifically investigate how our perception of facial expressions is influenced by emotional body postures. Research on the combined perception of faces and bodies has so far relied on behavioural and electroencephalography (EEG) measurements. For example, behavioural studies show that participants are strongly biased by the expression of the body when judging facial expressions, even when stimuli are shown briefly (Meeren ; Van den Stock ; Aviezer ). Using EEG, it was shown that the information conveyed by the face and the body is combined at early stages of emotional recognition (Meeren ). These results demonstrate the importance of emotional body expressions for a naturalistic understanding of the role of facial expressions. To date, however, the neural correlates of the perception of combined faces and bodies remain largely unexplored. The aim of the current study was to elucidate the underlying mechanisms of the processing of emotional face–body compounds, the effect of body expressions on the processing of facial expressions and the effect of emotional ambiguity between the two. 
We also investigated the modulatory role of the amygdala in the perception of these stimuli, since this area is involved in emotional face and body perception (Morris , 1998; Hadjikhani and de Gelder, 2003; de Gelder ) and it is known to be sensitive to ambiguity (de Gelder ; Davis ; Hortensius ). In fact, amygdala damage leads to deficits in both facial (Adolphs ; Terburg ; Hortensius ) and body expression processing (de Gelder ) and alters connectivity with frontal, temporal and motor areas (Boes ; Hortensius ). Regarding its sensitivity to ambiguous signals, de Borst and de Gelder (2016) reported that when two faces or two bodies are presented with different emotional expressions, a deactivation of the amygdala and a reduction of cortical activity are observed in comparison to same-emotion face–body pairs. Based on previous studies of facial and bodily expressions (Meeren ; de Borst and de Gelder, 2016; Hortensius ), we expected to find higher activity in motor and prefrontal areas when a face is seen together with the whole body rather than in isolation, since bodies not only provide emotional information but also elicit motor preparation (de Gelder ). Furthermore, in situations of ambiguity created by expression mismatch between the face and the body, we hypothesized decreased activity in regions of the frontal midline and motor areas. In addition, decreased activity and modulatory influence of the amygdala would be expected for ambiguous compounds as opposed to unambiguous compounds.

Materials and methods

Participants

Eighteen healthy participants (mean age = 24.8 years; age range = 22–31 years; nine females; two left-handed participants, one of them female) took part in the study. All participants had normal or corrected-to-normal vision and a medical history without any psychiatric or neurologic disorders. Participants were informed about the task and the general safety rules of functional magnetic resonance imaging (fMRI) scanning, but remained unaware of the aim of the study. In addition, participants either received credit points or were reimbursed with vouchers after their participation in the scan session. The study was performed in accordance with the Declaration of Helsinki and all procedures followed the regulations of the Ethical Committee at Maastricht University.

Stimuli

Stimuli consisted of combined face and body expressions (i.e. face–body compounds). Fearful and happy faces were chosen from the NimStim Face Stimulus Set (Tottenham ), and fearful and happy bodies were selected from the Bodily Expressive Action Stimulus Test (de Gelder and Van den Stock, 2011). Faces were combined with bodies to produce either congruent or incongruent compounds (Meeren ). A congruent compound combined a face and a body expressing the same emotion (e.g. happy face with a happy body), whereas an incongruent compound combined a face and a body expressing different emotions (e.g. fearful face with a happy body). In addition, faces and bodies were shown separately as control stimuli, with a grey oval replacing the face in the control body stimuli and a grey rectangle replacing the body in the control face stimuli. This was done for both fearful and happy emotions. We used grey ovals and rectangles because headless bodies are not optimal presentations of body information; the overall outline of the whole body, face included, needs to be preserved. In this way, we also maintained the similarity to the compound stimuli and preserved the ‘compound’ effect (i.e. an isolated body without a head, or with a grey circle replacing the face, is not a compound and would therefore not serve as a good control for the face–body compound). Finally, the combination of the grey rectangle and the oval shape was used as a control stimulus. Thus, the experiment comprised nine main conditions and a catch condition (Figure 1). Ten unique stimuli (five males) were created for each condition. All stimuli were presented in greyscale on a grey background.
Fig. 1.

Examples of the stimuli of the main conditions employed in the experiment (catch condition not included). From left to right, top to bottom: congruent happy face–body compound (CH); incongruent face–body compound with happy body (IH); isolated happy body (BH); incongruent face–body compound with fearful body (IF); congruent fear face–body compound (CF); isolated fearful body (BF); isolated happy face (FH); isolated fearful face (FF); face–body compound control (CC).


Experimental design and task

The experiment consisted of one scan session with two functional runs and an anatomical run. The functional runs employed a block paradigm. Each run consisted of 27 stimulation blocks and 6 oddball blocks presented in random order. The stimulation blocks (nine distinct categories, each repeated three times per run) included 10 stimuli of the same condition displayed in random order for 800 ms each, with an inter-stimulus interval of 200 ms. The oddball blocks were similar to the stimulation blocks with the exception that the fixation cross situated on the face changed into a red circle for 800 ms (i.e. the presentation duration of one stimulus within the block). The total duration of a block was 10 s and there was a time interval of 6 s between blocks. In addition to the stimulation and oddball blocks, three rest blocks of 10 s each were displayed at specific time points (after oddball/stimulation block 5, 11 and 22). During these rest blocks, no stimuli were displayed in order to counteract any possible adaptation effect. Stimuli were displayed using E-Prime 2.0 software and back-projected onto a screen (screen resolution = 1920 × 1200; screen width = 40 cm; screen height = 24.5 cm; screen diagonal = 47 cm) situated at the posterior end of the scanner bore. The participants viewed the stimuli through a mirror attached to the head coil (screen–mirror distance = 60 cm; mirror–eye distance = 15 cm approximately; total screen–eye distance = 75 cm approximately). The images were sized to 354 × 431 pixels, and the stimuli spanned 354 × 461 pixels on the screen (visual angles = 2.81 × 3.59 degrees), with a vertical face/oval-body/rectangle ratio of ≈ 1 : 7. Each stimulus presentation was synchronized to a trigger from the scanner, so every new event started synchronously with a new scan volume. Participants performed a passive oddball task (Sutton ). They were instructed to maintain fixation on a black cross located on the face, while ignoring the rest of the body. 
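As a sanity check on the timing and display geometry above, the block duration follows directly from the stimulus parameters, and the reported visual angles are reproduced by atan(size / (2 × viewing distance)) with the stated screen dimensions and the approximate 75 cm eye–screen distance (a minimal sketch, not part of the original analysis):

```python
import math

# Block timing: 10 stimuli, each shown 800 ms with a 200 ms inter-stimulus interval
n_stim, stim_ms, isi_ms = 10, 800, 200
block_s = n_stim * (stim_ms + isi_ms) / 1000  # 10.0 s per block, as stated

# Stimulus size in cm from its pixel span and the screen dimensions
px_w, px_h = 354, 461          # stimulus span on screen (pixels)
w_cm = px_w * 40.0 / 1920      # screen width 40 cm at 1920 px horizontal resolution
h_cm = px_h * 24.5 / 1200      # screen height 24.5 cm at 1200 px vertical resolution

# The reported angles (2.81 x 3.59 deg) match atan(size / (2 * distance))
d_cm = 75.0                    # approximate total screen-eye distance
angle_w = math.degrees(math.atan(w_cm / (2 * d_cm)))
angle_h = math.degrees(math.atan(h_cm / (2 * d_cm)))
print(block_s, round(angle_w, 2), round(angle_h, 2))  # 10.0 2.81 3.59
```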
The rationale behind the selection of this task is similar to that of paradigms measuring cross-modal bias: we wanted to assess the bias from the unattended to the attended stimulus (i.e. how unattended bodies bias the processing of facial expressions), rather than the ‘spontaneous’ merging of the inputs (as in multisensory experiments). This, together with the consecutive and fast presentation of the stimuli, leads us to believe that participants did not have enough time to make a conscious interpretation of the compound stimuli. In addition, participants were asked to pay attention to the change of the fixation cross into a red circle. To avoid contamination of the activation of interest by a motor response, no overt response was required during the experiment. To control for continued attention to the stimuli, participants were asked after the experiment whether they had always noticed the changing shape and colour of the fixation cross. The scanning session started once the task had been explained and the participant had understood the instructions.

fMRI data acquisition

Data were acquired with a 3 T Siemens Magnetom Prisma full-body scanner and a 64-channel head–neck coil (Siemens, Erlangen, Germany) located at the Maastricht Brain Imaging Centre of Maastricht University, the Netherlands. Participants were provided with earplugs to reduce the scanner noise and foam padding was employed to minimize head movement. Functional images of the whole brain were obtained using T2*-weighted 2D echo-planar image sequences [number of slices per volume = 64, 2 mm in-plane isotropic resolution, no gap, repetition time (TR) = 2000 ms, echo time (TE) = 30 ms, flip angle (FA) = 77°, interleaved slice acquisition order, anterior-to-posterior direction of encoding, field of view (FoV) = 200 × 200 mm², matrix size = 100 × 100, multi-band acceleration factor = 2, number of volumes per run = 280, total scan time per run = 9 min 20 s]. A three-dimensional (3D) T1-weighted (MPRAGE) imaging sequence was used to acquire high-resolution structural images for each of the participants (1 mm isotropic resolution, TR = 2300 ms, TE = 2.98 ms, FA = 9°, FoV = 256 × 256 mm², matrix size = 256 × 256, total scan time = 6 min 7 s).
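As a quick consistency check, the stated number of volumes and TR reproduce the reported functional run duration (a trivial sketch, not from the paper):

```python
# 280 volumes at TR = 2 s -> total functional run time
tr_s, n_vols = 2.0, 280
total_s = tr_s * n_vols
mins, secs = divmod(int(total_s), 60)
print(f"{mins} min {secs} s")  # 9 min 20 s
```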

fMRI data pre-processing

BrainVoyager QX (v2.8.4, Brain Innovation B.V., Maastricht, the Netherlands, www.brainvoyager.com) was used for the pre-processing and analysis of the acquired data. No volumes were discarded from the analyses. The pre-processing of the functional data included several steps. Sinc interpolation was employed to correct the time difference in slice acquisition of functional data within one volume. Trilinear/sinc estimation and interpolation were applied to correct for the 3D head motion of the participants with respect to the first volume of each functional run. Furthermore, high-pass temporal filtering was used to exclude low-frequency drifts in the data of two or fewer cycles per time course. For the group analysis, spatial filtering was applied to the acquired images with a Gaussian kernel of a full-width half-maximum of 4 mm. After these steps, functional time series were manually co-registered with the anatomical images and sinc-interpolated to 3D Talairach space (2 mm³ resolution) (Talairach and Tournoux, 1988). Next, all the individual anatomical datasets (in Talairach space) were segmented at the grey–white matter boundary using a semi-automatic procedure based on intensity values. The cortical surfaces were then reconstructed, inflated and mapped onto a standard sphere separately for each hemisphere. To improve the spatial correspondence between participants’ brains beyond Talairach space, the reconstructed cortices were aligned using a dynamic group averaging approach based on individual curvature information reflecting the gyral/sulcal folding pattern. After alignment, a shape-averaged (n = 18) folded cortical mesh was created for both hemispheres, which were then merged to a whole-brain folded mesh. The smoothed cortical functional time series (sampled from 0 to 3 mm into grey matter) were subsequently aligned across participants using the resulting correspondence information. 
All the group analyses were executed and projected on the averaged whole-brain folded mesh. The anatomical labelling of the resulting clusters was performed according to the atlas of Duvernoy (1999) on individual and averaged group whole-brain folded meshes for a more reliable localization.
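The temporal high-pass filtering described above (removing drifts of two or fewer cycles per time course) can be sketched as a GLM-style filter. This is an illustrative numpy implementation of the general idea, not BrainVoyager's actual algorithm:

```python
import numpy as np

def highpass_glm(ts, max_cycles=2):
    """Remove slow drifts of up to `max_cycles` cycles per time course
    by regressing out a constant plus low-frequency sine/cosine pairs."""
    n = len(ts)
    t = np.arange(n)
    regs = [np.ones(n)]
    for c in range(1, max_cycles + 1):
        regs.append(np.sin(2 * np.pi * c * t / n))
        regs.append(np.cos(2 * np.pi * c * t / n))
    X = np.column_stack(regs)
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta  # residual = drift-free time course
```

A drift of one cycle per run lies in the span of the regressors and is removed exactly, while signal at task frequencies (many cycles per run) passes through essentially unchanged.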

General linear model analysis of fMRI data

At the group level, a random-effects general linear model (GLM) analysis was performed. For this purpose, a regression model was generated consisting of the predictors for each of the nine conditions and the one corresponding to the oddball block. The predictor time courses were convolved with a two-gamma hemodynamic response function. Moreover, z-transformed motion predictors were incorporated into the model as nuisance predictors. Eight contrasts were performed to investigate our research questions. In order to examine the emotional congruency effect between faces and bodies, the two emotionally incongruent face–body conditions were compared with the two congruent face–body conditions. In addition, the differences between congruent fear and congruent happiness were explored (CH > CF). The third aim of the study was to elucidate the effect of emotional bodies on the processing of facial expressions. For this purpose, two contrasts compared face–body compounds with similar facial emotions but different body expressions (CF > IH; CH > IF). Also, four contrasts comparing compounds to isolated faces with the same emotional expression were performed (IH > FF; IF > FH; CH > FH; CF > FF). All contrasts were corrected for multiple comparisons on the surface with a cluster-level threshold procedure based on Monte Carlo simulation (5000 iterations, alpha level = 0.05, initial P = 0.05).
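The predictor construction described above (block regressors convolved with a two-gamma hemodynamic response function) can be sketched as follows. The HRF parameters used here (response peaking near 5 s, undershoot near 15 s, 1/6 amplitude ratio) are common defaults assumed for illustration, not values taken from the paper:

```python
import math
import numpy as np

def gamma_pdf(t, shape):
    # Gamma density with unit scale, valid for t >= 0
    return t ** (shape - 1) * np.exp(-t) / math.gamma(shape)

def two_gamma_hrf(tr=2.0, duration=32.0):
    """Double-gamma HRF sampled at the TR: a positive response
    (peak ~5 s) minus a 1/6-amplitude undershoot (peak ~15 s)."""
    t = np.arange(0.0, duration, tr)
    hrf = gamma_pdf(t, 6.0) - gamma_pdf(t, 16.0) / 6.0
    return hrf / hrf.sum()

def block_predictor(onsets_s, block_dur_s, n_vols, tr=2.0):
    """Boxcar (1 during each stimulation block) convolved with the HRF."""
    box = np.zeros(n_vols)
    for on in onsets_s:
        box[int(on // tr): int((on + block_dur_s) // tr)] = 1.0
    return np.convolve(box, two_gamma_hrf(tr))[:n_vols]
```

One such predictor is built per condition (plus the oddball predictor and motion nuisance regressors), and the columns are assembled into the design matrix of the GLM.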

Psychophysiological interaction analyses

The activity (see Supplementary Material) and modulatory role of the amygdala were investigated with respect to this experimental design given the involvement of this structure in the processing of emotional face and body expressions (Morris ; Hadjikhani and de Gelder, 2003; Vuilleumier ; Vuilleumier, 2005; Peelen ; Vuilleumier and Driver, 2007; de Gelder ). For that purpose, the right and left amygdala were defined as regions of interest for all participants in a consistent manner (see Supplementary Material). Functional connectivity between the amygdala and other brain areas was explored with psychophysiological interaction (PPI) analyses. This type of analysis aims to identify which voxels in the brain present functional coupling with a seed region of interest (physiological component; in our study, the amygdala) for a given context or task (psychological component) (Friston ; O’Reilly ). To carry out the PPI analysis, three specific variable columns (predictors) were prepared in the design matrix: (i) the time course of the seed region (physiological component); (ii) the task contrast of interest (psychological component); (iii) a predictor representing the interaction between the task and the time course of the seed region (PPI predictor) (Friston ; O’Reilly ). This interaction term was obtained by the element-by-element product of the (demeaned) seed region time course and the (mean-centred) task time course. The time course of the seed region was obtained by defining the left and right amygdalae for every participant as previously explained in this paper. In addition, the interaction between the amygdala and other areas was investigated by contrasting two conditions or two groups of conditions (A − B), instead of just contrasting one condition to baseline. Therefore, new task predictors were created for the PPI analyses, as only one task predictor can be used to generate the PPI predictor. 
To control for shared variance, an A + B predictor was also included in the model (O’Reilly ). All the predictors of our original GLM that were not involved in generating the PPI predictor were also included in the design matrix to avoid having a collinear model. Therefore, the PPI model included the seed region time course, the contrast of interest (A − B), the PPI predictor, the A + B predictor, the original task predictors and six motion predictors. All the predictors were convolved with the canonical hemodynamic response function. To understand the involvement of the amygdala, the same contrasts performed for the functional analyses were used for the PPI analysis: (i) congruent vs incongruent compounds; (ii) congruent happy vs congruent fear compounds (CH > CF); (iii) congruent fearful vs incongruent compounds with happy bodies (CF > IH); (iv) congruent happy vs incongruent compounds with fearful bodies (CH > IF); (v) incongruent compounds with happy bodies vs isolated fearful faces (IH > FF); (vi) incongruent compounds with fearful bodies vs isolated happy faces (IF > FH); (vii) congruent happy compounds vs isolated happy faces (CH > FH); (viii) congruent fearful compounds vs isolated fearful faces (CF > FF). These PPI analyses were performed for the left and right amygdala separately, at the group level. Correction for multiple comparisons involved a cluster-level threshold procedure based on Monte Carlo simulation (5000 iterations, alpha level = 0.05, initial P = 0.05). It is important to note that with this approach the direction of the information flow between areas cannot be inferred; the analysis indicates only a change in covariation between them for that specific contrast (Friston ; O’Reilly ).
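The design matrix described above can be sketched as follows; the PPI column is the element-by-element product of the demeaned seed time course and the mean-centred (A − B) task regressor, with A + B included to absorb shared variance. This is an illustrative sketch of the construction, not the authors' code:

```python
import numpy as np

def ppi_design(seed_ts, task_a, task_b, nuisance=()):
    """Columns: seed time course (physiological), A - B contrast
    (psychological), their interaction (PPI) and the A + B control."""
    contrast = task_a - task_b
    ppi = (seed_ts - seed_ts.mean()) * (contrast - contrast.mean())
    cols = [seed_ts, contrast, ppi, task_a + task_b, *nuisance]
    return np.column_stack(cols)
```

A significant PPI beta at a voxel indicates that its coupling with the seed differs between conditions A and B; as noted above, it says nothing about the direction of information flow.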

Results

Functional activation

The first goal of this study was to examine the neural correlates of emotional congruency in face–body compounds. For this purpose, the two emotionally congruent compound conditions were compared with the two incongruent compound conditions (CH + CF > IH + IF, see Figure 2A). Congruent as opposed to incongruent compounds enhanced the activity of a large number of areas. These included bilateral superior frontal gyrus (SFG), cingulate cortex, precentral gyrus and sulcus, central sulcus, postcentral gyrus and sulcus and superior parietal gyrus (SPG). In the left hemisphere, increased activity was found in the anterior middle frontal gyrus (MFG) and lateral orbital gyrus, and in the right hemisphere in the MFG, medial orbito-frontal gyrus, intraparietal sulcus (IPS) and the marginal segment of the cingulate sulcus. In contrast, no significant activation was observed for incongruent compounds when compared to congruent compounds. When the two congruent conditions were compared with each other (CH > CF, see Figure 2B), only three clusters in the middle occipital gyrus (MOG), inferior temporal gyrus (ITG) and IPS of the left hemisphere responded more strongly to happy congruent compounds.
Fig. 2.

Results of the group functional activation analyses (cluster size corrected, initial P-value of 0.05). (A) The two congruent face–body compound conditions (CH + CF) are compared with the two incongruent ones (IF + IH); (B) CH > CF: congruent happy compounds vs congruent fearful compounds; (C) CF > IH: congruent fear compounds vs incongruent compounds with happy bodies; (D) CH > IF: congruent happy compounds vs incongruent compounds with fearful bodies; (E) IH > FF: incongruent compounds with happy bodies vs isolated fearful faces; (F) IF > FH: incongruent compounds with fearful bodies vs isolated happy faces; (G) CH > FH: congruent happy compounds vs isolated happy faces; (H) CF > FF: congruent fear compounds vs isolated fearful faces.

Our second goal was to explore the effect of emotional bodies on the processing of facial expressions. To that end, compounds that presented the same emotion in the face but different bodily expression were compared with each other. This yielded two different comparisons: congruent fear compounds vs incongruent compounds with happy bodies (CF > IH) and congruent happy compounds vs incongruent compounds with fearful bodies (CH > IF). Whereas the latter contrast only revealed activation in the left cingulate sulcus, MOG, inferior occipital gyrus (IOG) and ITG (Figure 2D), the former comparison evoked an activation pattern that resembled the one observed in the congruent vs incongruent compound contrast (Figure 2C). Particularly, congruent fear compounds as opposed to incongruent compounds with happy bodies showed significant increase in response in left inferior frontal sulcus (IFS), MFG, lateral orbital gyrus, as well as MFG, IPS and the right marginal segment of cingulate sulcus. Also, an increase in activity was found bilaterally in SFG, superior frontal sulcus (SFS), MFG, cingulate cortex, precentral gyrus and sulcus, central sulcus, postcentral gyrus and sulcus and SPG. 
Four further contrasts looked at the effect of bodily expressions on the processing of the face. In these comparisons, compounds were contrasted with isolated faces that presented matching emotional expressions to the faces in the compounds. The first comparison yielded more activity for incongruent compounds with happy bodies as opposed to fearful faces (IH > FF, see Figure 2E) in bilateral medial orbital gyrus, inferior frontal gyrus (IFG), fusiform gyrus (FG), ITG, IOG, MOG, gyrus descendens, calcarine sulcus, lingual gyrus (LG) and cuneus. In the left hemisphere, significantly increased activity occurred in the medial frontopolar gyrus, precentral gyrus and isthmus, while in the right hemisphere higher activity was observed in gyrus rectus, MFG and superior temporal sulcus (STS). When comparing incongruent compounds with fearful bodies with isolated happy faces (IF > FH, see Figure 2F), enhanced activity was revealed in bilateral ITG, FG, IOG, MOG, gyrus descendens, LG, calcarine sulcus, cuneus, left SFG and left isthmus. The opposite contrast showed two clusters in the left cingulate sulcus and parieto-occipital incisure. The comparison between congruent happy compounds and isolated happy faces (CH > FH, see Figure 2G) significantly activated the left SFG, SFS and the superior part of precentral sulcus. Bilaterally, higher activity was also observed in ITG, FG, MOG, gyrus descendens, calcarine sulcus, cuneus and LG for congruent happy compounds. The last of these contrasts compared congruent fear compounds with isolated fearful faces (CF > FF, see Figure 2H). Although no brain regions showed preference for the fearful faces, significant increase in BOLD response occurred for the congruent fear compounds in bilateral IOG, MOG, ITG, gyrus descendens, LG, calcarine sulcus, cuneus, isthmus and FG. 
Also, some clusters in the left hemisphere were found in the superior and medial frontopolar gyrus, MFG, cingulate gyrus and inferior precentral gyrus and sulcus, whereas in the right hemisphere in STS, middle temporal gyrus (MTG) and angular gyrus.

PPI analyses

Ten PPI analyses were performed with the right and left amygdala as seed regions to investigate their task-dependent interactions with other brain areas. In the first PPI analysis, the comparison between congruent and incongruent face–body compounds revealed increased coupling between the left medial part of the SFG, the cingulate sulcus and the right amygdala for congruent stimuli as opposed to incongruent ones (Figure 3A). For the same PPI analysis, no significant functional interaction was found between the left amygdala and other brain areas. When contrasting the two congruent compound conditions with each other (CH > CF, see Figure 3B), task-dependent coupling was observed between the right amygdala and bilateral FG, LG, posterior cingulate gyrus, right SFG, IOG and left anterior superior temporal gyrus (STG). All these areas displayed more correlated activity with the seed region for congruent fear than for congruent happy compounds. For the same comparison, the left amygdala showed increased functional interaction with the right SFG, also for congruent fear compounds as opposed to happy compounds.
Fig. 3.

Results of the group PPI analyses for both the right and left amygdalae (cluster size corrected, initial P-value of 0.05). (A) The two congruent face–body compound conditions (CH + CF) are compared with the two incongruent ones (IF + IH); (B) CH > CF: congruent happy compounds vs congruent fearful compounds; (C) CF > IH: congruent fear compounds vs incongruent compounds with happy bodies; (D) CH > IF: congruent happy compounds vs incongruent compounds with fearful bodies; (E) IH > FF: incongruent compounds with happy bodies vs isolated fearful faces; (F) IF > FH: incongruent compounds with fearful bodies vs isolated happy faces; (G) CH > FH: congruent happy compounds vs isolated happy faces; (H) CF > FF: congruent fear compounds vs isolated fearful faces.

The third PPI analysis yielded higher functional connectivity between the left amygdala and left insula for congruent fear compounds as opposed to incongruent compounds with happy bodies (CF > IH, see Figure 3C). For the opposite contrast, left STS and right anterior cingulate cortex (ACC), ventromedial prefrontal cortex (vmPFC) and medial orbito-frontal cortex (mOFC) showed more correlated activity with left amygdala. The same comparison for the right amygdala revealed higher task-dependent connectivity with middle temporal sulcus (MTS) and ITG for congruent fear compounds, whereas increased coupling for incongruent compounds with happy bodies was found with vmPFC. When comparing congruent happy compounds with incongruent compounds with fearful bodies (CH > IF, see Figure 3D), greater functional interaction for the incongruent compounds was observed between the left amygdala and right SFG, insula, STS, MTG and the superior part of the postcentral sulcus. For the same contrast, right amygdala also showed stronger correlated activity for the incongruent compounds with fearful bodies with the left posterior cingulate gyrus. 
The next PPI analysis examined the functional connectivity of the amygdala when incongruent compounds with happy bodies were compared with isolated fearful faces (IH > FF, see Figure 3E). Increased connectivity was found between left amygdala and left IPS and STG, and between right amygdala and ITG, MTS, calcarine sulcus and parieto-occipital incisure for the isolated fearful faces. The comparison between incongruent compounds with fearful bodies and isolated happy faces (IF > FH, see Figure 3F) revealed higher coupling for the isolated face condition between right amygdala and left subgenual cingulate, left paracentral lobule, right temporal pole, right gyrus rectus and right medial orbital gyrus. This specific increase in correlated activity was also observed between left amygdala and left cuneus and right STS, whereas the coupling between left amygdala and SFG increased for isolated happy faces. The PPI analysis contrasting congruent happy compounds with isolated happy faces (CH > FH, see Figure 3G) yielded higher functional interaction between the left amygdala and left medial orbital gyrus, right SFG, precentral gyrus, central sulcus, postcentral sulcus and anterior cingulate sulcus. Right amygdala also displayed increased functional connectivity for congruent happy compounds as opposed to isolated faces with left MTG, paracentral lobule, posterior cingulate sulcus, right anterior cingulate sulcus and gyrus. When comparing congruent fear compounds with isolated fearful faces (CF > FF, see Figure 3H), no significant task-dependent connectivity was found between right amygdala and other areas. However, left amygdala presented higher functional coupling with bilateral IPS, left IOG and gyrus descendens, right angular gyrus, gyrus rectus, vmPFC and mOFC for the isolated fearful face condition. Congruent fear compounds only elicited higher functional interaction between left amygdala and anterior cingulate gyrus.

Discussion

We examined the neural correlates of perceiving emotional face–body compounds and the involvement of the amygdala in this process. Functional activation analyses revealed that motor, frontal and visual areas increased their activity when faces were presented together with bodies rather than in isolation. In contrast, functional coupling of the amygdala with other areas was enhanced for isolated faces compared with face–body compounds. We also found that an emotional face combined with a congruent body posture enhanced cortical activity and the modulatory activity of the amygdala when compared with an incongruent face–body compound.

Effect of congruency in face–body compounds and involvement of the amygdala

In line with our hypothesis, higher activity was observed in motor areas in response to emotionally congruent face–body compounds as opposed to incongruent ones (Figure 2A). Other brain areas also showed higher activity for unambiguous compounds, including regions of the primary and associative somatosensory cortex, ACC, superior parietal lobule (SPL), dorsolateral prefrontal cortex (DLPFC) and OFC. Consistent with previous work (Rudrauf; de Borst and de Gelder, 2016; Hortensius et al., 2016), no brain areas exhibited higher activity for ambiguous face–body compounds. One possible explanation, previously suggested in the literature (Rudrauf; Janak and Tye, 2015; de Borst and de Gelder, 2016), is that conflicting information leads to concurrent reciprocal inhibition of subcortical structures such as the amygdala, thereby reducing their modulatory influence over cortical areas. Unambiguous emotional information, by contrast, would not elicit this mutual inhibition, allowing an enhancement of brain activity. This was borne out not only by the functional analysis but also by the PPI analysis, since increased functional coupling was found between the right amygdala and the left dorsal ACC and medial part of SFG for congruent as opposed to incongruent compounds (Figure 3A). However, given the intrinsic properties of fMRI data, no definite conclusions can be drawn about the excitatory or inhibitory nature of these processes or their directionality. Additionally, we explored the effect of each emotion category on the processing of congruent face–body compounds (CH > CF, Figure 2B). In line with the existing literature, a higher response was found for congruent happy compounds than for congruent fearful compounds in primary and associative visual cortices (de Borst and de Gelder, 2016).
This may be because the arms were more extended across the visual field in happy bodies than in fearful bodies, activating arm-selective areas more strongly (Taylor et al., 2007; de Borst and de Gelder, 2016). Interestingly, no effect on motor structures was found for fearful compounds in the functional analysis, even though the planning and execution of an adaptive action may be required (de Gelder), but a clear effect was observed in the PPI analysis (CH > CF, Figure 3B). Fearful compounds, as opposed to happy ones, increased the functional coupling between the right amygdala and the pre-supplementary motor area (preSMA), posterior cingulate cortex (PCC), anterior STG and visual areas. Likewise, the left amygdala showed stronger functional connectivity with preSMA and PCC. This suggests that, in the presence of a clear threat signal, interaction between the amygdala and these areas might support a correct interpretation of the social and emotional cues conveyed by face–body compounds (Wang) in order to sustain an adaptive pattern of behaviour (de Gelder, 2006; Grèzes; Wang). These findings, together with previous work, indicate that when faces and bodies are presented together, an initial evaluation of their emotional congruency takes place (Meeren), which determines whether this information is further processed and a behavioural response is produced (Rudrauf; de Borst and de Gelder, 2016). For congruent face–body compounds, there is no rivalry between the emotional information of the face and that of the body, driving the activity of both the amygdala and cortical structures more strongly. Specifically, our results suggest that somatosensory, prefrontal, motor and superior parietal areas, together with the amygdala, are essential in the processing of unambiguous face–body compounds.
For instance, SPL might contribute to a correct body representation by integrating information about the current body state with other sensory input, such as visual information (Wolpert et al., 1998; Haggard; Peelen and Downing, 2007). The ACC and OFC might also play a role in giving behavioural meaning to the perceived emotional face–body compounds, since these areas are known to be involved in emotional behaviour and in monitoring the internal emotional state (Devinsky; Rolls, 2004; Beer). These regions, together with the DLPFC, the dorsal part of ACC and the amygdala, could support the selection of an adequate response to the given situation, so that an appropriate motor plan can finally be prepared in the motor cortex (Devinsky; Adolphs, 2002; Fusar-Poli). In the case of a clear potential threat (i.e. a congruent fear compound), the amygdala has an essential role in orchestrating an appropriate response, as shown by its increased functional connectivity with somatosensory, visual, emotional and executive structures after the presentation of congruent fearful compounds.
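The congruency comparisons discussed in this subsection (e.g. CH + CF vs IH + IF, Figure 2A) follow standard GLM logic: a zero-sum weight vector is applied to the per-condition effect estimates, so the contrast tests a difference between conditions rather than overall activation. A minimal numpy sketch, where the beta values are made up for illustration and are not the study's data:

```python
import numpy as np

# Per-condition effect estimates (betas) for one voxel, in the order used
# by the paper's condition labels; the numbers are purely illustrative.
conditions = ["CH", "CF", "IH", "IF"]
betas = np.array([1.2, 1.0, 0.7, 0.6])

# Congruency contrast (CH + CF) > (IH + IF): the weights sum to zero.
weights = np.array([0.5, 0.5, -0.5, -0.5])
effect = float(weights @ betas)  # 0.5*(1.2+1.0) - 0.5*(0.7+0.6) = 0.45
```

Directional contrasts such as CH > CF or CF > IH are built the same way, e.g. with weights `[1, -1, 0, 0]`.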

Effect of emotional bodies on the processing of facial expressions and involvement of the amygdala

Besides investigating the effect of emotional congruency between combined faces and bodies, this study aimed to elucidate the effect of emotional body postures on the processing of facial expressions. The functional activation analysis revealed that motor, prefrontal and visual areas responded more strongly when faces were presented together with bodies rather than in isolation. This finding is in line with existing literature indicating that bodies provide not only emotional and social information but also emotion-related action intentions (de Gelder; Pichon; Goldberg). The observation of action cues may engage regions of the motor cortex responsible for elaborating a motor plan in response to that information, while the concurrent activation of prefrontal areas could serve the regulation, control and/or inhibition of that motor plan so that the response is appropriate to the situation. Of special interest were the results obtained for the conditions containing fearful bodies. We found increased responses in posterior STS, inferior parietal lobule, DLPFC and ventrolateral prefrontal cortex for fearful bodies, whereas isolated happy bodies only activated visual areas significantly more than fearful bodies (BH > BF, Supplementary Figure S1B). This effect of fear was not observed for isolated faces, since no specific activations were found for fearful faces when compared with happy faces (FH > FF, Supplementary Figure S1C). Moreover, the face–body compounds containing fearful bodies yielded interesting findings. Adding a fearful body to a fearful face increased the activity of motor and executive regions (CF > FF, Figure 2H). A similar result was observed when comparing two compounds that both had fearful faces but differed in the body: one presented a congruent fearful body, the other an incongruent happy body (CF > IH, Figure 2C).
Even when both faces were fearful, only the combination with a fearful body recruited motor and prefrontal areas significantly more than the combination with a happy body. More interestingly, the addition of a fearful body to a happy face also triggered activity in the preSMA and DLPFC (IF > FH, Figure 2F). Therefore, the presence of fearful bodies can elicit the preparation of motor sequences (de Gelder; Grèzes), even when outside the focus of attention and in emotional incongruence with the face. This suggests that the processing of the face can be biased towards the expression conveyed by the body, supporting previous behavioural findings (Meeren; Van den Stock and de Gelder, 2014). Regarding the PPI analyses, the amygdala showed higher modulation in the processing of isolated faces than of compounds (Figure 3E–H). This finding is in line with previous literature supporting the key role of the amygdala in face processing (Morris et al., 1998; Vuilleumier; Vuilleumier, 2005; Peelen and Downing, 2007; Vuilleumier and Driver, 2007). However, the comparison between compounds with matching facial expressions but different bodily emotions (Figure 3C and D) revealed a functional connectivity pattern between the amygdala and other brain regions that cannot have been based solely on the emotion of the face, since the facial expression was the same in the compared compounds. In these comparisons, regardless of the emotion conveyed by the face, fearful bodies produced more correlated activity between the amygdala and areas related to the monitoring and representation of bodily states (insula) (Critchley, 2005; Karnath) and of scenes (SPL) (Haggard; Peelen and Downing, 2007), to motor preparation (preSMA), and to emotional and social processing (anterior temporal lobe and PCC) (Wang).
Thus, although previous literature has mainly focused on the role of amygdala in face processing, this structure is also involved in the signalling of other behaviourally relevant cues (Vuilleumier and Driver, 2007), such as the encounter with a fearful body (Peelen and Downing, 2007).

Conclusions

Our functional activation analysis showed that motor, prefrontal and visual areas respond more strongly when faces are presented together with bodies rather than in isolation, supporting the notion that body postures provide not only emotional and social information but also emotion-related action intentions. Furthermore, our results revealed that emotional body postures influence the processing of facial expressions, even when outside the focus of attention. For instance, an emotional face combined with a congruent body enhances the activity of cortical areas and the modulatory activity of the amygdala when compared to an incongruent face–body compound. Specifically, somatosensory, prefrontal and premotor areas seem to play a role in the processing of unambiguous face–body compounds, monitoring interoceptive signals and emotional processes to support an adequate motor response. In addition, the interaction of the amygdala with prefrontal, temporal, motor and visual areas might be important for the correct assessment of the emotional content and level of congruency, and for the preparation of behavioural action. In the case of an incongruent face–body compound, body postures can also bias the processing of the face: fearful bodies increased the activity of motor areas even when the emotion in the face was happy.

Funding

This work was supported by the European Research Council (ERC) under the European Union’s Seventh Framework Programme for Research 2007–13 (ERC Grant agreement number 295673).

Supplementary data

Supplementary data are available at SCAN online.

Conflict of interest

None declared.
References (showing 10 of 48)

1. Vuilleumier, P. (2005). How brains beware: neural mechanisms of emotional attention. Trends Cogn Sci.

2. de Borst, A. W., de Gelder, B. (2016). Clear signals or mixed messages: inter-individual emotion congruency modulates brain activity underlying affective body perception. Soc Cogn Affect Neurosci.

3. Hortensius, R., Terburg, D., Morgan, B., Stein, D. J., van Honk, J., de Gelder, B. (2016). The role of the basolateral amygdala in the perception of faces in natural contexts. Philos Trans R Soc Lond B Biol Sci.

4. Wolpert, D. M., Goodbody, S. J., Husain, M. (1998). Maintaining internal representations: the role of the human superior parietal lobe. Nat Neurosci.

5. Morris, J. S., Friston, K. J., Büchel, C., Frith, C. D., Young, A. W., Calder, A. J., Dolan, R. J. (1998). A neuromodulatory role for the human amygdala in processing emotional facial expressions. Brain.

6. Janak, P. H., Tye, K. M. (2015). From circuits to behaviour in the amygdala. Nature.

7. Aviezer, H., Trope, Y., Todorov, A. (2012). Holistic person processing: faces with bodies tell the whole story. J Pers Soc Psychol.

8. O'Reilly, J. X., Woolrich, M. W., Behrens, T. E. J., Smith, S. M., Johansen-Berg, H. (2012). Tools of the trade: psychophysiological interactions and functional connectivity. Soc Cogn Affect Neurosci.

9. Taylor, J. C., Wiggett, A. J., Downing, P. E. (2007). Functional MRI analysis of body and body part representations in the extrastriate and fusiform body areas. J Neurophysiol.

10. Peelen, M. V., Downing, P. E. (2007). The neural basis of visual body perception. Nat Rev Neurosci.
