Responsibility modulates pain-matrix activation elicited by the expressions of others in pain.

Fang Cui1, Abdel-Rahman Abdelgabar2, Christian Keysers3, Valeria Gazzola4.   

Abstract

Here we examine whether brain responses to dynamic facial expressions of pain are influenced by our responsibility for the observed pain. Participants played a flanker task with a confederate. Whenever either erred, the confederate was seen to receive a noxious shock. Using functional magnetic resonance imaging, we found that regions of the functionally localized pain-matrix of the participants (the anterior insula in particular) were activated most strongly when seeing the confederate receive a noxious shock after only the participant had erred (and hence had full responsibility). When both had erred, or only the confederate had (i.e. the participant had shared or no responsibility), significantly weaker vicarious pain-matrix activations were measured.
Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

Keywords:  Empathy; Pain; Responsibility; fMRI

Year:  2015        PMID: 25800210      PMCID: PMC4461309          DOI: 10.1016/j.neuroimage.2015.03.034

Source DB:  PubMed          Journal:  Neuroimage        ISSN: 1053-8119            Impact factor:   6.556


Introduction

Perceiving the facial expressions of pain in others has important social functions. In particular, perceiving the pain of others motivates and regulates helping behavior (Craig et al., 2001; Williams, 2002). Over the past decade, our understanding of the neural basis of this perception has been refined by a number of experiments that have exposed participants to the facial expressions of pain of others. After reviewing this evidence, we will show that a common feature of these experiments has been to show expressions of pain that were not caused by the participant him or herself. Accordingly, we will argue that an important aspect of the neural basis of pain perception has been left unexplored: how this neural activation is modulated by the degree to which the observer caused the witnessed pain.

There is a long tradition of studying the neural basis of the visual processing of facial expressions in general. The observation of facial expressions triggers activity in early visual cortex, in the human occipital face area (OFA) and in the middle temporal gyrus along the superior temporal sulcus (STS) (see Said et al., 2011 for a review). Additionally, facial expressions activate the frontal operculum, supplementary motor area (SMA) and somatosensory cortices that are also activated when participants produce facial expressions. The vicarious activation of these sensorimotor brain regions when viewing others' facial expressions has thus been interpreted as representing an internal simulation of the sensorimotor neural activity associated with producing the observed facial expressions (Bastiaansen et al., 2009; Said et al., 2011; van der Gaag et al., 2007). When witnessing facial expressions of pain, participants have been shown to additionally activate regions of the anterior insula (AI), anterior cingulate cortex (ACC) and the amygdala (Botvinick et al., 2005; Saarela et al., 2007; Simon et al., 2006).
Because the AI, ACC and amygdala are part of the pain-matrix — the set of brain regions that are activated when the participants themselves are exposed to noxious stimuli on their body (Garcia-Larrea and Peyron, 2013; Melzack and Wall, 1965; Mouraux et al., 2011) — and because their level of activation during the experience of pain correlates with the unpleasantness of experienced pain (Rainville, 2002), many interpret their vicarious activation while witnessing the pain of others as the neural correlate of empathy — feeling vicariously what we see someone else experience (Corradi-Dell'Acqua et al., 2011; Jackson et al., 2006; Koban et al., 2013; Lamm et al., 2011; Singer et al., 2004). That AI, ACC and amygdala are also vicariously activated when pain is perceived through non-facial cues (Corradi-Dell'Acqua et al., 2011; Jackson et al., 2006; Koban et al., 2013; Lamm et al., 2011; Meffert et al., 2013; Singer et al., 2004) supports the notion that these activations have less to do with the facial expressions as a motor act and more with pain as a perceived emotion. That their activation is stronger in more empathic individuals (Singer et al., 2004) and weaker in psychopaths (Meffert et al., 2013) further supports their role in empathy. Interestingly, the magnitude of vicarious activations in the AI also predicts helping behavior (Hein et al., 2010), providing evidence that vicarious activations could have behavioral significance by motivating the witness to help another. It should be noted, however, that experiencing negative emotions other than pain also activates regions such as the amygdala, AI and ACC. For instance, the experience of disgust recruits all of these regions (Wicker et al., 2003), and so does the emotion of guilt (Jankowski and Takahashi, 2014). Accordingly, vicarious activations in these regions cannot unambiguously be interpreted as representing vicarious pain, but could involve a mixture of emotions, such as concern or distress.
Further caution in interpreting activity in these regions as evidence that the witness experiences pain in the strict physical sense is warranted by the fact that the brain regions that track physical pain intensity most accurately during first-hand pain experience (Wager et al., 2013) — although they include some of the regions involved during vicarious pain (e.g. ACC) — also include numerous brain regions not typically involved during vicarious pain (e.g. mid-insula, SII, thalamus and cerebellum). Indeed, even during the experience of physical pain, not all activity in the so-called pain-matrix may be linked to a specific feeling of pain; it may rather represent a mix of processes related to salient negative events (Legrain et al., 2011). Some factors influencing the intensity of vicarious activations have received much interest (de Vignemont and Singer, 2006): vicarious activations are stronger when attention is directed to the pain (Gu and Han, 2007), when stimuli are more realistic (Gu and Han, 2007), when the observer is socially closer to the pain-taker (Cheng et al., 2010), belongs to the same group or race (Avenanti et al., 2010; Azevedo et al., 2013; Hein et al., 2010; Xu et al., 2009) or considers the pain-taker fair (Singer et al., 2006). With the exception of the work of Koban and colleagues (Koban et al., 2013), who showed that the AI and dorsolateral prefrontal cortex differentiate between noxious and innoxious stimulation of another individual when the participant caused the pain by erring, but not when the other individual had caused his own pain, brain activity following the perception of other people's pain has so far mostly been studied in situations in which the participant witnesses pain he did not cause. Whether responsibility for the observed pain boosts the way the brain reacts to facial expressions of pain thus remains largely unknown.
Based on the personal experience that witnessing pain we caused is more distressing than witnessing pain we did not cause, we hypothesized that increasing levels of responsibility for observed pain should boost brain activity, particularly in regions also involved in the first-hand experience of pain (AI, ACC and amygdala). In addition, based on the diffusion of responsibility literature (Darley and Latane, 1968), we hypothesized that if the cause of the pain is shared amongst agents, vicarious activations should be reduced compared to cases in which the witness was the sole cause of the pain. To test these hypotheses, a participant and author FC performed a difficult flanker task simultaneously, and if either or both made a mistake, FC was administered a noxious shock. In some trials the participant was thus fully responsible for causing FC's pain, in some the participant and FC shared responsibility, and in others, only FC was responsible for her own pain. We then measured, using fMRI, how brain activity in the participant's pain-matrix varied while witnessing the pain of FC, via a (supposedly live) video-feed of her facial expressions, as a function of responsibility.

Materials and methods

Participants

Thirty-three volunteers participated in this study, but three were excluded from the analyses: one because s/he felt claustrophobic and the other two because they said they did not believe that the person in the movie received electroshocks in real time. It is known that responses to other people's pain can change as a function of gender (Singer et al., 2006) and ethnicity (Azevedo et al., 2013; Xu et al., 2009). To ensure that our findings are not limited to a specific gender or ethnicity, we recruited our 30 final participants so that gender and ethnicity were balanced (15 Chinese, of which 7 male, and 15 Caucasians, of which 7 male). The age of our participants was 24.8 ± 4.37 years (mean ± s.d.). All participants were healthy, right-handed, had no history of neurological or psychiatric disorders, and provided written informed consent. This study was approved by the Ethics Committee of the University of Amsterdam, the Netherlands.

General experimental setup

A confederate design was used in this study. At the beginning of the experiment, experimenter AR introduced the participant to author FC (a Chinese female), who was described as another participant. The participant and FC then drew lots that were manipulated so that the participant was always assigned to the fMRI scanning. In the scanner, each trial (see Fig. 1a for a graphical illustration) started by displaying a central target letter flanked by distractors, and the participant had to press the button corresponding to the central letter. This first epoch is called the Flanker-epoch. The participant was led to believe that FC, simultaneously but in another room, saw the same display and performed the same task. Directly after the Flanker-epoch, the participant and FC were informed about the performance of both players (Feedback-epoch). If both performed the task correctly, the participant believed FC would receive a weak, innoxious electroshock on her right hand (NoPain condition). If either or both erred (i.e. only the participant, both, or only FC), a stronger, noxious shock would be delivered to FC (FullResp, SharedResp or NoResp conditions, respectively, see Table 1). The participant was further led to believe that he/she would then, after a random blank interval, see in real time (through a CCTV) FC receive the electroshock, be it noxious or innoxious, depending on their joint performance (Video-epoch). A 5 to 8 s blank screen separated consecutive trials (Fig. 1a). In reality, FC was not performing the task or receiving shocks during the experiment. Instead, a computer adjusted the presentation time of the flanker task, and hence its difficulty, and simulated correct or incorrect performance of FC to ensure a minimum number of trials for each condition (Table 1, last column). During the Video-epoch, pre-recorded videos of FC receiving electroshocks were shown, ensuring that all participants viewed the same movies.
FC only received shocks during movie recording. A total of 140 trials were presented, split into 4 runs of 35 trials each. As mentioned above, because the number of trials for each condition depended on the participant's performance in the flanker task, the number of trials differed slightly across conditions and participants; the percentage of trials presented on average for each condition is indicated in Table 1. It is important to note that a GLM comparing two conditions is valid even if the two conditions have different numbers of trials, and our main contrast (FullResp–SharedResp) includes conditions with similar numbers of trials.
Fig. 1

(a–b) Experimental task. Trial structure for (a) the responsibility task, with a screenshot taken from one of the painful videos (see also Movies 1 and 2); and (b) the Pain-localizer session. (c–g) Whole brain results. (c) Whole brain effects of responsibility on the processing of the pain of others. Purple: VideoFullResp–VideoNoResp. Green: VideoFullResp–VideoSharedResp. Yellow: overlap between the other two colors shown in the same render. (d) Axial cuts at the indicated z coordinates for the VideoFullResp–VideoSharedResp contrast. Colors go from dark red for t = 3.13 to white for t > 5. (e) Overlap between the effects of responsibility and the pain localizer. Red (+ yellow): Pain-localizer. Green (+ yellow): VideoFullResp–VideoSharedResp. Yellow: overlap. (f) Interaction between responsibility and the Feedback- and Video-epochs: (VideoFullResp–VideoSharedResp)–(FeedbackFullResp–FeedbackSharedResp). Thresholds and color code as for (d). (g) Effect of viewing painful facial expressions independently of responsibility and its overlap with the pain localizer. Blue (+ yellow): (VideoFullResp + VideoSharedResp + VideoNoResp)–3 × VideoNoPain. Red (+ yellow): Pain-localizer. Yellow: overlap. Colors go from t = 3.13 to t = 8. All images shown in c to g were thresholded at p < 0.001, k > 10 and survived q < 0.05. (h) ROI results. Signal extracted, for the indicated conditions against baseline, from the 11 clusters (shown in 1e and listed in Table 2) resulting from the contrast VideoFullResp–VideoSharedResp masked with the Pain-localizer.

Table 1

Experimental design and conditions. From left to right: participant's and author FC's flanker task performance; condition name from participant's responsibility point of view; type of electrical stimulation given to FC during video-recording for each condition; average number of trials for each condition included in the analysis expressed in % and in absolute number. Note that we had at least 17 repetitions of each of the four conditions in all participants.

Participant's performance | FC's performance | Condition name | Electrical stimulation | % of trials (average ± s.e.m.) | Average number of trials
Correct                   | Correct          | NoPain         | Innoxious              | 32.4 ± 0.82                    | 45
Incorrect                 | Correct          | FullResp       | Noxious                | 18.6 ± 0.71                    | 26
Incorrect                 | Incorrect        | SharedResp     | Noxious                | 17.8 ± 0.69                    | 25
Correct                   | Incorrect        | NoResp         | Noxious                | 31.2 ± 0.71                    | 44
To localize regions involved in the painfulness of the participant's own nociceptive experiences, after the main experiment, the participant went through another fMRI scanning session (Pain-localizer) in which noxious and innoxious electroshocks were delivered to the participant's right hand in the scanner (see Fig. 1b and Pain-localizer procedure paragraph). An anatomical scan was finally acquired. After all the scanning, a debriefing was given and the participant was asked to indicate how much he/she believed that FC was actually being shocked based on their joint performance during the experiment.

Flanker task in detail

During the Flanker-epoch, one of the following five-letter strings appeared on the screen at random: "HHHHH", "LLLLL", "HHLHH" or "LLHLL". To achieve a similar task difficulty across participants, the duration of each string was initially set at 150 ms and was then adjusted, within a minimum of 100 ms and a maximum of 200 ms, based on the participant's previous performance: if the participant gave two consecutive correct responses, the presentation time of the next string was shortened by 10 ms; if the participant gave two consecutive incorrect responses, it was prolonged by 10 ms. The participant and author FC were instructed to simultaneously respond to the letter in the middle (H or L). To give the response, the participant had to press one of two pre-assigned buttons on an MRI-compatible button-box. The fake setup for FC required her to press the "H" and "L" buttons on a keyboard. As mentioned above, in reality only the participant was performing the task, and he/she had 1.5 s to give a response. If no button press was recorded within this duration, the participant's performance in this trial was considered incorrect, and indicated as such during the Feedback-epoch.
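
The adaptive timing rule above can be sketched as follows. This is an illustrative reconstruction, not the actual presentation code; the function and variable names are ours:

```python
def update_duration(duration_ms, last_two_correct):
    """Adjust the flanker string's presentation time from the last two
    responses: two consecutive hits shorten it by 10 ms, two consecutive
    errors lengthen it by 10 ms, and the result is clamped to the
    100-200 ms range stated above."""
    if all(last_two_correct):
        duration_ms -= 10
    elif not any(last_two_correct):
        duration_ms += 10
    return max(100, min(200, duration_ms))

# Starting from the initial 150 ms:
d = update_duration(150, [True, True])    # two hits -> 140 ms
d = update_duration(d, [False, False])    # two errors -> back to 150 ms
```

This kind of one-up/one-down staircase keeps error rates roughly constant across participants, which is what guarantees enough trials in each responsibility condition.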

Video preparation

The videos used in this study showed author FC seated at a table. Her face and upper body were clearly visible, and two electrodes, used to deliver the electroshock, were attached and visible on the back of her right hand, which was resting in front of her on the table (see Fig. 1a for an example screenshot). During the video recording, we first tested FC's pain threshold (see section "Electrical stimulation and pain threshold" for details). Afterwards, 70 unique video-clips were recorded while FC received the noxious or innoxious electroshocks (35 videos each). Each video was cut to last 1.5 s and started with 0.3 s in which FC was sitting still with a neutral face, followed by 0.5 s of electroshock to trigger FC's natural facial expression. All the settings, including the background and FC's appearance, were kept unchanged relative to the recording day during all the experimental days. For each of the 4 runs, at the end of each trial a movie was randomly picked (without replacement) from the 35 movies of the appropriate category (painful/painless), so that no movie would be seen twice per run.
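
The random draw without replacement can be sketched as follows; this is a hypothetical reconstruction (the presentation software's code is not published), shuffling each category's 35 clips once per run and popping them one at a time:

```python
import random

def make_video_queues(n_videos=35, rng=None):
    """Shuffle the clip indices of each category once per run, so that
    draws are without replacement and no clip repeats within a run."""
    rng = rng or random.Random()
    queues = {}
    for category in ("painful", "painless"):
        order = list(range(n_videos))
        rng.shuffle(order)
        queues[category] = order
    return queues

def next_video(queues, category):
    """Pop the next not-yet-shown clip of the requested category."""
    return queues[category].pop()

queues = make_video_queues(rng=random.Random(42))
shown = [next_video(queues, "painful") for _ in range(35)]
assert sorted(shown) == list(range(35))  # each clip drawn exactly once
```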

Pain-localizer procedure

Sixteen noxious and sixteen innoxious 0.5 s electroshocks were applied, in a pseudo-randomized order (i.e. no more than two shocks of the same intensity were delivered consecutively), to the participant's right hand using an MRI-compatible electrical stimulation system. After a random interval ranging from 2 to 5 s, the participant was asked to evaluate by button press how painful the received electroshock was. The participant was instructed to use three buttons of an MRI-compatible button-box placed next to his/her left hand. Two buttons moved the slider left and right on the visual scale on the screen, and the third button confirmed the response. The pain intensity scale was a 10-point scale (1: not painful at all; 10: most intense imaginable pain), with the starting point set randomly for each trial to disentangle the number of button presses from the rating (Fig. 1b). A random interval ranging from 8 to 12 s separated the response from the next stimulation.
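
The point of the random starting position is that the motor output (number of presses) no longer predicts the rating. A minimal illustrative model of this design choice (names are ours):

```python
import random

def rating_presses(rating, rng):
    """Number of left/right presses needed to move the slider from its
    random starting point to the intended rating on the 10-point scale.
    Because the start is random, press counts are decoupled from ratings."""
    start = rng.randint(1, 10)
    return abs(rating - start)

rng = random.Random(0)
# The same subjective rating (7) requires a different number of button
# presses on different trials:
press_counts = {rating_presses(7, rng) for _ in range(50)}
assert len(press_counts) > 1
```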

Electrical stimulation and pain threshold

A 100 Hz train of electrical pulses (2 ms each) was applied for 0.5 s using an MRI-compatible electrical stimulator attached to the back of the right hand over the 4th interosseous muscle (stimulation area: 16 mm2) through two bipolar surface electrodes. Before the scanning we measured the pain threshold of both FC and the participant. We started from a 0.2 mA current that was then increased in 0.1 mA steps up to a maximum of 6.0 mA (Singer et al., 2004). Participants were instructed to evaluate how painful the stimulation was on a 10-point scale (the same as in the Pain-localizer). We then chose the current corresponding to a rating of 7 for the painful condition and of 2 for the painless condition (Singer et al., 2004). The current selected was 0.75 ± 0.14 mA (mean ± s.e.m.) for the painless and 2.12 ± 0.77 mA for the painful condition. The same procedure was applied to FC during the video-recording session.
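
The ascending calibration can be sketched as follows. This is an illustrative reconstruction; `rate_fn` is a hypothetical stand-in for the participant's verbal rating at each tested current:

```python
def calibrate_currents(rate_fn, start=0.2, step=0.1, max_current=6.0):
    """Increase the current from 0.2 mA in 0.1 mA steps (up to 6.0 mA),
    returning the first currents rated 2 (painless condition) and 7
    (painful condition) on the 10-point scale."""
    painless = painful = None
    current = start
    while current <= max_current:
        rating = rate_fn(current)
        if painless is None and rating >= 2:
            painless = current
        if rating >= 7:
            painful = current
            break
        current = round(current + step, 1)  # keep steps on a clean 0.1 grid
    return painless, painful

# A hypothetical participant whose rating grows with current:
painless, painful = calibrate_currents(lambda c: min(10, int(c * 3) + 1))
assert (painless, painful) == (0.4, 2.0)
```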

Data acquisition

A Philips Achieva 3.0 T MRI scanner was used for image acquisition. We used a T2*-weighted echo-planar sequence with 32 interleaved 3.5 mm thick axial slices and a 0.35 mm gap for functional imaging (TR = 1700 ms, TE = 27.6 ms, flip angle = 73°, FOV = 240 mm × 240 mm, 80 × 80 matrix of 3.5 mm isotropic voxels). At the end of the functional scanning, a T1-weighted anatomical image (1 × 1 × 1 mm) covering the whole brain was acquired.

Image pre-processing

fMRI data were pre-processed using SPM8 (www.fil.ion.ucl.ac.uk). All echo-planar images (EPIs) were slice-time corrected and realigned to the participant's mean EPI. T1 images were then co-registered to the mean EPI and segmented, and the gray matter was used to estimate the normalization parameters, which were then applied to all EPIs. Normalized (2 × 2 × 2 mm) EPIs were smoothed with an 8 mm isotropic FWHM Gaussian kernel.

General linear models

Two separate general linear models (GLM) were applied at the single-subject level, one for the four runs of the responsibility task and one for the Pain-localizer. Predictors were modeled using a standard boxcar function convolved with the hemodynamic response function (HRF). For each of the four runs of the responsibility task, we included the following predictors. First, because each run started with the participant indicating his/her readiness with a button press, one predictor collected this initial press; it was aligned with the presentation of the initial screen and lasted until the button press. Another predictor contained all the Flanker-epochs, from the appearance of the string until the end of the participant's button press, independently of performance. Four separate predictors, one for each experimental condition (NoPain, FullResp, SharedResp, NoResp), were then used for the Feedback-epoch. These Feedback-epoch predictors were aligned to the presentation of the feedback screen and lasted for a random duration (chosen by the presentation program) between 2.5 and 6 s. Four predictors finally captured the Video-epoch separately for each experimental condition. Each video-predictor lasted for 1.5 s, corresponding to the videos' actual duration. For the Pain-localizer session, we modeled one predictor, lasting 0.5 s, for all 32 electrical stimulations, with a parametric modulator for the subjective rating of pain intensity. A second predictor contained the rating period, from the onset of the rating screen until the end of the button presses. Six additional predictors of no interest, resulting from the realignment procedure, were entered for each of the five runs to account for translations and rotations of the head (none of the included participants had head motion parameters exceeding the acquired voxel size).
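
The construction of a single predictor (a boxcar convolved with a canonical HRF) can be sketched in pure Python. The double-gamma shape below is an illustrative approximation of the canonical HRF, not SPM's exact implementation:

```python
import math

def hrf(t, peak=6.0, under=16.0, ratio=1 / 6):
    """Double-gamma haemodynamic response function at time t (seconds)."""
    if t < 0:
        return 0.0
    def g(x, a):
        return x ** (a - 1) * math.exp(-x) / math.gamma(a)
    return g(t, peak) - ratio * g(t, under)

def boxcar(onsets, duration, n_scans, tr):
    """1 while any event is on (onset <= t < onset + duration), else 0."""
    return [1.0 if any(on <= i * tr < on + duration for on in onsets)
            else 0.0 for i in range(n_scans)]

def predictor(onsets, duration, n_scans, tr=1.7):
    """Boxcar convolved with the HRF, sampled at the TR used here."""
    box = boxcar(onsets, duration, n_scans, tr)
    kernel = [hrf(i * tr) for i in range(int(32 / tr))]  # ~32 s kernel
    return [sum(box[i - k] * kernel[k]
                for k in range(min(i + 1, len(kernel))))
            for i in range(n_scans)]

# A single 1.5 s Video-epoch event starting 10.2 s into the run:
p = predictor(onsets=[10.2], duration=1.5, n_scans=40)
assert p[0] == 0.0 and max(p) > 0 > min(p)  # delayed peak plus undershoot
```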
Data were then analyzed at the second level using a within-subject repeated-measures ANOVA with 8 conditions, computing pairwise comparisons between the conditions of interest using directed planned comparisons (so-called t contrasts). The ANOVA included the parameter estimates of the four conditions during the Video-epoch (VideoNoPain, VideoFullResp, VideoSharedResp, VideoNoResp) and the four conditions during the Feedback-epoch (FeedbackNoPain, FeedbackFullResp, FeedbackSharedResp, FeedbackNoResp). Results were thresholded at p < 0.001 (uncorrected) with a minimum cluster size of 10. All results presented also survived q < 0.05 (false discovery rate). We decided to control the false discovery rate at the voxel level rather than the family-wise error rate because fdr (a) is a valid form of controlling the multiple comparison problem (Benjamini, 2010; Genovese et al., 2002), (b) leads to thresholds that are close to what has been found to provide optimal reproducibility (Thirion et al., 2007), and (c) provides a better compromise between Type I and Type II error than the much more conservative family-wise error correction (Lieberman and Cunningham, 2009). In addition, we used voxel-wise fdr rather than topographical fdr (Chumbley and Friston, 2009) because we look for overlap at the voxel level between activations during the responsibility task and a pain localizer. Although topographical fdr has some advantages (Chumbley and Friston, 2009), voxel-wise interpretations are problematic in the topographical approach, as the significance of a cluster does not automatically imply the significance of individual voxels within that cluster.
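
Voxel-wise fdr control of this kind follows the Benjamini-Hochberg step-up rule. A minimal sketch (not SPM's implementation) of finding the adaptive p-value cut-off:

```python
def fdr_threshold(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: return the largest sorted
    p-value p_(i) satisfying p_(i) <= q * i / m.  Voxels with p-values
    at or below this cut-off are declared significant while keeping the
    expected false discovery rate at q."""
    m = len(p_values)
    threshold = 0.0
    for i, p in enumerate(sorted(p_values), start=1):
        if p <= q * i / m:
            threshold = p
    return threshold

p_vals = [0.001, 0.008, 0.039, 0.041, 0.042,
          0.060, 0.074, 0.205, 0.212, 0.216]
assert fdr_threshold(p_vals, q=0.05) == 0.008
```

Note how the cut-off adapts to the data: with many small p-values the threshold can be far more lenient than a Bonferroni-style family-wise correction, which is the compromise between Type I and Type II error invoked above.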

Results

Behavioral results

Average performance on the Flanker-task was 62.8% correct. We had at least 17 repetitions of each of the four conditions in all participants. After telling the participants the truth about the experimental design, they were asked in a feedback questionnaire: "Do you think the experimental setup was realistic enough to believe it (1 = strongly disagree, 7 = strongly agree)?" The average rating was 6.2 ± 0.7 (s.d.), and none of the 30 included participants even somewhat disagreed with the statement, demonstrating the credibility of our design. Two of the initial 33 participants had voiced doubts about the experiment before debriefing and were excluded from the analysis; they were the only participants that selected "somewhat disagree". Our experimental design was motivated by the assumption that participants would perceive varying degrees of responsibility based on who erred, with full responsibility > shared > no responsibility. However, we had not directly asked the participants whether that was true. Accordingly, we tried to contact all participants again to ask them: "Please rate how responsible you felt for the pain of the other in each condition, on a scale from 1 = not responsible at all to 9 = extremely responsible". Only eighteen of the participants could be contacted (the other students had since changed their email address and phone number). These 18 reported having perceived more responsibility for the pain in the FullResp condition (mean ± s.e., 7.5 ± 0.06), less in the SharedResp condition (3.5 ± 0.06) and close to none in the NoResp condition (1.3 ± 0.02), with all pair-wise differences significant (t-test, p < 0.001). Given that 8 months on average elapsed between the experiment and these reports, these numbers should be interpreted with care, but they indicate that the participants' recollection of the experiment is in line with our aim to manipulate perceived responsibility and compatible with the names given to the different conditions.

The effect of responsibility on brain activation

To identify the areas that respond more strongly to the display of FC's pain when the participant was mainly responsible for FC getting the electroshock than when FC had caused the electroshock herself, we used the contrast VideoFullResp–VideoNoResp. This revealed stronger activations when the witnessed pain was entirely due to the participant's mistake than when it was entirely due to author FC's mistake. Regions showing this difference (p < 0.001, k > 10, T > 3.13; also q < 0.05) included the middle temporal gyrus around the superior temporal sulcus, the inferior frontal gyrus, ACC, AI, amygdala, striatum and right superior frontal gyrus (see Inline Supplementary Table S1; and Fig. 1c, purple). Inline Supplementary Table S1 can be found online at http://dx.doi.org/10.1016/j.neuroimage.2015.03.034. The reverse contrast revealed no activations (at q < 0.05).
Note that the contrast (VideoFullResp–VideoNoPain)–(VideoNoResp–VideoNoPain), which would isolate the part of empathy triggered by the sight of pain, is mathematically identical to the contrast we report above (VideoFullResp–VideoNoResp) because VideoNoPain cancels out. A similar logic applies to the following contrasts. To explore whether sharing the responsibility for FC getting the electroshock would suffice to reduce the response of the participant while witnessing FC's display of pain, we contrasted VideoFullResp–VideoSharedResp. This revealed a similar circuit (see Inline Supplementary Table S2; Fig. 1d, and green in Fig. 1c and e) that overlapped (yellow in Fig. 1c) with VideoFullResp–VideoNoResp, and again included the ACC, AI, amygdala, striatum, higher-level visual areas of the temporal lobe and more cognitive regions including the temporal pole and the superior frontal gyrus (p < 0.001, k > 10, T > 3.42; also q < 0.05). Again, the reverse contrast revealed no activations (at q < 0.05). Because the contrast VideoFullResp–VideoSharedResp only includes trials in which the participant received the same negative feedback about his/her performance, differences in activation cannot be related to the participant's self-error-monitoring, which is known to activate regions similar to those of pain experience (Carter et al., 1998; Taylor et al., 2007; Ullsperger and von Cramon, 2003). Finally, to explore whether there was a further decrease in the response to seeing the painful videos if the participant had no rather than shared responsibility, we computed the VideoSharedResp–VideoNoResp contrast, but this revealed no significant differences at q < 0.05, nor did the reverse contrast, in line with the similarity between the contrasts of these respective conditions and VideoFullResp. Reducing the threshold to p < 0.005 (k > 10) revealed bilateral amygdala and hippocampus, left prefrontal cortex, right inferior temporal gyrus and left middle temporal gyrus.
The reverse contrast revealed no activations (at q < 0.05). Note that the contrasts (VideoFullResp–VideoNoPain)–(VideoNoResp–VideoNoPain) that would isolate the part of empathy triggered by the sight of pain are mathematically identical to the contrast we report above (VideoFullResp–VideoNoResp) because VideoNoPain is canceled out. A similar logic applies to the following contrasts. To explore whether sharing the responsibility for FC getting the electroshock would suffice to reduce the response of the participant while witnessing FC's display of pain, we contrasted VideoFullResp–VideoSharedResp. This revealed a similar circuit (See Inline Supplementary Table S2; Fig. 1d, and green in Fig. 1c and e) that overlapped (yellow in Fig. 1c) with VideoFullResp–VideoNoResp, and included again the ACC, AI, amygdala, striatum, higher level visual areas of the temporal lobe and more cognitive regions including temporal pole and the superior frontal gyrus (p < 0.001, k > 10, T > 3.42; also q < 0.05). Again, the reverse contrast revealed no activations (at q < 0.05). Because the contrast VideoFullResp–VideoSharedResp only includes trials in which the participant received the same negative feedback about his/her performance differences in activation cannot be related to the participant's self-error-monitoring, which is known to activate regions similar to those of pain experience (Carter et al., 1998; Taylor et al., 2007; Ullsperger and von Cramon, 2003). Finally, to explore if there was a further decrease in the response to seeing the painful videos if the participant had no rather than shared responsibility, we computed the VideoSharedResp–VideoNoResp contrast, but this revealed no significant differences at q < 0.05, nor did the reverse contrast, in line with the similarity between the contrasts of these respective conditions and VideoFullResp. 
Inline Supplementary Table S2 can be found online at http://dx.doi.org/10.1016/j.neuroimage.2015.03.034.

Consistent with the literature (Garcia-Larrea and Peyron, 2013; Melzack and Wall, 1965; Mouraux et al., 2011), our Pain-localizer revealed areas associated with the "pain-matrix", including the bilateral cingulate cortex, bilateral insula, sensorimotor strip (Brodmann Areas, BA, 2, 3b and 4a), striatum, premotor cortex (BA6 and inferior frontal gyrus), inferior parietal cortex, and cerebellum (all p < 0.001, k > 10, T > 3.42; also q < 0.05; see Inline Supplementary Table S3, and red in Fig. 1e and g). Inline Supplementary Table S3 can be found online at http://dx.doi.org/10.1016/j.neuroimage.2015.03.034.
Because an overlap between self- and other-emotions is considered a defining feature of the neural proxy of empathy in the literature (Gazzola and Keysers, 2009; Keysers et al., 2004; Singer et al., 2004; Wicker et al., 2003), to explore which of the regions with BOLD signals modulated by responsibility during the Video-epoch might be interpreted as a proxy of empathy, we inclusively masked the activated areas resulting from the contrast VideoFullResp–VideoSharedResp with those from the Pain-localizer. We found a set of regions that overlapped with pain experience (yellow in Fig. 1e) and a set that did not. The former (Table 2; Fig. 1e, yellow) includes the ACC, right AI, bilateral putamen, left amygdala, right and left inferior frontal gyrus and the SMA. These regions are thought to be of particular relevance for empathy (Fan et al., 2011; Lamm et al., 2011; Singer et al., 2004). A number of clusters, however, clearly fell outside the Pain-localizer, including high-level visual regions of the temporal lobe around the STS and the dorsolateral prefrontal cortex (see Inline Supplementary Table S4; and Fig. 1e, green).
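Inclusive masking amounts to a voxel-wise logical AND of two independently thresholded statistical maps. A minimal sketch with toy T-maps (the values are invented for illustration; only the critical T of 3.42 comes from the paper):

```python
import numpy as np

# Toy T-maps over 8 "voxels" (illustrative values, not from the study).
t_responsibility = np.array([4.1, 1.0, 3.9, 0.5, 5.0, 2.0, 3.6, 0.1])
t_pain_localizer = np.array([3.8, 4.2, 0.3, 0.2, 4.9, 4.4, 3.7, 0.0])
T_CRIT = 3.42  # critical T corresponding to p < 0.001 in the paper

# Threshold each map independently, then keep voxels significant in BOTH:
# this is the inclusive mask.
overlap = (t_responsibility > T_CRIT) & (t_pain_localizer > T_CRIT)
print(np.flatnonzero(overlap))  # indices of voxels surviving both maps
```

Voxels passing only one of the two maps (like voxel 2 above) end up in the "outside the localizer" set.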
Table 2

Activation table of the overlap between VideoFullResp–VideoSharedResp and the Pain-localizer. Both the responsibility effect and the Pain-localizer were individually thresholded at p < 0.001, k > 10, and survived q < 0.05. The T-values shown in this table are from VideoFullResp–VideoSharedResp.

# | Cluster size | MNI coordinates (mm) X, Y, Z | T-value | Hem | Anatomical description
1 | 383 | 4, 36, 24 | 4.73 | R | Anterior cingulate cortex
  |     | −8, 28, 20 | 4.5 | L | Anterior cingulate cortex
2 | 367 | 18, 8, −6 | 5.04 | R | Putamen
  |     | 38, 8, −2 | 4.5 | R | Insula lobe
  |     | 30, 16, −12 | 3.99 | R | Insula lobe
  |     | 28, 10, −14 | 3.98 | R | Olfactory cortex
  |     | 20, −4, −14 | 3.73 | R | Hippocampus
3 | 142 | −24, −4, −14 | 4.37 | L | Amygdala
  |     | −18, 6, −8 | 3.74 | L | Putamen
4 | 76 | 54, 20, −4 | 3.89 | R | Inferior frontal gyrus (p. Orbitalis)
  |    | 58, 20, 2 | 3.61 | R | Inferior frontal gyrus (p. Triangularis)
5 | 25 | −30, 46, 20 | 3.74 | L | Middle frontal gyrus
6 | 20 | −40, 2, −20 | 3.83 | L | Temporal pole
7 | 16 | 30, 52, 22 | 3.53 | R | Middle frontal gyrus
8 | 16 | −50, 20, −6 | 3.41 | L | Inferior frontal gyrus (p. Orbitalis)
9 | 12 | 14, 24, 58 | 3.46 | R | SMA
10 | 12 | 32, 42, 22 | 3.47 | R | Middle frontal gyrus
11 | 10 | −24, −4, 12 | 4.09 | L | Putamen
Inline Supplementary Table S4 can be found online at http://dx.doi.org/10.1016/j.neuroimage.2015.03.034.

The FullResp and SharedResp trials not only differ in the degree of responsibility perceived by the participant, but also in the performance of the confederate FC, which is correct in FullResp but incorrect in SharedResp.
Because it has been shown that the errors and successes of others can vicariously activate regions encoding errors and successes in the self (Heldmann et al., 2008; Mathalon et al., 2003; Monfardini et al., 2013; Shane et al., 2008), the greater activity in the areas resulting from the VideoFullResp–VideoSharedResp contrast could reflect a spill-over from the vicarious processing of the success of another, triggered in the Feedback-epoch. Because in our design the Feedback-epoch informing the participants about their errors was separated in time from the Video-epoch triggering the processing of the facial expressions, we would expect the activations associated with error monitoring for the contrast FullResp–SharedResp to be greater during the Feedback-epoch than during the Video-epoch. If VideoFullResp–VideoSharedResp instead reflects a modulation of facial-expression processing by responsibility, we would expect the difference to be larger during the Video-epoch, when the facial expressions are shown, than during the Feedback-epoch, when the errors are revealed. An interaction analysis, (VideoFullResp–VideoSharedResp) > (FeedbackFullResp–FeedbackSharedResp), confirmed that the AI and ACC showed a larger modulation during the Video-epoch than during the Feedback-epoch (Fig. 1f; and Inline Supplementary Table S5), and that this modulation overlaps with the Pain-localizer. The inverse contrast did not show any significant activations (at q < 0.05). This result suggests that a spill-over from vicarious error processing is insufficient to explain the difference during the Video-epoch. Note that this interaction analysis pits brain activity time-locked to the presentation of the feedback screen against that time-locked to the presentation of the video. In theory, participants could have processed errors at other points in time as well. Such 'free-floating' error-related activity would either go into the error term (and thus work against the interaction analysis) or into the responses to both the feedback and video stimuli (as it is not time-locked to either), and hence would not show up in the interaction but as a main effect. Calculating the contrast FeedbackFullResp–FeedbackSharedResp, within the whole brain or within the Pain-localizer, failed to yield any significant activation (at q < 0.05), reinforcing that a spill-over from the Feedback phase is unlikely.
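The interaction logic can be written as a contrast-weight computation. A minimal sketch, assuming a hypothetical beta ordering [VideoFull, VideoShared, FeedbackFull, FeedbackShared] (the ordering and the toy betas are illustrative, not from the paper):

```python
import numpy as np

# Responsibility effect within each epoch (hypothetical beta ordering:
# [VideoFull, VideoShared, FeedbackFull, FeedbackShared]).
video_effect    = np.array([1, -1, 0, 0])   # VideoFullResp - VideoSharedResp
feedback_effect = np.array([0, 0, 1, -1])   # FeedbackFullResp - FeedbackSharedResp

# Interaction: responsibility effect larger during Video than Feedback.
interaction = video_effect - feedback_effect  # -> [1, -1, -1, 1]

# A pure "spill-over" pattern (the same Full-Shared difference in both
# epochs) produces no interaction: the contrast evaluates to zero.
spillover_betas = np.array([2.0, 1.0, 2.0, 1.0])
assert interaction @ spillover_betas == 0.0
```

This is why an epoch-unspecific error signal cannot, by itself, drive the interaction: it contributes equally to both epochs and cancels.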
Inline Supplementary Table S5 can be found online at http://dx.doi.org/10.1016/j.neuroimage.2015.03.034.

The majority of empathy-for-pain studies so far contrasted stimuli depicting another person in pain against stimuli representing the same person going through a non-painful experience, independently of responsibility. To explore whether we could reproduce the existing results with our data-set, we computed a contrast between all videos illustrating pain and those illustrating no pain (i.e.
VideoFullResp + VideoSharedResp + VideoNoResp–3 × VideoNoPain; blue + yellow in Fig. 1g) and overlapped (yellow and Table S6) this contrast with the Pain-localizer (red + yellow). Results evidenced a network of brain regions (yellow in Fig. 1g, and Inline Supplementary Table S6) recruited in both the observation of the displays of pain of others and the subjective experience of pain (independently of the responsibility the observer has for causing the other person's pain), consistent with that found in the literature, including the AI, ACC and amygdala (Lamm et al., 2011). We finally extracted the mean signal time course in clusters common to VideoFullResp–VideoSharedResp and the Pain-localizer (see Table 2). To explore how the responsibility-dependent brain activation reported above develops in time, compares to baseline and to the level of activation in first person pain experience, we ran two additional analyses. For the first analysis, we extracted, for each of the 11 clusters listed in Table 2 (see also Fig. 1e), the mean signal time course from the normalized functional images of each participant. Using Marsbar (http://marsbar.sourceforge.net/, Brett et al., 2002), we then calculated parameter estimates for all four video conditions (NoPain, NoResp, SharedResp and FullResp) against baseline. Fig. 1h shows these parameter estimates averaged across participants and illustrates that the activation was always numerically lowest in the condition in which no pain was witnessed, and clearly above baseline for the VideoFullResp condition in most of the ROIs, except in ROI8 located in the temporal pole. We additionally calculated parameter estimates for the pain localizer, and plotted the sum of the parameter common to all shocks and the parametric modulator, to represent the activation level to the noxious shocks. 
This revealed that in most of the ROIs, except ROI8 and ROI11, the first-person experience of a noxious shock triggered activity roughly commensurate with that triggered by witnessing another person experience a shock under full responsibility (Fig. 1h). We did not perform a statistical analysis of these parameter estimates because the clusters were selected based on responses during the Video-epoch and the Pain-localizer, biasing statistical comparisons. For the second analysis, we extracted the time course (peri-stimulus time histogram, PSTH; rfxplot toolbox for SPM, http://rfxplot.sourceforge.net/index.html, Glascher, 2009) for each video condition and the Localizer (for high- and low-intensity stimuli separately), to explore whether the BOLD signal in response to painful facial expressions has a time course similar to that during the first-person experience of pain (see Inline Supplementary Fig. S1). The analysis illustrates the similarity of the time courses of the video and localizer responses.
Fig. S1. Peri-stimulus time histograms for each Video condition and the Localizer. Evoked responses are plotted from −2 s before to +10 s after stimulus onset, for each of the 11 clusters listed in Table 2, for all four Video conditions (VideoFullResp, VideoSharedResp and VideoNoResp in warm colors, and VideoNoPain in blue) and the Localizer. For this analysis, three predictors were used to model the Localizer data: one grouping the innoxious stimulations (light green), one the noxious stimulations (dark green), and one the motor response. Thick lines represent the average evoked response; long dashed lines represent the error.
Evoked responses were computed using the PSTH function of the rfxplot toolbox for SPM (http://rfxplot.sourceforge.net/index.html, Glascher, 2009), which implements a modified and improved Finite Impulse Response approach that yields a mean event response even when events overlap in time. The data are adjusted to remove the effects of the other regressors present in the first-level model. Inline Supplementary Table S6 and Fig. S1 can be found online at http://dx.doi.org/10.1016/j.neuroimage.2015.03.034.
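The first ROI analysis (cluster-mean signal, then per-condition parameter estimates, as done with Marsbar) can be sketched on synthetic data. The array shapes, random design and least-squares fit below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic BOLD data: 200 scans x 50 voxels, with a 12-voxel ROI mask
# standing in for one of the 11 clusters of Table 2.
bold = rng.normal(size=(200, 50))
roi_mask = np.zeros(50, dtype=bool)
roi_mask[:12] = True

# Step 1: average the signal over the ROI's voxels at each scan.
roi_timecourse = bold[:, roi_mask].mean(axis=1)

# Step 2: regress the ROI time course on one predictor per video
# condition (NoPain, NoResp, SharedResp, FullResp) plus an intercept;
# the fitted betas are the per-condition parameter estimates vs. baseline.
predictors = rng.normal(size=(200, 4))
X = np.column_stack([predictors, np.ones(200)])
betas, *_ = np.linalg.lstsq(X, roi_timecourse, rcond=None)
print(betas[:4])  # the four condition estimates
```

In the real analysis the predictors would be condition onsets convolved with a hemodynamic response function rather than random regressors.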

Discussion

In contrast to most studies investigating neural responses to the pain of others, in which the participant witnesses a pain he or she did not cause (Lamm et al., 2011), we investigated how the brain reacts to facial expressions of the pain of others as a function of the participant's responsibility. We found the brain response to witnessing facial expressions of pain to be increased when the observer had full responsibility for the pain inflicted on the other. Shared or no responsibility led to relatively reduced activations. The brain regions modulated by responsibility include the AI, ACC, putamen and amygdala. Part of the BOLD activity of these regions additionally correlated with stimulus painfulness ratings during the participant's first-hand experience of pain (Pain-localizer). Our results confirm the existing literature on empathy by showing that the AI and ACC, the two most consistent regions across studies of empathy for pain (Lamm et al., 2011), are indeed activated both while experiencing pain and while witnessing the pain of others (VideoFullResp + VideoSharedResp + VideoNoResp–3 × VideoNoPain). However, our findings extend the literature by showing how activations to the pain of others are modulated by the observer's responsibility. That having no responsibility for the pain reduces vicarious activations dovetails with a recent study (Koban et al., 2013) showing that the AI and dorsolateral prefrontal cortex differentiate between noxious and innoxious stimulation to another individual when the participant caused the pain by erring, but not when the other individual had caused his own pain. Because in Koban et al.'s study a task was performed by the witness in the full- but not in the no-responsibility condition, it was not possible to determine an effect of sharing responsibility.
In addition, the modulation ascribed to agency could have been due to differences in attention, because participants could disengage attention during the no-responsibility trials in which they were not required to perform a task. By having our participants always do the task together with the pain-taker, and by including a pain localizer, we were able to extend those results and (a) demonstrate a strong effect of sharing responsibility and (b) make the attention explanation unlikely. In addition, by showing that some voxels modulated by responsibility (AI, ACC, putamen and amygdala) fall within our participants' pain localizer while others (STS/MTG and dorsolateral prefrontal cortex) do not, we help decompose the neural effects of responsibility into some that are more likely to relate to negative affective reactions common to pain observation and experience (AI, ACC, putamen and amygdala), and some that are more likely to reflect other, less affective processes (STS/MTG, dlPFC). With regard to those vicarious activations falling within the Pain-localizer, the dominant approach in the literature has been to interpret them as evidence for empathy (Lamm et al., 2011). The rationale is that these voxels show BOLD activations predicting pain unpleasantness during shock experience and are activated while witnessing the pain of another individual; their activation in the latter case is therefore taken to reflect a negative affective response to pain felt on behalf of the person seen to suffer. If interpreted in this way, our experiment suggests that the observers felt more vicarious pain when they witnessed a pain they had caused. However, it is well known that other negative affective responses also recruit similar voxels in the brain.
In particular, the experience and observation of disgust (Wicker et al., 2003) and the experience of guilt (Jankowski and Takahashi, 2014) also activate the AI, ACC and amygdala, and so do other salient stimuli (Legrain et al., 2011). Accordingly, some argue that activations in the 'pain-matrix' should not so much be viewed as unequivocal evidence for pain, but rather as evidence that an event triggered a complex reaction involving saliency detection, deployment of attention and prioritization of protective actions, a reaction that also occurs following a noxious stimulus (Legrain et al., 2011). This complex reaction is of course closely related to what it feels like to be in pain, but it is not specifically linked to noxious stimuli. Hence, the boosts in activity associated with full responsibility that we observe in voxels of the observer's brain involved in first- and third-person pain perception may reflect a mix of attentional and motor-readiness states that are common to different negative emotions, including vicarious pain and guilt. This increase in activity may therefore be conservatively interpreted as reflecting a boost in the attentional and affective reaction to the pain of others, rather than as evidence for a specific boost in vicarious pain. Further studies will be needed to explore whether the exact function to be associated with the vicarious activation of these voxels can be further decomposed, for instance by mapping the activation pattern of these voxels with a larger spectrum of emotions (pain, guilt, salience, etc.) and then exploring, using multi-voxel pattern classification, which vicarious patterns can be specifically associated with one negative form of affect and which reflect more generic aspects (Wagner et al., 1998). A further limitation to be kept in mind is that in our design the responsibility of the witness is inversely proportional to that of the victim: cases of full responsibility for the observer were cases of full innocence of the victim.
It might thus be that the stronger brain activity in the Full-Responsibility situation was due to the observer's perceived responsibility, or to the fact that the observer perceived that an innocent victim was experiencing pain. We cannot dissociate these alternatives. Experiments with two confederates (a passive victim and a second player) would help overcome this limitation. What functional implications might such a responsibility modulation of pain-matrix activity during pain observation have? Neural activity in the pain-matrix following noxious stimuli on our own body arguably serves to motivate us to stay out of harm's way in the future. By extension, activation in this matrix while witnessing the pain of others could motivate us to keep others out of harm's way. This vicarious activation, as a learning signal, should then be maximal if the observer caused the pain, because this is the condition with the maximal causal relationship to the observer's own behavior. That pain-matrix activation reflects responsibility could thus serve as a mechanism that optimizes learning in a social context. Merely sharing responsibility was sufficient to decrease pain-matrix activity, and being solely responsible triggered the largest brain activity. This also indirectly sheds light on the bystander effect (Darley and Latane, 1968), and suggests that, if we want people to process the pain of others, it might be essential to maximize brain activations by actively emphasizing each potential viewer's personal responsibility. Supplementary data related to this article can be found online at http://dx.doi.org/10.1016/j.neuroimage.2015.03.034.
Table S1

VideoFullResp–VideoNoResp (p < 0.001, k > 10, survived q < 0.05).

Cluster size (voxels) | Peak MNI coordinates (mm) X, Y, Z | Peak T-value | Hem | Peak anatomical location
3573 | 0, 44, 12 | 5.08 | L | Anterior cingulate cortex
     | 18, 56, 30 | 4.86 | R | Superior frontal gyrus
     | −12, 42, 46 | 4.58 | L | Superior frontal gyrus
     | 8, 36, 54 | 4.51 | R | Superior medial gyrus
     | −20, 52, 26 | 4.48 | L | Middle frontal gyrus
2011 | −26, −6, −14 | 6.30 | L | Amygdala
     | −52, 2, −28 | 4.57 | L | Middle temporal gyrus
     | −44, 30, −12 | 4.28 | L | Inferior frontal gyrus (p. Orbitalis)
     | −46, 18, −14 | 4.23 | L | Temporal pole
     | −24, 8, −8 | 4.15 | L | Putamen
1565 | 22, −4, −14 | 5.55 | R | Amygdala
     | 26, 10, −18 | 5.34 | R | Olfactory cortex
     | 22, 8, −4 | 4.76 | R | Putamen
     | 50, 32, 0 | 4.51 | R | Inferior frontal gyrus (p. Triangularis)
     | 48, 0, −32 | 4.32 | R | Inferior temporal gyrus
     | 42, 20, −28 | 4.16 | R | Temporal pole
212 | 4, −12, 32 | 3.80 | R | Middle cingulate cortex
    | −2, −10, 34 | 3.76 | L | Middle cingulate cortex
122 | 16, 10, 16 | 4.04 | R | Caudate nucleus
    | 12, −12, 12 | 3.69 | R | Thalamus
106 | −58, −36, 26 | 3.86 | L | SupraMarginal gyrus
38 | 30, −20, 4 | 3.86 | R | Putamen
   | 34, −18, 4 | 3.64 | R | Insula lobe
Table S2

VideoFullResp–VideoSharedResp (p < 0.001, k > 10, survived q < 0.05).

Cluster size (voxels) | Peak MNI coordinates (mm) X, Y, Z | Peak T-value | Hem | Peak anatomical location
2831 | 18, 56, 34 | 5.55 | R | Superior frontal gyrus
     | 4, 36, 24 | 4.73 | R | Anterior cingulate cortex
     | 10, 58, 16 | 4.62 | R | Superior medial gyrus
     | −8, 28, 20 | 4.5 | L | Anterior cingulate cortex
     | −8, 52, 24 | 4.44 | L | Superior medial gyrus
     | −24, 46, 18 | 4.43 | L | Middle frontal gyrus
511 | 18, 8, −6 | 5.04 | R | Putamen
    | 28, 10, −14 | 3.98 | R | Olfactory cortex
    | 20, −4, −14 | 3.73 | R | Hippocampus
    | 28, 10, −24 | 3.56 | R | ParaHippocampal gyrus
241 | −24, −8, −12 | 4.39 | L | Amygdala
    | −42, 2, −20 | 3.86 | L | Temporal pole
    | −18, 6, −8 | 3.75 | L | Putamen
193 | 48, 40, −4 | 3.96 | R | Inferior frontal gyrus (p. Orbitalis)
    | 50, 32, 2 | 3.76 | R | Inferior frontal gyrus (p. Triangularis)
    | 46, 40, −8 | 3.7 | R | Inferior frontal gyrus (p. Orbitalis)
156 | 10, 36, 54 | 4.3 | R | Superior medial gyrus
    | 14, 24, 58 | 3.46 | R | SMA
114 | 38, 8, 0 | 4.37 | R | Insula lobe
    | 38, 10, −6 | 4.06 | R | Insula lobe
    | 54, 20, −4 | 3.89 | R | Inferior frontal gyrus (p. Orbitalis)
    | 58, 20, 2 | 3.61 | R | Inferior frontal gyrus (p. Triangularis)
83 | 58, 2, −16 | 3.96 | R | Medial temporal pole
   | 54, −10, −12 | 3.55 | R | Superior temporal gyrus
69 | −40, 42, −12 | 3.66 | L | Inferior frontal gyrus (p. Orbitalis)
49 | −50, 24, −4 | 3.52 | L | Inferior frontal gyrus (p. Orbitalis)
   | −50, 16, −10 | 3.29 | L | Temporal pole
42 | 62, −46, 26 | 3.53 | R | SupraMarginal gyrus
35 | 46, 22, −28 | 4.08 | R | Temporal pole
30 | −46, 12, −36 | 3.94 | L | Medial temporal pole
16 | −26, −4, 12 | 4.28 | L | Putamen
Table S3

Pain-localizer. The first part of the table lists the clusters and local maxima (peaks) of the Pain-localizer. Because the first cluster encompasses a large number of brain regions, we detail the cytoarchitectonic regions it encompasses, using the Anatomy Toolbox for SPM, in the second half of the table.

Cluster size (Voxels) | Peak MNI coordinates (mm) X, Y, Z | Peak T-value | Hem | Peak anatomical location
56,552* | 40, −2, −2 | 9.54 | R | Right insula lobe
        | 4, −6, 42 | 9.54 | R | Middle cingulate cortex
        | −38, −18, 14 | 9.25 | L | Left insula lobe
        | 8, −6, 8 | 8.6 | R | Thalamus
        | −4, −4, 36 | 8.59 | L | Middle cingulate cortex
        | −18, 6, −2 | 8.57 | L | Pallidum
        | −6, −6, 52 | 8.48 | L | SMA
218 | −34, 36, 31 | 4.27 | L | Middle frontal gyrus
161 | 38, 52, 18 | 4.93 | R | Middle frontal gyrus
*Details of Cluster 1



Number of Voxels | % of Cluster | % of Cluster activated | Hem | Cytoarchitectonic area
3142.6 | 5.8 | 71.2 | L | Area 6
2935.5 | 5.2 | 64.5 | R | Area 6
996.8 | 1.8 | 42.2 | R | Area 17
941.8 | 1.7 | 49.2 | R | Lobule VI (Hem)
861.9 | 1.5 | 68.1 | L | Area 4a
736.6 | 1.3 | 36.6 | L | Lobule VI (Hem)
701.3 | 1.2 | 84.9 | R | Lobule V
679.3 | 1.2 | 73.3 | L | Area 2
616.7 | 1.1 | 94.5 | L | Th-prefrontal
583.4 | 1 | 76.7 | L | Lobule V
568.1 | 1 | 48.1 | R | Area 4a
558 | 1 | 91.6 | L | OP 1
556.2 | 1 | 30.6 | L | SPL (7A)
538 | 1 | 23.6 | L | Area 17
532.5 | 0.9 | 91.8 | R | Th-prefrontal
526.7 | 0.9 | 53.6 | R | Area 2
489.9 | 0.9 | 93.3 | R | OP 1
480.3 | 0.8 | 82.3 | L | Area 4p
467 | 0.8 | 71.2 | L | Area 3b
458.4 | 0.8 | 48.7 | R | Area 3b
443.1 | 0.8 | 46.5 | L | Area 1
415 | 0.7 | 75.7 | L | Area 3a
386.2 | 0.7 | 23.4 | R | Area 18
360.5 | 0.6 | 39.2 | R | IPC (PF)
359.8 | 0.6 | 71.7 | L | Lobules I–IV (Hem)
355 | 0.6 | 67.4 | R | Lobules I–IV (Hem)
351.1 | 0.6 | 68.7 | R | OP 4
345.7 | 0.6 | 55.8 | L | OP 4
343.1 | 0.6 | 57.7 | R | Th-temporal
338.3 | 0.6 | 85.1 | L | IPC (PFcm)
334.6 | 0.6 | 36.5 | R | Area 44
330.7 | 0.6 | 88.9 | L | SPL (5M)
318.9 | 0.6 | 91.8 | R | Th-parietal
314.8 | 0.6 | 59 | L | SPL (5L)
313.3 | 0.6 | 54.8 | L | Th-temporal
302.7 | 0.5 | 67.4 | R | SPL (5M)
276.4 | 0.5 | 56.7 | R | Area 4p
269.8 | 0.5 | 82.1 | L | Th-parietal
247 | 0.4 | 75.3 | R | IPC (PFcm)
246.1 | 0.4 | 48.6 | R | Area 3a
244.8 | 0.4 | 14.5 | L | Area 18
235.9 | 0.4 | 23.4 | L | IPC (PF)
233.2 | 0.4 | 82.5 | L | OP 3
222.3 | 0.4 | 84.2 | R | OP 3
218.3 | 0.4 | 75 | L | IPC (PFop)
208.5 | 0.4 | 74.9 | R | IPC (PFop)
189.5 | 0.3 | 76.5 | R | TE 1.0
186.4 | 0.3 | 28.2 | R | SPL (5L)
183.8 | 0.3 | 21.4 | R | Area 1
183.2 | 0.3 | 44.1 | L | IPC (PFt)
182.3 | 0.3 | 89.4 | L | TE 1.0
179.6 | 0.3 | 84.5 | R | SPL (5Ci)
178.6 | 0.3 | 97.4 | L | TE 1.1
176.9 | 0.3 | 32.8 | R | Hipp (SUB)
173.9 | 0.3 | 33 | L | Hipp (SUB)
170.8 | 0.3 | 100 | L | Insula (Ig2)
163.3 | 0.3 | 36.4 | R | IPC (PFt)
149.1 | 0.3 | 99.7 | R | Insula (Ig2)
142.8 | 0.3 | 90.8 | R | OP 2
135.7 | 0.2 | 76.5 | R | TE 1.1
127.8 | 0.2 | 98.8 | L | SPL (5Ci)
125.4 | 0.2 | 99.6 | L | Th-premotor
123 | 0.2 | 66.6 | R | Amyg (SF)
117.7 | 0.2 | 100 | L | OP 2
109 | 0.2 | 15.7 | R | hOC3v (V3v)
105.8 | 0.2 | 87.8 | L | Insula (Id1)
103.7 | 0.2 | 96.3 | L | TE 1.2
101.9 | 0.2 | 71.7 | R | Th-premotor
101.5 | 0.2 | 99.4 | R | TE 1.2
94.9 | 0.2 | 22.9 | R | SPL (7PC)
90.3 | 0.2 | 19.7 | L | TE 3
85.9 | 0.2 | 64.6 | R | Insula (Id1)
83.8 | 0.1 | 95.2 | R | Th-somatosensory
80.8 | 0.1 | 9.5 | R | Hipp (CA)
69.4 | 0.1 | 28.9 | R | Lobule VI (Vermis)
67.4 | 0.1 | 100 | L | Insula (Ig1)
65.8 | 0.1 | 8.2 | L | Hipp (CA)
65.2 | 0.1 | 30.2 | L | Lobule VI (Vermis)
59.7 | 0.1 | 4.9 | L | Area 44
59 | 0.1 | 98.1 | R | Insula (Ig1)
58.2 | 0.1 | 29.3 | L | Amyg (SF)
57.8 | 0.1 | 29.8 | L | SPL (7PC)
52.1 | 0.1 | 97.3 | L | Th-motor
47.3 | 0.1 | 4.2 | R | Area 45
47.1 | 0.1 | 8.4 | L | SPL (7P)
44 | 0.1 | 13.7 | L | Amyg (LB)
Table S4

VideoFullResp–VideoSharedResp not overlapping with Pain-localizer (p < 0.001, k > 10, survived q < 0.05). Both the responsibility effect and Pain-localizer were individually thresholded at p < 0.001, k > 10, and survived q < 0.05. Only voxels significant in VideoFullResp–VideoSharedResp but not in the pain localizer are listed.

Cluster size (voxels) | Peak MNI coordinates x, y, z (mm) | Peak T-value | Hem | Peak anatomical location
193 | 48, 40, −4 | 3.96 | R | Inferior frontal gyrus (p. Orbitalis)
— | 48, 36, −2 | 3.92 | R | Inferior frontal gyrus (p. Triangularis)
— | 50, 32, 2 | 3.76 | R | Inferior frontal gyrus (p. Triangularis)
144 | 10, 36, 54 | 4.3 | R | Superior medial gyrus
— | 14, 24, 60 | 3.22 | R | Superior frontal gyrus
84 | 58, 2, −16 | 3.96 | R | Medial temporal pole
— | 54, −10, −12 | 3.55 | R | Superior temporal gyrus
— | 58, −6, −14 | 3.38 | R | Middle temporal gyrus
70 | −40, 42, −12 | 3.66 | L | Inferior frontal gyrus (p. Orbitalis)
43 | 62, −46, 26 | 3.53 | R | SupraMarginal gyrus
— | 62, −52, 26 | 3.47 | R | Angular gyrus
36 | 58, 22, 8 | 3.4 | R | Inferior frontal gyrus (p. Triangularis)
35 | 46, 22, −28 | 4.08 | R | Temporal pole
— | 50, 20, −28 | 3.86 | R | Medial temporal pole
30 | −46, 12, −36 | 3.94 | L | Medial temporal pole
24 | 42, 0, −36 | 4.54 | R | Inferior temporal gyrus
— | 42, 8, −40 | 3.59 | R | Medial temporal pole
13 | 14, 12, 16 | 3.69 | R | Caudate nucleus
13 | 48, −20, −6 | 3.52 | R | Superior temporal gyrus
11 | −46, −2, −32 | 3.66 | L | Inferior temporal gyrus
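The exclusive masking described in the caption of Table S4 (keep voxels significant in the responsibility contrast but not in the Pain-localizer) reduces to a boolean operation on two separately thresholded statistical maps. A minimal sketch with toy numbers — the arrays and the critical T-value below are illustrative stand-ins, not the study's data:

```python
import numpy as np

# Toy voxel-wise T-values (illustrative only, not the study's data).
t_contrast = np.array([4.2, 3.5, 1.0, 5.0])    # VideoFullResp - VideoSharedResp
t_localizer = np.array([1.2, 4.0, 0.5, 6.1])   # Pain-localizer

T_CRIT = 3.17  # stand-in height threshold standing in for p < 0.001

# Threshold each map separately, then keep voxels that survive in the
# contrast but NOT in the localizer (the "exclusive mask").
contrast_sig = t_contrast > T_CRIT       # [ True,  True, False,  True]
localizer_sig = t_localizer > T_CRIT     # [False,  True, False,  True]
outside_localizer = contrast_sig & ~localizer_sig

print(outside_localizer)  # [ True False False False]
```

In the actual analysis a cluster-extent criterion (k > 10 contiguous voxels) would additionally be applied to each thresholded map before masking.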
Table S5

Activation table of (VideoFullResp–VideoSharedResp)–(FeedbackFullResp–FeedbackSharedResp) (p < 0.001, k > 10, survived q < 0.05).

Cluster size (voxels) | Peak MNI coordinates x, y, z (mm) | Peak T-value | Hem | Peak anatomical location
823 | −22, 46, 16 | 4.2 | L | Middle frontal gyrus
— | −14, 32, 56 | 4.19 | L | Superior frontal gyrus
— | 2, 44, 16 | 3.78 | R | Anterior cingulate cortex
— | 0, 44, 22 | 3.72 | R | Superior medial gyrus
737 | 14, 54, 36 | 5.17 | R | Superior frontal gyrus
— | 10, 36, 54 | 4.58 | R | Superior medial gyrus
— | 26, 60, 22 | 3.99 | R | Middle frontal gyrus
251 | 18, 8, −6 | 4.77 | R | Putamen
221 | −40, 42, −12 | 4.52 | L | Inferior frontal gyrus (p. Orbitalis)
— | −32, 54, −4 | 3.32 | L | Middle orbital gyrus
189 | −26, −6, −14 | 4.39 | L | Amygdala
— | −18, 6, −8 | 3.67 | L | Putamen
— | −20, 2, −6 | 3.46 | L | Pallidum
169 | 48, 42, 0 | 3.86 | R | Inferior frontal gyrus (p. Triangularis)
57 | 58, 2, −16 | 3.84 | R | Medial temporal pole
— | 54, −10, −12 | 3.36 | R | Superior temporal gyrus
— | 54, 8, −16 | 3.31 | R | Temporal pole
44 | 16, 16, 16 | 4.55 | R | Caudate nucleus
37 | −42, 4, −20 | 3.85 | L | Temporal pole
30 | 8, 18, 22 | 3.64 | R | Anterior cingulate cortex
21 | 38, 8, −2 | 3.6 | R | Insula
17 | 46, 22, −28 | 3.96 | R | Temporal pole
— | 50, 20, −28 | 3.79 | R | Medial temporal pole
12 | −16, 16, 12 | 3.61 | L | Caudate nucleus
11 | −12, −94, −2 | 3.35 | L | Calcarine gyrus
10 | −26, −4, 12 | 4.22 | L | Putamen
10 | 42, 0, −36 | 4.11 | R | Inferior temporal gyrus
— | 44, 4, −34 | 3.43 | R | Medial temporal pole
Table S6

(VideoFullResp+VideoSharedResp+VideoNoResp)–3VideoNoPain overlapping with the Pain-localizer (p < 0.001, k > 10, survived q < 0.05). This table approximates the traditional definition of the neural basis of empathy: it identifies voxels in which activation during the observation of painful shocks (independently of responsibility) exceeds that during the observation of non-painful shocks, within regions involved in the experience of painfulness (Pain-localizer). Both contrasts were thresholded at p < 0.001 with k > 10 and survived q < 0.05, and only voxels common to both are listed. Listed T-values are from the contrast (VideoFullResp+VideoSharedResp+VideoNoResp)–3VideoNoPain.

Cluster size (voxels) | Peak MNI coordinates x, y, z (mm) | Peak T-value | Hem | Peak anatomical location
4592 | −10, −70, 2 | 6.34 | L | Lingual gyrus
— | 6, −66, 6 | 5.96 | R | Lingual gyrus
— | −18, −62, −14 | 5.87 | L | Cerebellum
2897 | −48, −38, 22 | 5.44 | L | Superior temporal gyrus
— | −50, 16, −12 | 5.02 | L | Temporal pole
— | −52, 20, −6 | 4.92 | L | Inferior frontal gyrus (p. Orbitalis)
— | −16, −24, 4 | 4.9 | L | Thalamus
— | −16, 8, 12 | 4.79 | L | Caudate nucleus
— | −32, −12, 16 | 4.78 | L | Insula lobe
2634 | −4, 34, 28 | 5.53 | L | Anterior cingulate cortex
— | −4, −16, 40 | 4.89 | L | Middle cingulate cortex
— | 14, 36, 26 | 4.6 | R | Anterior cingulate cortex
— | −20, −20, 62 | 4.3 | L | Precentral gyrus
815 | 58, −32, 20 | 5.73 | R | Superior temporal gyrus
— | 36, −32, 16 | 3.51 | R | Heschl's gyrus
— | 56, −18, 4 | 3.43 | R | Superior temporal gyrus
516 | 32, 6, 10 | 4.85 | R | Putamen
— | 20, 2, 6 | 3.94 | R | Pallidum
— | 34, −4, 10 | 3.71 | R | Insula lobe
184 | 50, 22, −10 | 4.1 | R | Inferior frontal gyrus (p. Orbitalis)
— | 52, 14, −14 | 4.06 | R | Temporal pole
173 | 48, −66, 0 | 5.76 | R | Middle temporal gyrus
163 | 26, −32, 60 | 3.57 | R | Postcentral gyrus
— | 20, −28, 68 | 3.46 | R | Precentral gyrus
128 | 8, 26, 62 | 4.35 | R | Superior medial gyrus
— | −2, 18, 62 | 4.32 | L | SMA
— | −12, 18, 54 | 3.67 | L | Superior frontal gyrus
— | −8, 28, 48 | 3.59 | L | Superior medial gyrus
— | 2, 14, 64 | 3.45 | R | SMA
117 | −44, −16, 34 | 4.78 | L | Postcentral gyrus
102 | 16, 14, 8 | 3.83 | R | Caudate nucleus
89 | −26, 44, 36 | 5.22 | L | Middle frontal gyrus
81 | 16, 14, 8 | 3.83 | R | Caudate nucleus
51 | 30, 54, 18 | 4.13 | R | Middle frontal gyrus
14 | 30, 16, −14 | 3.73 | R | Insula lobe
11 | −52, −8, 14 | 3.43 | L | Postcentral gyrus
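The contrast in the Table S6 caption, (VideoFullResp+VideoSharedResp+VideoNoResp)–3VideoNoPain, is a zero-sum weighting of condition estimates, and the overlap with the Pain-localizer is a conjunction of two thresholded maps. A toy sketch of both steps, assuming made-up per-voxel estimates and a stand-in threshold (none of the numbers are the study's data):

```python
import numpy as np

# Toy per-voxel condition estimates for two voxels (illustrative only).
# Columns: VideoFullResp, VideoSharedResp, VideoNoResp, VideoNoPain.
betas = np.array([
    [2.0, 1.5, 1.0, 0.2],
    [0.5, 0.4, 0.3, 0.4],
])

# Weights sum to zero, so the contrast compares the three painful-video
# conditions (pooled over responsibility) against the non-painful videos.
c = np.array([1.0, 1.0, 1.0, -3.0])
contrast_value = betas @ c          # one value per voxel: [3.9, 0.0]

# Conjunction with a (toy) Pain-localizer mask: only voxels also active
# during first-hand pain are retained.
localizer_mask = np.array([True, True])
empathy_voxels = (contrast_value > 1.0) & localizer_mask  # [True, False]
```

In practice the thresholding would be on T-statistics with the study's height and extent criteria rather than on raw contrast values, but the masking logic is the same.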
References (46 in total)

Review 1.  Brain mechanisms of pain affect and pain modulation.

Authors:  Pierre Rainville
Journal:  Curr Opin Neurobiol       Date:  2002-04       Impact factor: 6.627

2.  Racial bias reduces empathic sensorimotor resonance with other-race pain.

Authors:  Alessio Avenanti; Angela Sirigu; Salvatore M Aglioti
Journal:  Curr Biol       Date:  2010-05-27       Impact factor: 10.834

3.  Neural responses to ingroup and outgroup members' suffering predict individual differences in costly helping.

Authors:  Grit Hein; Giorgia Silani; Kerstin Preuschoff; C Daniel Batson; Tania Singer
Journal:  Neuron       Date:  2010-10-06       Impact factor: 17.173

Review 4.  Pain matrices and neuropathic pain matrices: a review.

Authors:  Luis Garcia-Larrea; Roland Peyron
Journal:  Pain       Date:  2013-09-08       Impact factor: 6.961

5.  Their pain is not our pain: brain and autonomic correlates of empathic resonance with the pain of same and different race individuals.

Authors:  Ruben T Azevedo; Emiliano Macaluso; Alessio Avenanti; Valerio Santangelo; Valentina Cazzato; Salvatore Maria Aglioti
Journal:  Hum Brain Mapp       Date:  2012-07-17       Impact factor: 5.038

Review 6.  Cognitive neuroscience of social emotions and implications for psychopathology: examining embarrassment, guilt, envy, and schadenfreude.

Authors:  Kathryn F Jankowski; Hidehiko Takahashi
Journal:  Psychiatry Clin Neurosci       Date:  2014-05       Impact factor: 5.188

7.  Love hurts: an fMRI study.

Authors:  Yawei Cheng; Chenyi Chen; Ching-Po Lin; Kun-Hsien Chou; Jean Decety
Journal:  Neuroimage       Date:  2010-02-24       Impact factor: 6.556

8.  A multisensory investigation of the functional significance of the "pain matrix".

Authors:  André Mouraux; Ana Diukova; Michael C Lee; Richard G Wise; Gian Domenico Iannetti
Journal:  Neuroimage       Date:  2010-10-12       Impact factor: 6.556

9.  Reduced spontaneous but relatively normal deliberate vicarious representations in psychopathy.

Authors:  Harma Meffert; Valeria Gazzola; Johan A den Boer; Arnold A J Bartels; Christian Keysers
Journal:  Brain       Date:  2013-08       Impact factor: 13.501

10.  Vicarious neural processing of outcomes during observational learning.

Authors:  Elisabetta Monfardini; Valeria Gazzola; Driss Boussaoud; Andrea Brovelli; Christian Keysers; Bruno Wicker
Journal:  PLoS One       Date:  2013-09-05       Impact factor: 3.240

Cited by (12 in total)

1.  When your error becomes my error: anterior insula activation in response to observed errors is modulated by agency.

Authors:  Emiel Cracco; Charlotte Desmet; Marcel Brass
Journal:  Soc Cogn Affect Neurosci       Date:  2015-09-23       Impact factor: 3.436

2.  When your friends make you cringe: social closeness modulates vicarious embarrassment-related neural activity.

Authors:  Laura Müller-Pinzler; Lena Rademacher; Frieder M Paulus; Sören Krach
Journal:  Soc Cogn Affect Neurosci       Date:  2015-10-29       Impact factor: 3.436

3.  The causal role of the somatosensory cortex in prosocial behaviour.

Authors:  Laila Blömer; Carolina Fernandes-Henriques; Anna Henschel; Balint Kalista Lammes; Tatjana Maskaljunas; Selene Gallo; Riccardo Paracampo; Laura Müller-Pinzler; Mario Carlo Severo; Judith Suttrup; Alessio Avenanti; Christian Keysers; Valeria Gazzola
Journal:  Elife       Date:  2018-05-08       Impact factor: 8.140

4.  Social comparison in the brain: A coordinate-based meta-analysis of functional brain imaging studies on the downward and upward comparisons.

Authors:  Yi Luo; Simon B Eickhoff; Sébastien Hétu; Chunliang Feng
Journal:  Hum Brain Mapp       Date:  2017-10-24       Impact factor: 5.038

5.  Commanding or Being a Simple Intermediary: How Does It Affect Moral Behavior and Related Brain Mechanisms?

Authors:  Emilie A Caspar; Kalliopi Ioumpa; Irene Arnaldo; Lorenzo Di Angelis; Valeria Gazzola; Christian Keysers
Journal:  eNeuro       Date:  2022-10-17

6.  The Delaware Pain Database: a set of painful expressions and corresponding norming data.

Authors:  Peter Mende-Siedlecki; Jennie Qu-Lee; Jingrun Lin; Alexis Drain; Azaadeh Goharzad
Journal:  Pain Rep       Date:  2020-10-21

7.  Bidirectional cingulate-dependent danger information transfer across rats.

Authors:  Yingying Han; Rune Bruls; Efe Soyman; Rajat Mani Thomas; Vasiliki Pentaraki; Naomi Jelinek; Mirjam Heinemans; Iege Bassez; Sam Verschooren; Illanah Pruis; Thijs Van Lierde; Nathaly Carrillo; Valeria Gazzola; Maria Carrillo; Christian Keysers
Journal:  PLoS Biol       Date:  2019-12-05       Impact factor: 8.029

8.  A meta-analysis of neuroimaging studies on pain empathy: investigating the role of visual information and observers' perspective.

Authors:  Josiane Jauniaux; Ali Khatibi; Pierre Rainville; Philip L Jackson
Journal:  Soc Cogn Affect Neurosci       Date:  2019-08-31       Impact factor: 3.436

9.  A Generalizable Multivariate Brain Pattern for Interpersonal Guilt.

Authors:  Hongbo Yu; Leonie Koban; Luke J Chang; Ullrich Wagner; Anjali Krishnan; Patrik Vuilleumier; Xiaolin Zhou; Tor D Wager
Journal:  Cereb Cortex       Date:  2020-05-18       Impact factor: 5.357

Review 10.  The Anatomy of Suffering: Understanding the Relationship between Nociceptive and Empathic Pain.

Authors:  Jamil Zaki; Tor D Wager; Tania Singer; Christian Keysers; Valeria Gazzola
Journal:  Trends Cogn Sci       Date:  2016-03-01       Impact factor: 20.229

