Literature DB >> 34941992

Localized task-invariant emotional valence encoding revealed by intracranial recordings.

Daniel S Weisholtz1, Gabriel Kreiman2, David A Silbersweig1, Emily Stern1,3, Brannon Cha4,1, Tracy Butler5.   

Abstract

The ability to distinguish between negative, positive and neutral valence is a key part of emotion perception. Emotional valence has conceptual meaning that supersedes any particular type of stimulus, although it is typically captured experimentally in association with particular tasks. We sought to identify neural encoding for task-invariant emotional valence. We evaluated whether high-gamma responses (HGRs) to visually displayed words conveying emotions could be used to decode emotional valence from HGRs to facial expressions. Intracranial electroencephalography was recorded from 14 individuals while they participated in two tasks, one involving reading words with positive, negative and neutral valence, and the other involving viewing faces with positive, negative and neutral facial expressions. Quadratic discriminant analysis was used to identify information in the HGR that differentiates the three emotion conditions. A classifier was trained on the emotional valence labels from one task and was cross-validated on data from the same task (within-task classifier) as well as on data from the other task (between-task classifier). Emotional valence could be decoded in the left medial orbitofrontal cortex and middle temporal gyrus using both within-task and between-task classifiers. These observations suggest the presence of task-independent emotional valence information in the signals from these regions.
© The Author(s) 2021. Published by Oxford University Press.

Keywords:  classifier; decoding; emotion; intracranial EEG; valence

Year:  2022        PMID: 34941992      PMCID: PMC9164208          DOI: 10.1093/scan/nsab134

Source DB:  PubMed          Journal:  Soc Cogn Affect Neurosci        ISSN: 1749-5016            Impact factor:   4.235


Introduction

The ability to distinguish between negative, positive and neutral valence is a key part of emotion perception. In fact, one can scarcely define an emotional quality that is not either positive or negative in valence, as valence is an intrinsic characteristic of emotional experience and expression. A stimulus connoting negative valence suggests something aversive, unpleasant or repellent. It may lead one to exhibit defensive or self-protective reactions, to avoid further exposure and/or to experience unpleasant feelings, while a positively valenced stimulus may have the opposite effect. Humans can rapidly perceive valence from a wide variety of unrelated types of stimuli via virtually any sensory modality, from the very basic (e.g. a noxious somatosensory stimulus) to the complex (e.g. a beautiful work of art), even when there is no consciously experienced feeling in response to the stimulus. The central conjecture evaluated in this study is that all instances of positive emotion and all instances of negative emotion are alike at some level that can be distinguished by the nervous system. In other words, we assess whether the neural circuit representation of emotional valence can be abstracted away from the actual stimulus and task features used to define the emotion concept.
A large body of research has been dedicated to identifying the neural mechanisms underlying emotion perception using various methodologies. Invasive recordings from the human brain constitute a small proportion of this literature, but direct recordings can overcome several limitations inherent in non-invasive technologies: they measure brain responses with millisecond temporal resolution, millimeter spatial resolution and a high signal-to-noise ratio (SNR) (Lachaux ), and they can probe deep brain areas not easily accessed with non-invasive electrophysiology.
Intracranial electroencephalography (iEEG) studies of emotion perception have generally involved measuring event-related potentials or event-related spectral changes in response to emotionally laden and neutral stimuli (most commonly facial expressions, images of scenes, printed words or audio or video clips) and contrasting the responses between emotion conditions. A variety of limbic, paralimbic and frontal and temporal neocortical regions have been implicated (see Guillory and Bujarski (2014) for a review). Emotion processing often recruits brain regions engaged in perception or interpretation of the stimulus, most commonly regions involved in visual processing as most tasks utilize visual stimuli (Vuilleumier and Driver, 2007; Boucher ; Weisholtz ). The involvement of sensory regions suggests that the neural substrates of emotion perception or processing are, to some degree, task specific. Nevertheless, several limbic and multimodal cortical regions have been implicated in emotion perception across various types of tasks. From the iEEG literature alone, such regions have included the amygdala with tasks involving viewing emotional scenes (Oya ), emotional facial expressions (Krolak-Salmon ; Pourtois ,b; Sato ; Meletti ; Zheng ) or printed emotional words (Naccache ) and hearing vocal non-verbal emotional utterances (Dominguez-Borras ) or music (Omigie ); insula with tasks involving viewing emotional scenes (Brazdil ), facial expressions (Krolak-Salmon ) or printed emotional words (Ponz ); and orbitofrontal cortex in tasks involving viewing emotional facial expressions (Jung ) or emotion words (Ponz ) or listening to music (Omigie ). These studies have examined neural responses to stimuli within a particular task, leaving open the question of the degree to which the emotion-related findings are specific to the particular task or reflect task-invariant emotion coding. 
We sought to identify brain regions coding for emotional valence independent of processing domain by comparing within-subject neural responses to similar valence defined in distinct ways. We considered visually presented stimuli with negative, neutral and positive valence from two separate tasks with different types of stimulus sets conveying emotion in different ways—one language based and the other image based. We focused on the HGR as this frequency band has shown correspondence with neural activation with good spatial and temporal resolution (Crone ; Lachaux ). One approach to identify task-invariant neural responses is to examine between-task decoding accuracy in a machine learning setting (Piva ). We trained machine learning classifiers to use the HGR to discriminate between the three emotion valence conditions in each task separately and identified brain regions in which classifier performance was better than chance for both tasks individually. The tasks differed in both the manner in which emotion was conveyed (facial expression or words) and in the specific type of emotion conveyed. Negative faces depicted expressions of fear, and positive faces depicted expressions of happiness, while the negative and positive words depicted a range of emotions related to depressive and counter-depressive themes. The two tasks were alike only in their valence categories (positive, neutral and negative). To assess the degree of task invariance, we further assessed the degree of extrapolation when the classifiers were trained on one task and tested on the other. This technique identified brain regions in which high-gamma signals contain information about emotional valence independent of the specific emotion conveyed or the method by which it is conveyed (words vs faces).

Materials and methods

Participants

Patients with pharmacologically intractable epilepsy who were undergoing intracranial EEG monitoring for seizure localization at New York Presbyterian Hospital in New York City and at Brigham and Women’s Hospital in Boston were recruited to participate after meeting the following inclusion criteria: they had the capacity to consent, were fluent in English, were over 18 years old and were able to read. All protocols were approved by the IRB at each institution. The research was carried out in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki) for experiments involving humans.

Tasks

Participants completed two similar tasks (one verbal and one non-verbal), involving the viewing of stimuli with positive, neutral and negative emotional valence that were presented on a laptop screen at the bedside. Stimuli were presented using either E-Prime (Psychology Software Tools, Inc.) or the Psychophysics Toolbox (Figure 1). The task was implemented in an identical way on each platform. Half of the participants completed the verbal task first and the other half completed the non-verbal task first. Most participants completed both tasks on the same day, but three participants completed them on consecutive days.
Fig. 1.

Diagram of tasks. Each subject completed both a word (WD) task and a face (FA) task. Each task consisted of positive, neutral and negative stimuli with 24 trials per condition. Stimuli were presented in block design with six stimuli per block, 2 s presentation time and a jittered ISI around 2.8 s. Blocks were presented in pseudo-random order.

In the word (WD) task, stimuli consisted of single words presented in a white font within a white box on an otherwise black background, centered on the screen and subtending about 5–6° of visual angle vertically and 12–15° horizontally. There were 24 positive, 24 neutral and 24 negative words, which were either adjectives, nouns or verbs, chosen to be relevant to depressive and counter-depressive themes based on clinical experience and rated for suitability by a panel of three experts. Words were balanced across the categories for length, frequency within the lexicon and part of speech, with the exception that, within the neutral list, verbs were substituted for adjectives, given that adjectives are typically not free of emotional valence. Negative words included words such as burden and guilty, positive words included words such as praise and heroic, and neutral words included words such as clarinet and umbrella. The WD task was utilized in a previously published fMRI study (Epstein ).
In the face (FA) task, participants viewed images from the NimStim Set, an image bank of validated emotional facial expressions (Tottenham ). Images consisted of color photographs of naturally posed actors of different sex and ethnicity from the neck up on a blank background exhibiting facial expressions of fear (negative condition), happiness (positive condition) or a blank expression (neutral condition). Images were centered on the screen and subtended approximately 17–20° of visual angle vertically and 14–16° horizontally. There were 24 positive faces, 24 neutral faces and 24 negative faces. Example face stimuli are shown in Figures 1 and 2. In each task, the stimuli were presented one at a time in valence-specific six-stimulus blocks. Each stimulus appeared on the screen for 2 s followed by an inter-stimulus interval (ISI) jittered around an average of 2.8 s (range = 1.8–3.8 s). The participant was instructed to press a button with the right index finger in response to each stimulus, irrespective of the content. Participants were given up to 2 s to respond to each stimulus, and reaction time data were collected. Each task was analyzed separately by fitting the reaction times to a generalized linear mixed-effects model with condition (positive, negative or neutral) as a fixed effect, subject as a random effect and the natural logarithm as the link function.
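The reaction time analysis described above can be sketched as follows. This is an illustrative Python re-implementation, not the authors' code: statsmodels' MixedLM fits linear mixed models only, so the log link is approximated by modeling log-transformed reaction times with a subject random intercept, and all data below are synthetic.

```python
# Hedged sketch of the reaction time analysis: a mixed-effects model with
# condition as a fixed effect and subject as a random effect. The paper used
# a GLMM with a log link; here log(RT) with a random intercept approximates it.
# All data below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for subj in range(14):                        # 14 participants, as in the study
    subj_eff = rng.normal(0, 0.1)             # subject-specific random intercept
    for cond in ("negative", "neutral", "positive"):
        rts = np.exp(6.8 + subj_eff + rng.normal(0, 0.25, 24))  # 24 trials (ms)
        rows += [{"subject": subj, "condition": cond, "rt": r} for r in rts]
df = pd.DataFrame(rows)

model = smf.mixedlm("np.log(rt) ~ condition", df, groups=df["subject"])
result = model.fit()
print(result.fe_params)                       # intercept + two condition contrasts
```

A significant condition contrast here would correspond to a valence effect on reaction time; in the study, no such effect was found for either task.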

Electrophysiology data collection

Electrodes consisted of commercially available strips, grids and depth electrodes that were implanted in various locations based on clinical need. The number, type and location of the electrodes were dictated strictly by clinical needs and were not influenced by the research plan. iEEG was recorded using the XLTEK clinical EEG recording system (Natus Neuroworks) with a sampling rate of 500 Hz for most participants; one study was sampled at 250 Hz, one at 2000 Hz and four at 512 Hz. The stimulus presentation laptop sent a trigger pulse to the EEG headbox that was recorded along with the EEG signals and was used to identify the precise timing of stimulus presentation within the recordings.

Electrode localization

The iELVis software toolbox (Groppe ) was utilized to identify the precise locations of the intracranial electrodes. The Desikan–Killiany atlas (Desikan ), as implemented in iELVis and FreeSurfer (http://surfer.nmr.mgh.harvard.edu/), was used to label the locations of the cortical electrodes based on anatomical parcellation of each individual brain. Depth electrodes in hippocampus and amygdala were labeled based on FreeSurfer’s volumetric brain segmentation (aparc + aseg.mgz).

Data analyses

Data analyses were carried out using MATLAB (Mathworks, Natick, MA). Electrodes were removed from the analyses if markedly corrupted by artifact, and line noise was removed by applying a series of notch filters at 60 Hz and harmonics. Each electrode was then re-referenced against the average signal. The high-gamma amplitude (HGA) was extracted by applying an 80–150 Hz band-pass filter on the re-referenced signals and then extracting the analytic signal from the Hilbert transform. Additional frequency bands were also tested and are described in Supplementary Methods and Supplementary Table S1. HGR was calculated by subtracting the mean of the 1-s pre-stimulus baseline from the 1500 ms HGA signal post-stimulus onset. HGR was then binned into three 500 ms time windows representing the mean HGR during the first 500 ms following stimulus onset (bin 1), 500–1000 ms following stimulus onset (bin 2) and 1000–1500 ms following stimulus onset (bin 3). Quadratic discriminant analysis was then used in order to identify information in the HGR that differentiates the three emotion conditions (Hung ; Meyers and Kreiman, 2012; Singer and Kreiman, 2012). A classifier (classify function in MATLAB Statistics and Machine Learning Toolbox) was trained separately for each brain region, task and time bin on the three different emotion conditions using a ‘leave one out’ cross-validation approach. To identify emotion-related information in the signal that is independent from the stimulus type, the classifier performance was also tested on the opposite task from which it was trained using a completely analogous procedure (we refer to this as ‘between-task’ classification, as opposed to ‘within-task’ classification when the classifier was tested on the same task on which it was trained). Separate classifiers were trained and tested for each brain region containing at least five electrodes, combined across subjects. 
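The preprocessing chain above (notch filtering, common-average re-referencing, 80–150 Hz band-pass, Hilbert amplitude, baseline subtraction and 500-ms binning) can be sketched in Python with SciPy. This is an illustrative reconstruction on synthetic data, not the authors' MATLAB code; the filter orders and notch Q factor are assumptions.

```python
# Illustrative sketch of the high-gamma pipeline: notch out line noise,
# common-average re-reference, band-pass 80-150 Hz, take the Hilbert analytic
# amplitude (HGA), subtract the 1-s pre-stimulus baseline (HGR), then average
# into three 500-ms bins. Synthetic data; filter parameters are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, iirnotch

fs = 500                                     # sampling rate (Hz) for most patients
t = np.arange(-fs, int(1.5 * fs)) / fs       # -1.0 s .. +1.5 s around stimulus onset
rng = np.random.default_rng(1)
x = rng.standard_normal((8, t.size))         # 8 channels of synthetic iEEG

# 60 Hz line-noise notch plus harmonics
for f0 in (60, 120, 180):
    b, a = iirnotch(f0, Q=30, fs=fs)
    x = filtfilt(b, a, x, axis=-1)

x -= x.mean(axis=0)                          # common-average re-reference

# 80-150 Hz band-pass, then Hilbert analytic amplitude = HGA
b, a = butter(4, [80, 150], btype="bandpass", fs=fs)
hga = np.abs(hilbert(filtfilt(b, a, x, axis=-1), axis=-1))

baseline = hga[:, t < 0].mean(axis=-1, keepdims=True)
hgr = hga[:, t >= 0] - baseline              # HGR: baseline-subtracted HGA

bins = hgr.reshape(8, 3, fs // 2).mean(axis=-1)   # three 500-ms bins per channel
print(bins.shape)                            # (8, 3)
```

The resulting (electrodes × bins) features are what the classifier stage consumes, one feature matrix per trial.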
To reduce the impact of the multiple comparisons problem given the large number of classifiers and because our interest was in identifying brain regions that exhibited task-independent emotion information, we focused specifically on regions in which both within-task and between-task classifiers performed better than chance. This combination was relatively unlikely to occur by chance, even with modest performance thresholds. P-values were computed for each region/time bin pair using the permutation method and corrected for multiple comparisons across the experiment. Region/time bin pairs were considered significant if the familywise error rate (FWER) was less than 0.05. Among region-time bin pairs that survived the performance threshold, we investigated whether better than chance classifier performance was driven by the coding of valence or simply distinguishing emotion from non-emotion by comparing the proportion of emotion stimuli (positive and negative) that were classified with the correct valence as compared to the opposite valence using a one-sided binomial test. An analogous procedure was used to compare the proportion of emotion stimuli correctly classified vs misclassified as neutral, and among the misclassified emotion stimuli, the proportion labeled with the opposite valence as compared to the proportion misclassified as neutral.
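The permutation method for classifier P-values can be illustrated with a minimal sketch on synthetic data. A nearest-centroid classifier stands in for the actual quadratic discriminant analysis (an assumption for brevity), and the FWER correction across region/time-bin pairs is omitted; only the core idea is shown: shuffle the valence labels, re-run the classifier, and take the P-value as the fraction of null accuracies at least as large as the observed accuracy.

```python
# Minimal permutation-test sketch for decoding accuracy (synthetic data;
# nearest-centroid classifier as a stand-in for QDA).
import numpy as np

rng = np.random.default_rng(2)

def loo_centroid_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-class-centroid classifier."""
    n = len(y)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i
        cents = {c: X[mask & (y == c)].mean(axis=0) for c in np.unique(y)}
        pred = min(cents, key=lambda c: np.linalg.norm(X[i] - cents[c]))
        correct += pred == y[i]
    return correct / n

y = np.repeat([0, 1, 2], 24)                          # 24 trials per valence condition
X = rng.standard_normal((72, 5)) + y[:, None] * 1.5   # 5 "electrodes", separable means

observed = loo_centroid_accuracy(X, y)

# null distribution: shuffle the valence labels and re-run the classifier
null = [loo_centroid_accuracy(X, rng.permutation(y)) for _ in range(200)]
p = (1 + sum(a >= observed for a in null)) / (1 + len(null))
print(observed, p)
```

With three balanced classes, chance accuracy is 33.3%, so the null accuracies cluster near that value and a well-separated signal yields a small P.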

Results

We recorded intracranial field potential signals in 14 participants (age 25–58, 6 female). The average reaction time across subjects was 986 ± 239 ms (mean ± SD, WD) and 871 ± 315 ms (FA). There was no significant effect of emotional valence for either the WD task (P = 0.675, ANOVA test) or the FA task (P = 0.220, ANOVA test). Reaction time was significantly shorter for faces than for words (P < 0.001). Collectively, there were 947 intracerebral or subdural electrodes (Figure 3). High-gamma band (80–150 Hz) responses relative to pre-stimulus baseline were computed for each electrode. An example of HGR from an electrode in the left medial orbitofrontal cortex is shown in Figure 4 and Supplementary Figure S1. The neurophysiological responses from this electrode revealed a partial separation among the three emotional valences, particularly within the first second after stimulus onset, both for the FA task (Figure 4A) and for the WD task (Figure 4B). Notably, despite the large stimulus differences between the two tasks, the responses from this electrode were qualitatively similar between the two tasks: there was an increased HGR to negative (red) and neutral (black) stimuli compared to positive stimuli (green).
Fig. 3.

Locations of all 947 electrodes transformed into standard coordinate space and plotted together on Freesurfer’s average brain template. A. Surface electrodes. B. Depth electrodes (depicted with transparent cortical surfaces).

Fig. 4.

Example electrode in the left mOFC showing high-gamma responses (DHG, normalized by the pre-stimulus baseline, Methods) in the face task (A) and word task (B). Responses are aligned to stimulus onset at Time = 0. Red = negative, black = neutral, green = positive. Shaded error bars indicate standard error of the mean (n = 24 trials). The location of the electrode is depicted on the freesurfer average brain template adjacent to the plots.

An ANOVA was performed to test whether HGR discriminated between the three valence conditions for each electrode, time bin and task. This involved 4290 statistical tests (3 time bins × 2 tasks × 715 electrodes). At a statistical threshold of P < 0.05, there were 222 significant tests (5.2% of the total), which is about what would be expected by chance. Because of the trial-to-trial variability in individual electrode responses, the small number of trials and the large number of electrodes, we used a classifier analysis based on ensembles of electrodes. Classifiers were trained to associate emotional valence labels for each trial with HGR data in three consecutive 500-ms time bins starting at stimulus onset. The procedure is able to determine in a data-driven way which electrodes and trials are most useful for classification. The classifiers were trained using cross-validation by randomly selecting a subset of the trials for a given emotional valence and task for training and testing its performance on the remaining trials (within-task classifier, Methods). We examined each brain region containing at least five electrodes with an aim to identify brain regions in which HGR appeared sensitive to emotion independent of task in at least one of the three time bins.
We defined this as better than chance classifier performance on both tasks individually and on at least one of the two between-task classifiers (training on words and testing on faces or vice versa). Among the 947 electrodes, 753 were localized to the amygdala, hippocampus or one of the cortical regions in the Desikan–Killiany atlas (most of the remaining electrodes were in white matter). Among these regions, there were 40 regions with at least five electrodes that were submitted for further analysis (Table 1). Because a language task was used, it was considered probable that some effects would be lateralized, and, thus, homologous regions from the two hemispheres were considered separately. Classifier accuracy (performance) was then tested on trials that were left out of the training set (Figure 2). The left medial orbitofrontal cortex (mOFC) during time bin 1 and the left middle temporal gyrus (MTG) during time bin 2 showed significantly better than chance classifier performance for both tasks individually and between tasks when trained on words and tested on faces for the high-gamma frequency band (P < 0.005; Figures 5 and 6; Supplementary Table S1 for findings in other frequency bands). In both cases, the classifier trained on words performed better than chance when tested on both words and faces. The classifier trained on faces performed better than chance when tested on faces but did not exceed threshold when tested on words. Mean linear coefficients are depicted in Supplementary Figure S2. Because these two regions contained markedly different numbers of electrodes (6 in the left mOFC, 76 in the left MTG), the left MTG was re-analyzed for time bin 2 (500–1000 ms) using different random subsamples of six electrodes from this region for each iteration of the classifier. With electrode subsampling, the findings were no longer significant in this region, suggesting that there are subsets of electrodes that drive the classification performance.
Electrode weights, as estimated from the absolute value of the mean linear coefficients, are depicted by location in Supplementary Figure S3.
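The within-task and between-task classification scheme can be illustrated with a short sketch. It uses scikit-learn's QuadraticDiscriminantAnalysis as a stand-in for MATLAB's classify function and entirely synthetic "word" and "face" features in which a shared valence signal is embedded; the trial and electrode counts mirror the text, but everything else is an assumption.

```python
# Sketch of within- vs between-task valence decoding on synthetic data.
# A shared valence-dependent mean is embedded in both "tasks" so that a
# classifier trained on one task can extrapolate to the other.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(3)
y = np.repeat([0, 1, 2], 24)                      # neg / neutral / pos, 24 trials each

def make_task(task_offset):
    # shared valence signal + task-specific offset + noise; 5 "electrodes"
    return y[:, None] * 2.0 + task_offset + rng.standard_normal((72, 5))

X_word, X_face = make_task(0.0), make_task(0.5)

# within-task: leave-one-out cross-validation on the word task
qda = QuadraticDiscriminantAnalysis()
within = np.mean([
    qda.fit(X_word[tr], y[tr]).predict(X_word[te])[0] == y[te][0]
    for tr, te in LeaveOneOut().split(X_word)
])

# between-task: train on all word trials, test on all face trials
between = qda.fit(X_word, y).score(X_face, y)
print(within, between)
```

When the valence signal is task-specific rather than shared, the within-task accuracy stays high while the between-task accuracy falls to chance, which is the dissociation the analysis is designed to detect.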
Table 1.

Collective number of electrodes in each brain region

Region                        N    Region                        N    Region                        N
Amygdala-L                    8    lateralorbitofrontal-L       20    rostralmiddlefrontal-L       30
Amygdala-R                    7    lateralorbitofrontal-R        8    rostralmiddlefrontal-R       13
Hippocampus-L                 6    lingual-L                    10    superiorfrontal-L            13
Hippocampus-R                 6    lingual-R                     1    superiorfrontal-R             5
bankssts-L                    9    medialorbitofrontal-L         6    superiorparietal-L            3
bankssts-R                    2    medialorbitofrontal-R         3    superiorparietal-R            4
caudalanteriorcingulate-L     2    middletemporal-L             76    superiortemporal-L           66
caudalanteriorcingulate-R     0    middletemporal-R             16    superiortemporal-R           12
caudalmiddlefrontal-L         8    parahippocampal-L            10    supramarginal-L              39
caudalmiddlefrontal-R         6    parahippocampal-R             4    supramarginal-R              16
entorhinal-L                  8    parsopercularis-L            20    temporalpole-L               10
entorhinal-R                  0    parsopercularis-R             6    temporalpole-R                2
frontalpole-L                 0    parsorbitalis-L              13    transversetemporal-L          1
frontalpole-R                 2    parsorbitalis-R               2    transversetemporal-R          0
fusiform-L                   28    parstriangularis-L           13    cuneus-L                      0
fusiform-R                    9    parstriangularis-R            6    cuneus-R                      0
inferiorparietal-L           19    postcentral-L                42    isthmuscingulate-L            0
inferiorparietal-R           10    postcentral-R                13    isthmuscingulate-R            0
inferiortemporal-L           49    precentral-L                 30    paracentral-L                 0
inferiortemporal-R           16    precentral-R                 18    paracentral-R                 0
insula-L                      3    precuneus-L                   1    pericalcarine-L               0
insula-R                      2    precuneus-R                   1    pericalcarine-R               0
lateraloccipital-L           15    rostralanteriorcingulate-L    1    posteriorcingulate-L          0
lateraloccipital-R            3    rostralanteriorcingulate-R    1    posteriorcingulate-R          0

Bolded regions contained ≥5 electrodes and were included in the analyses.

Fig. 2.

HGRs from each of five electrodes in a particular brain region is entered as training data into a classifier along with condition labels for each trial. The trained classifier is then tested with data from three trials that were not included in the training set (one trial for each of the three conditions). This procedure is repeated until all trials have been tested, and the classifier performance is calculated as the percentage of trials correctly classified.

As the classifier labeled trials from among three categories, better than chance performance could be achieved even if only one of the three categories could be discriminated from the other two. Neutral stimuli lack emotional content and are qualitatively different from the other two categories for this reason. Thus, we explored whether the classifier’s success in mOFC and MTG depended only on an ability to discriminate emotion from no emotion or whether positive stimuli could be correctly discriminated from negative stimuli. We found that across the four classifier analyses (the two within-task analyses and the two between-task analyses), the emotion stimuli (positive and negative trials) were correctly labeled more often than they were labeled with the opposite emotional valence, both in the left mOFC during bin 1 (81 correct emotion labels (46% of emotion trials), 39 incorrect emotion labels (22%), P < 0.0001) and in the left MTG during bin 2 (75 correct emotion labels (43%), 50 incorrect emotion labels (28%), P = 0.016; Figure 7; see Supplementary Figure S4 for full confusion matrices).
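The reported one-sided binomial tests can be checked directly from the counts given in the text. The sketch below assumes the null hypothesis that, among emotion trials receiving an emotion label, correct and opposite valence labels are equally likely (p = 0.5), which is consistent with the comparison described in Methods.

```python
# Recomputing the one-sided binomial tests from the counts reported in the
# text (null: correct vs opposite valence labels equally likely, p = 0.5 --
# an assumption consistent with the description in Methods).
from scipy.stats import binomtest

# left mOFC, bin 1: 81 correct vs 39 opposite-valence emotion labels
p_mofc = binomtest(81, n=81 + 39, p=0.5, alternative="greater").pvalue

# left MTG, bin 2: 75 correct vs 50 opposite-valence emotion labels
p_mtg = binomtest(75, n=75 + 50, p=0.5, alternative="greater").pvalue

print(p_mofc, p_mtg)   # close to the reported P < 0.0001 and P = 0.016
```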
Fig. 7.

To evaluate whether the classifier’s success in mOFC and MTG depended only on an ability to discriminate emotion from no emotion (neutral stimuli) or whether positive stimuli could be correctly discriminated from negative stimuli, we examined the misclassification pattern among emotion stimuli (positive and negative faces and words). Combined across the four classifiers (FA-FA, FA-WD, WD-FA, WD-WD), emotion stimuli were more likely to be classified with the correct valence (EC) than with the opposite valence (EO) in the L mOFC and L MTG, indicating that the signal contained information discriminating the two emotional valences from each other. Emotion stimuli were also more likely to be classified correctly than misclassified as neutral. EC = emotion stimuli classified correctly; EN = emotion stimuli misclassified as neutral; EO = emotion stimuli misclassified as the opposite emotion valence; * = P < 0.05; ** = P < 0.0001.

Fig. 6.

Within-task and cross-task mean classifier performance (standard deviations in parentheses) for the L mOFC in bin 1 and the L MTG in bin 2. These two regions showed better than chance performance for the within-task classifiers for both words and faces as well as one of the cross-task classifiers. Performance colored red exceeds the significance threshold for P < 0.05. On the color bar, the white line indicates chance performance (33.3%), and the red line indicates the threshold for performance significantly better than chance (P < 0.05). NOTE: This figure requires color.

Discussion

Social interactions constitute the essential fabric of daily experience. They depend on each individual’s ability to recognize emotions expressed by others either verbally or non-verbally. Here, we sought to identify neural substrates of emotional valence and to assess whether those neural substrates represent abstract emotional concepts or task-specific signals. Consistent with earlier work, we found that neural responses could distinguish between different emotional valences (Figure 4) both in a task involving language and a task involving faces (Figure 1). We used a machine learning classifier to quantify the extent to which emotional valence could be read out in single trials (Figure 2). The classifier was able to discriminate emotional valence when trained and tested on different partitions of the trials within each task, consistent with a body of earlier work demonstrating task-specific representation of emotional valence throughout multiple brain regions. Two brain regions, the MTG and OFC, stood out from the rest because their representation allowed the classifier to extrapolate between tasks (Figure 5).
Fig. 5.

Electrode locations in the left mOFC (red) and left MTG (brown). Brain regions colored dark gray were included in the analyses but did not show significant results. Brain regions colored light gray were excluded from analysis due to inadequate electrode coverage (<5 electrodes).

The sequence of cortical activation involved in the processing of a stimulus generally follows a pathway beginning in primary sensory cortex and propagating to higher cortical areas with prominent feedback modulation at multiple stages of processing as features of the stimulus are decoded. Different types of emotional stimuli may require distinct processing steps to decode the emotional valence. For example, within the visual modality, some investigators have suggested that the analysis of low-frequency visual features of fearful facial expressions may be adequate to activate the amygdala via a magnocellular retinal-collicular-pulvinar pathway that bypasses visual cortex (Vuilleumier ). In contrast, representing the emotional content in printed words requires fine-grained decoding of high spatial-frequency information to represent the visual word form, and lexicosemantic transformation to decode word meaning that likely involves peri-Sylvian language areas (Weisholtz ). Emotional content in stimuli can modulate activity at multiple stages of processing specific to a particular task, including areas of language cortex (Beauregard ; Maddock ; Cato ; Kuchinke ), visual cortex (Vuilleumier ; Pessoa ) and auditory cortex (Sander and Scheich, 2001; Grandjean ; Liebenthal ). It is clear that emotion impacts the perceptual/cognitive processing stream in a manner that is to some extent dependent on the particularities of the stimuli used to convey the emotion. While these neural changes can be utilized by a machine learning classifier to decode emotional valence categories, it is unclear if emotion is truly coded in these perceptual/cognitive areas or if the neural changes reflect augmentation of perceptual/cognitive processing of the emotional stimuli. At a basic level, emotional valence has meaning independent of task or stimulus type. 
Classifier decoding analysis can identify valence-related information in a neural signal that is independent of the particular task or stimulus type: if a classifier trained on one task can decode stimulus valence from a qualitatively different stimulus set, the information it exploits cannot be tied to the original stimuli. There was little similarity in the sensory inputs between the verbal stimuli in the WD task and the face images in the FA task, aside from the fact that the emotional valence of the stimuli could be broadly categorized as negative, neutral and positive. Nevertheless, despite the heterogeneity between the two tasks, we found that some classifiers could read out valence information not only within a task but also across tasks. Specifically, within the mOFC and MTG, classifiers trained exclusively on the WD task were able to extract valence information from neural responses during the FA task, on which they were not trained. This indicates that the mOFC and MTG may represent emotional valence-related information independently of the representation of the particular stimuli or the manner in which that emotional content is discerned from the stimuli (rapid visual detection vs lexicosemantic transformation). Thus, the between-task extrapolation effect is likely not driven simply by emotional modulation of circuitry involved in processing facial identity or word meaning. The lack of symmetry between the two between-task classifiers was interesting but not necessarily surprising. Our criteria for identifying task-invariant valence information required better-than-chance decoding within each task as well as with at least one of the two between-task classifiers; successful decoding in one direction does not imply that the converse must also hold. One possible explanation for the asymmetry is differing signal-to-noise ratios (SNRs) between the two tasks: a classifier trained on a task with higher SNR may perform better when tested on a lower-SNR dataset than the converse.
Both the mOFC and the MTG are high-order multimodal cortical association areas that have been implicated in emotional processing. The OFC has been closely linked with the processing of emotion-related information supporting goal-directed behavior. It has been proposed that the OFC represents changing and relative reward values (Kringelbach and Rolls, 2004) and that it may represent the reward and punishment value of primary as well as learned reinforcers, allowing behavior to change when reinforcement values change (Rolls, 2000). Thus, the OFC appears to monitor the affective properties of stimuli from various sensory modalities and is therefore ideally situated to process valence in a task-invariant manner. The OFC has been implicated in the processing of both emotional facial expressions and emotion words. Ventral frontal lobe damage can impair the identification of facial expressions even in patients whose facial recognition is intact (Hornak). Bilateral OFC lesions can also impair emotional voice discrimination (Hornak). Magnetoencephalography can detect early involvement of the OFC in processing affectively charged visual scenes (Rudrauf) and phase-locking between the OFC and amygdala in response to emotional facial expressions (Cushing). The emotional valence of written words can also modulate OFC activation seen with functional MRI (Lewis). The middle temporal gyrus (MTG) is a multimodal association area on the lateral temporal lobe, bounded inferiorly and posteriorly by visual association cortex and superiorly by auditory association cortex.
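The between-task scheme described above can be sketched with synthetic data. This is a minimal illustration, not the authors' pipeline: the feature construction, trial counts and the use of scikit-learn's quadratic discriminant analysis on simulated Gaussian features are all assumptions made for the example.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)

def simulate_task(n_per_class, n_features, class_means):
    """Simulate per-trial HGR feature vectors (e.g. binned high-gamma power
    per electrode) for negative (0), neutral (1) and positive (2) trials."""
    X, y = [], []
    for label, mu in enumerate(class_means):
        X.append(rng.normal(loc=mu, size=(n_per_class, n_features)))
        y.append(np.full(n_per_class, label))
    return np.vstack(X), np.concatenate(y)

# A task-invariant valence code: each valence class shifts the features
# in the same direction whether the stimuli are words (WD) or faces (FA).
means = [-0.8, 0.0, 0.8]
X_words, y_words = simulate_task(40, 6, means)   # WD task
X_faces, y_faces = simulate_task(40, 6, means)   # FA task

clf = QuadraticDiscriminantAnalysis()
clf.fit(X_words, y_words)                        # train on word valence only

within = clf.score(X_words, y_words)             # same-task accuracy (no
                                                 # cross-validation, so optimistic)
between = clf.score(X_faces, y_faces)            # between-task transfer
print(f"within-task accuracy:  {within:.2f}")
print(f"between-task accuracy: {between:.2f} (chance = 0.33)")
```

Because the simulated valence code is shared across the two "tasks", the classifier transfers; if the code were task-specific, the between-task score would fall to chance.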
Lesions to this region can lead to deficits in word comprehension and naming (Dronkers), and its functional and structural connectivity with peri-Sylvian language areas positions it as an important region for language comprehension (Turken and Dronkers, 2011) and possibly for semantic processing more generally (Binder and Desai, 2011). Functional imaging studies have shown that emotional content in words can modulate MTG activity (Beauregard; Cato; Weisholtz). The posterior portion of the MTG and the adjacent superior temporal sulcus (STS) are also involved in face processing and have been implicated, in particular, in the perception of facial expression (Haxby, 2002; Said). It has been proposed that the posterior MTG/STS represents changeable aspects of faces independent of facial identity (Haxby), although the notion of two truly dissociable systems for the recognition of facial identity and facial expression has been questioned (Calder and Young, 2005). In a human iEEG study, a decoding analysis was able to discriminate fearful from happy facial expressions using information from the high-gamma band and from frequencies below 30 Hz in the lateral and inferior temporal cortex, although performance was better in the inferior temporal cortex, contrary to prediction (Tsuchiya). Emotional scenes (Sabatinelli) and emotional gestures (Grosbras and Paus, 2006; Flaisch) have also been shown to modulate activity in portions of lateral temporal neocortex. The variety of types of emotional stimuli that engage the MTG may indicate an emotion function independent of stimulus type or task, but it is also possible that emotion modulates various types of stimulus representations in the MTG and adjacent regions. The fact that the MTG classifier could decode facial expressions when trained on word valence may indicate regions of MTG that represent emotional valence more generally.
Alternatively, emotional valence may modulate representations of words and faces that have overlapping anatomical fields, at least within the spatial resolution of an iEEG electrode. Variability in the neural response to stimuli in the same category (with the same emotional valence label) can occur due to noise in the signal, differences in the degree to which different stimuli evoke the emotional connotations they are intended to evoke, and distractions during the task. Typically, such variability is dealt with by averaging across trials, which assigns equal weight to each trial. This approach risks missing the signal within the noise when the number of trials is small. The classifier analysis allows decoding at the single-trial level and is sensitive to relevant information in the signal, even with a small number of trials, because trials (and electrodes) containing information relevant to the condition labels can be weighted more strongly than those that do not. Similarly, responses may vary from electrode to electrode within a brain region due to a variety of factors, such as electrode artifact, epileptiform activity, or anatomic distributions that do not map properly onto the gyral patterns reflected in the Desikan–Killiany atlas, and averaging across electrodes within a region may obscure findings by assigning equal weight to relevant and irrelevant electrodes. Decoding analysis uses a data-driven approach to assign weight to the most informative electrodes and trials, at the expense of some loss of temporal and spatial precision. The MTG is a considerably larger region than the mOFC, and in our study, there were considerably more electrodes covering the left MTG than the left mOFC (76 vs 6). We repeated the analyses of the left MTG, randomly subsampling the electrodes down to 6 on each iteration of the classifier, so as to equalize the amount of data and make the performance results more comparable between the two regions.
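The subsampling control can be illustrated as follows. Everything here is a synthetic sketch under stated assumptions: the data are simulated, the electrode indices are hypothetical, and only a few feature columns are made informative, mimicking a functionally heterogeneous region where most contacts carry little valence information.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_electrodes, n_keep = 120, 76, 6
y = np.repeat([0, 1, 2], n_trials // 3)      # negative / neutral / positive

# Assume only a few "electrodes" (feature columns) carry valence information.
X = rng.normal(size=(n_trials, n_electrodes))
informative = [3, 17, 42]                    # hypothetical indices
for col in informative:
    X[:, col] += 0.9 * (y - 1)               # class-dependent shift

# Repeat the classifier with a fresh random 6-electrode subsample each time.
scores = []
for _ in range(100):
    keep = rng.choice(n_electrodes, size=n_keep, replace=False)
    scores.append(cross_val_score(QuadraticDiscriminantAnalysis(),
                                  X[:, keep], y, cv=5).mean())

print(f"mean accuracy with 6/76 electrodes: {np.mean(scores):.2f} (chance = 0.33)")
```

With 3 informative columns out of 76, a random draw of 6 includes at least one informative column only about a fifth of the time, so the average subsampled accuracy sits near chance even though the full electrode set is decodable.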
However, this resulted in the MTG classifiers no longer performing better than chance. While the MTG was fairly well covered by electrodes (Figure 5), the region is functionally heterogeneous, and it is unlikely that all electrodes contributed equally to the classifier performance. Randomly sampling only 6 of the 76 electrodes likely did not consistently include enough relevant electrodes to mirror the performance of the classifiers that included all 76 electrodes. The classifier was trained to distinguish three valence categories, but the classifier performances we report could have been achieved even if the classifier was only able to distinguish one of the stimulus types from the other two. For example, if a signal distinguishes neutral stimuli from emotion-laden stimuli but represents positive and negative valence similarly, a classifier might decode neutral stimuli very successfully but would perform only at chance (50%) when distinguishing positive from negative stimuli. In this scenario, a classifier could achieve, in principle, an overall performance level as high as 67%. We investigated this possibility and found that within the left mOFC and left MTG, emotion trials were more likely to be labeled with the correct emotional valence than the incorrect emotional valence, suggesting that positive and negative emotions can be discriminated from each other in these regions. Thus, the neural signal contains information about emotional valence and not just about the presence or absence of emotional content. Limitations of this study include the low number of trials per condition and variable electrode locations across participants. As with any study employing invasive human brain recordings, the participants are limited to a clinical population (in this case, patients with epilepsy) whose neurophysiological properties can differ from those of healthy individuals. The low number of trials likely contributed to inconclusive findings at the single-electrode level.
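The 67% ceiling above follows from simple arithmetic over balanced classes, worked out here for clarity:

```python
# With three balanced classes, a classifier that perfectly separates neutral
# from emotional trials but only guesses between positive and negative
# tops out at 1/3 * 1.0 + 2/3 * 0.5 = 2/3 overall accuracy.
p_neutral, p_emotional = 1 / 3, 2 / 3
acc_neutral = 1.0      # neutral trials always labeled correctly
acc_emotional = 0.5    # positive vs negative is a coin flip
ceiling = p_neutral * acc_neutral + p_emotional * acc_emotional
print(f"ceiling accuracy: {ceiling:.2%}")  # → 66.67%
```

Observed performance above this ceiling, or correct-over-incorrect labeling among emotion trials as reported here, rules out a purely neutral-vs-emotional code.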
While the classifier analysis allowed for the identification of task-invariant emotional valence encoding, the need to bin signals across time and to combine electrodes within brain regions limited the spatial and temporal specificity of the findings. High-gamma activity (HGA) was used as a metric of brain activity based on a body of evidence demonstrating consistent and well-localized task-related activation of sensorimotor and language areas, but additional valence-related information is likely encoded in other frequency bands as well (Supplementary Table S1). The amygdala is known to be involved in representing emotional properties of experimental stimuli but did not appear as a significant finding in the primary analysis of this study. It is possible that such an effect would have been detected with a greater number of trials or amygdala electrodes, but it may also be that amygdala activity carries more valence-relevant information in other frequency bands. In fact, when other frequency bands were examined in a secondary analysis, the right amygdala showed significant task-invariant valence information in the low-gamma band (30–80 Hz) during bin 1 (P = 0.035; Supplementary Table S1). The between-task classifier appears to be a promising approach for identifying task-invariant information in neural signals, but further research is needed to clarify the temporal dynamics of these signals as well as their spatial specificity. Additionally, different parts of the brain may carry information in different frequency bands, and further research is needed to understand the relationships between frequency band, brain location and task.
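Band-limited activity of the kind discussed here (high gamma, low gamma) is commonly obtained by bandpass filtering followed by envelope extraction. The snippet below is a generic illustration of that step using a synthetic trace; the sampling rate, band edges and filter order are assumptions for the example and are not the paper's exact parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power_envelope(signal, fs, low, high, order=4):
    """Return the instantaneous power envelope of `signal` in [low, high] Hz."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="bandpass")
    filtered = filtfilt(b, a, signal)    # zero-phase bandpass
    analytic = hilbert(filtered)         # analytic signal
    return np.abs(analytic) ** 2         # power envelope

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# Synthetic iEEG-like trace: a 100 Hz (high-gamma) burst in the second half.
trace = np.random.default_rng(2).normal(scale=0.2, size=t.size)
trace[t >= 1.0] += np.sin(2 * np.pi * 100 * t[t >= 1.0])

hg = band_power_envelope(trace, fs, 80, 150)   # low gamma would use 30, 80
print(f"mean high-gamma power, first vs second half: "
      f"{hg[t < 1.0].mean():.3f} vs {hg[t >= 1.0].mean():.3f}")
```

Binning such an envelope into time windows per trial yields the kind of feature vectors a valence classifier can be trained on, and swapping the band edges repeats the analysis for other frequency bands.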

Conclusions

Viewing negatively valenced, positively valenced and neutral stimuli evoked changes in the high-gamma band that differentiated the three valence conditions in the left mOFC and left MTG. We showed that a classifier trained to decode emotion from words can perform better than chance when decoding emotion from facial expressions, even though it was never trained on facial expression data; the signal in these regions therefore contains valence-related information that is independent of the method by which the emotional valence is conveyed (e.g. via facial expression or words). The results suggest that the mOFC and MTG encode general, stimulus-independent valence-related information that can be applied in different contexts and may provide a mechanism by which qualitatively different items can be compared on the basis of emotional valence.
References (54 in total)

Review 1.  The orbitofrontal cortex and reward.

Authors:  E T Rolls
Journal:  Cereb Cortex       Date:  2000-03

2.  The neural substrate for concrete, abstract, and emotional word lexica: a positron emission tomography study.

Authors:  M Beauregard; H Chertkow; D Bub; S Murtha; R Dixon; A Evans
Journal:  J Cogn Neurosci       Date:  1997-07

3.  iELVis: An open source MATLAB toolbox for localizing and visualizing human intracranial electrode data.

Authors:  David M Groppe; Stephan Bickel; Andrew R Dykstra; Xiuyuan Wang; Pierre Mégevand; Manuel R Mercier; Fred A Lado; Ashesh D Mehta; Christopher J Honey
Journal:  J Neurosci Methods       Date:  2017-02-10

4.  The dorsomedial prefrontal cortex computes task-invariant relative subjective value for self and other.

Authors:  Matthew Piva; Kayla Velnoskey; Ruonan Jia; Amrita Nair; Ifat Levy; Steve Wc Chang
Journal:  Elife       Date:  2019-06-13

5.  Neural correlates of processing valence and arousal in affective words.

Authors:  P A Lewis; H D Critchley; P Rotshtein; R J Dolan
Journal:  Cereb Cortex       Date:  2006-05-12

6.  Intracerebral γ modulations reveal interaction between emotional processing and action outcome evaluation in the human orbitofrontal cortex.

Authors:  Julien Jung; Dimitri Bayle; Karim Jerbi; Juan R Vidal; Marie-Anne Hénaff; Tomas Ossandon; Olivier Bertrand; François Mauguière; Jean-Philippe Lachaux
Journal:  Int J Psychophysiol       Date:  2010-10-08

7.  Beyond the amygdala: Linguistic threat modulates peri-sylvian semantic access cortices.

Authors:  Daniel S Weisholtz; James C Root; Tracy Butler; Oliver Tüscher; Jane Epstein; Hong Pan; Xenia Protopopescu; Martin Goldstein; Nancy Isenberg; Gary Brendel; Joseph LeDoux; David A Silbersweig; Emily Stern
Journal:  Brain Lang       Date:  2015-11-11

8.  Emotion processing in words: a test of the neural re-use hypothesis using surface and intracranial EEG.

Authors:  Aurélie Ponz; Marie Montant; Catherine Liegeois-Chauvel; Catarina Silva; Mario Braun; Arthur M Jacobs; Johannes C Ziegler
Journal:  Soc Cogn Affect Neurosci       Date:  2013-03-11

Review 9.  Lesion analysis of the brain areas involved in language comprehension.

Authors:  Nina F Dronkers; David P Wilkins; Robert D Van Valin; Brenda B Redfern; Jeri J Jaeger
Journal:  Cognition       Date:  2004 May-Jun

Review 10.  Modulation of visual processing by attention and emotion: windows on causal interactions between human brain regions.

Authors:  Patrik Vuilleumier; Jon Driver
Journal:  Philos Trans R Soc Lond B Biol Sci       Date:  2007-05-29

