Keith A Bush1, Clinton D Kilts1. 1. Brain Imaging Research Center, Department of Psychiatry, University of Arkansas for Medical Sciences, Little Rock, Arkansas, United States of America.
Abstract
In this study we merged methods from machine learning and human neuroimaging to test the role of self-induced affect processing states in biasing the affect processing of subsequent image stimuli. To test this relationship we developed a novel paradigm in which (n = 40) healthy adult participants observed affective neural decodings of their real-time functional magnetic resonance image (rtfMRI) responses as feedback to guide explicit regulation of their brain (and corollary affect processing) state towards a positive valence goal state. By this method individual differences in affect regulation ability were controlled. Attaining this brain-affect goal state triggered the presentation of pseudo-randomly selected affectively congruent (positive valence) or incongruent (negative valence) image stimuli drawn from the International Affective Picture Set. Separately, subjects passively viewed randomly triggered positively and negatively valent image stimuli during fMRI acquisition. Multivariate neural decodings of the affect processing induced by these stimuli were modeled using the task trial type (state- versus randomly-triggered) as the fixed-effect of a general linear mixed-effects model. Random effects were modeled subject-wise. We found that self-induction of a positive valence brain state significantly positively biased valence processing of subsequent stimuli. As a manipulation check, we validated affect processing state induction achieved by the image stimuli using independent psychophysiological response measures of hedonic valence and autonomic arousal. We also validated the predictive fidelity of the trained neural decoding models using brain states induced by an out-of-sample set of image stimuli. Beyond its contribution to our understanding of the neural mechanisms that bias affect processing, this work demonstrated the viability of novel experimental paradigms triggered by pre-defined cognitive states. 
This line of individual differences research potentially provides neuroimaging scientists with a valuable tool for exploring the roles and identities of intrinsic cognitive processing mechanisms that shape our perceptual processing of sensory stimuli.
Our capacity to process and regulate emotions is central to our ability to optimize psychosocial functioning and quality of life [1]. As a corollary, disruptions in emotion processing and regulation are broadly ascribed to psychiatric illnesses including borderline personality disorder, depression, anxiety disorders, PTSD, and substance-use disorders [2], which negatively impact quality of life and functioning [3, 4]. In light of this, scientists and clinicians seek to both develop and understand mental strategies that volitionally reduce negatively biased emotional states. Neuroimaging, in particular, has provided critical insight into the functional neurocircuits involved in efficacious emotion regulation strategies [5, 6]. However, the basic neurobiological mechanisms by which mental strategies induce adaptive emotion processing over time remain elusive.

Research into the effects of temporal context on affect and emotion processing may have implications for increasing our understanding of the neural bases of emotion regulation. Prior work has demonstrated that changing affective context prior to an emotional target shapes the processing of that target. Such priming effects both accelerate and weaken the emotional response to affectively congruent target stimuli [7]. Manipulations of affect processing state impact the temporal structure of the neural responses to subsequent affective image stimuli [8] as well as the corollary psychophysiological responses to those stimuli [9, 10]. Further, stimulus-cued emotion processing states bias the self-reported perception of successive emotional stimuli [11]. These findings are consistent with effects that would be predicted by the deployment of situational and attentional modification strategies according to the process model of emotion regulation [12] and point to potential mechanisms underlying emotion regulation-related changes to emotion processing.
However, the neural representation of the observed ability of affective cognitions related to these strategies to bias subsequent emotional responses has not yet been tested. Thus, the primary aim of this work was to contribute to our knowledge of the mechanisms underlying emotion regulation (operationalized as affect regulation) by experimentally demonstrating that self-induced and verified affect processing states bias the affect processing of subsequent image stimuli.

Real-time functional magnetic resonance imaging (rtfMRI), when used to generate brain activation feedback [13] (i.e., rtfMRI-guided neuromodulation or neurofeedback), represents a promising methodology that has not, to our knowledge, been applied for mechanistic testing of how the neural correlates of such feedback-induced affect processing states bias subsequent affect processing. Here, the applied advantage of rtfMRI is that self-induced neurocognitive states (achieved via rtfMRI guidance) can be verified and used as independent experimental variables to trigger subsequent affective stimulus-response characterizations. Yet, a challenge to rtfMRI-guided neuromodulation studies, and brain computer interface (BCI) research in general, is the large individual variation observed in subjects’ ability to volitionally modulate their cognitive states–the well-known “BCI-illiteracy phenomenon” [14].

Within BCI studies, neurophysiological and psychological variables (e.g., self-confidence and concentration) have been shown to significantly predict performance variation [15-17]. However, very little is known about the source of individual differences in the ability to volitionally regulate affective states. Therefore, the secondary aim of this project was to characterize individual variation in the ability to self-induce affective states using neurofeedback according to the subjects’ unguided self-induction ability.
This research has direct clinical relevance for understanding the neuroregulation capabilities of psychiatric patients and identifying those most or least capable of guided affect regulation.

To explore our aims, we developed a novel task in which healthy adult participants utilized rtfMRI feedback to explicitly regulate their brain response and corollary affect processing states toward a goal of extreme pleasantness (i.e., positive valence). Attaining this brain-affect state triggered the presentation of an affectively congruent (positive valence) or incongruent (negative valence) image stimulus drawn from the International Affective Picture Set [18] (IAPS). Between regulation trials participants passively viewed (without regulation) IAPS stimuli associated with either positive or negative valence. We then compared image stimulus-cued brain and affective responses arising from explicitly self-induced, feedback-facilitated positive valence states versus random affective states (passive viewing) and tested the ability of self-induced positive valence states to bias the affect processing of subsequent image stimuli.

Our results reveal that self-induction of a positive affective state biases subsequent affect processing responses to image stimuli, suggesting a potential mechanism by which situational and attentional modification strategies work to reduce negatively biased affect processing states. We also found that individual differences in the intrinsic ability to self-induce affective arousal without guidance informed the attainment of self-induced positive valence in the presence of rtfMRI guidance, further supporting the established role of attentional deployment in explaining BCI performance.
Methods
Ethics statement
All participants provided written informed consent after receiving written and verbal descriptions of the study procedures, risks, and benefits. We performed all study procedures and analysis with approval and oversight of the Institutional Review Board at the University of Arkansas for Medical Sciences (UAMS) in accordance with the Declaration of Helsinki and relevant institutional guidelines and policies.
Participants
We enrolled healthy adult participants (n = 40) having the following demographic characteristics: age [mean(s.d.)]: 38.8(13.3), range 20‒65; sex: 22 (55%) female; race/ethnicity: 28 (70%) self-reporting as White or Caucasian, 9 (22.5%) as Black or African-American, 1 (2.5%) as Asian, and 2 (5%) self-reporting as other; education [mean(s.d.)]: 16.8(2.2) years, range 12‒23; WAIS-IV IQ [mean(s.d.)]: 102.5(15.3), range 73‒129. All of the study’s participants were right-handed (assessed via the Edinburgh Handedness Inventory [19]) native-born United States citizens who were medically healthy and exhibited no current Axis I psychopathology, including mood disorders, as assessed by the SCID-IV clinical interview [4]. All participants reported no current use of psychotropic medications and produced a negative urine screen for drugs of abuse (cocaine, amphetamines, methamphetamines, marijuana, opiates, and benzodiazepines) immediately prior to both the clinical interview and the MRI scan. When indicated, we corrected participants’ vision to 20/20 using an MRI-compatible lens system (MediGoggles™, Oxfordshire, United Kingdom), and we excluded all participants endorsing color blindness.
Experiment design
Following the provision of informed consent, subjects visited the Brain Imaging Research Center (BIRC) of the University of Arkansas for Medical Sciences on two separate days. On Study Day 1 a trained research assistant assessed all subjects for major medical and psychiatric disorders and administered instruments to collect data used either as secondary variables hypothesized to explain individual variance in affect regulation-related neural activity, as covariates of no interest, or to assess inclusion/exclusion criteria. The participant returned to the BIRC for Study Day 2 within 30 days after Study Day 1 to complete the MRI acquisition. During this day, the participant received task training and completed the full MRI acquisition protocol, depicted in Fig 1.
Fig 1
Study Day 2 experimental tasks: Order, number of repetitions, duration, and stimuli.
Tasks are colored by role. Gray depicts task training and application of psychophysiology recording apparatus. Blue depicts structural image acquisition. Orange depicts functional image acquisition. Identification and Modulation blocks of the fMRI acquisition summarize the relevant trial types used within that task (see Neuroimaging section for abbreviations). *Training of real-time multivariate pattern analysis predictive models was performed concurrently with the Resting State task of the fMRI acquisition.
Training
Each participant received a video-based overview of the experiment to be performed on that day as well as training on the study’s task variations and trial types. The participant was offered the opportunity to use the restroom and then was moved to the MRI scanner room and fully outfitted with psychophysiological recording equipment.
Neuroimaging
For each subject we captured a registration scan and detailed T1-weighted structural image. We then acquired functional MRI data for three task variations: identification, resting state, and modulation. Identification (Id) task acquisition consisted of 2 x 9.4 min fMRI scans during which the participant was presented with 120 images drawn from the International Affective Picture System [18] (IAPS) to support one of two trial types (see Fig 2): 90 passive stimulus (PS) trials and 30 cued-recall (CR) trials. Identification task PS trials (abbreviated Id-PS) presented an image for 2 s (cue) succeeded by a fixation cross for a random inter-trial interval (ITI) sampled uniformly from the range 2–6 s. Identification task cued-recall (Id-CR) trials were multi-part: a cue image was presented for 2 s followed by an active cue response step for 2 s (the word “FEEL” overlaying the image) followed by the word FEEL alone for 8 s, which signaled the participant to actively recall and re-experience the affective content of the cue image, followed by a 2–6 s ITI. During pre-scan training on the Id-CR task’s recall condition, subjects were instructed to “Imagine the last picture you saw as best you can. Try to make yourself feel exactly how you felt when you saw this picture the first time. Hold that feeling the whole time you see the word FEEL.” Within each scan, Id-PS and Id-CR trials were pseudo-randomly sequentially ordered to minimize correlations between the hemodynamic response function (HRF)-derived regressors of the tasks. This order was fixed for all subjects.
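Within each scan, the fixed trial order was selected to minimize correlation between the trial types' HRF-derived regressors. A minimal sketch of how candidate orders can be screened for this property is given below. This is illustrative Python, not the software used in the study; the canonical double-gamma HRF parameters and the 32 s kernel length are standard defaults we assume, not values reported here.

```python
import numpy as np
from math import gamma

TR = 2.0  # EPI repetition time (s)

def double_gamma_hrf(t):
    """Canonical double-gamma HRF (standard default parameters;
    an illustrative stand-in for the HRF used in the study)."""
    return (t ** 5 * np.exp(-t) / gamma(6)
            - t ** 15 * np.exp(-t) / (6 * gamma(16)))

def regressor(onsets_s, n_vols):
    """Boxcar of trial onsets (in seconds) convolved with the HRF,
    sampled at the TR."""
    kernel = double_gamma_hrf(np.arange(0.0, 32.0, TR))
    box = np.zeros(n_vols)
    for onset in onsets_s:
        box[int(round(onset / TR))] = 1.0
    return np.convolve(box, kernel)[:n_vols]

def order_correlation(onsets_a, onsets_b, n_vols):
    """Correlation between two trial types' HRF-derived regressors;
    candidate pseudo-random orders can be screened to minimize this."""
    ra = regressor(onsets_a, n_vols)
    rb = regressor(onsets_b, n_vols)
    return float(np.corrcoef(ra, rb)[0, 1])
```

In practice one would generate many candidate pseudo-random orders, score each with `order_correlation`, and fix the best-scoring order for all subjects.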
Fig 2
Summary of experimental task trial designs.
(Id-PS): Identification task passive stimulus trials, which were identical to Modulation task passive stimulus (Mod-PS) trials. (Id-CR): Identification task cued-recall trials. (Mod-FS): Modulation task feedback-triggered stimulus trials. (Bottom): Depiction of a hypothetical Mod-FS trial for the experimental design. The dashed line represents the trigger threshold and bounds the hyperplane distance at which the cue stimulus will be triggered by the real-time valence estimate as a function of time. As depicted, this threshold decreases linearly to zero commencing at 20 s of feedback. This trial type is of fixed length; therefore, the ITI duration is a function of the time required to trigger the stimulus via feedback. If the real-time valence estimate does not surpass the trigger threshold prior to the threshold reaching zero then the stimulus is triggered by default, denoted “Emergency trigger”, followed by the minimum ITI.
During resting state acquisition, we acquired 7.5 min of fMRI data in which the subject performed mind-wandering with eyes open while observing a fixation cross. During training, subjects were instructed to “Keep your eyes open, look at the cross in front of you, and let your brain think whatever it wants to.” Concurrently with the resting state task, the real-time variant of the multivoxel pattern analysis (MVPA) prediction model (see below) was fit using data drawn from the Identification task fMRI data to define individual brain state representations of the affect processing goal.

Modulation (Mod) task acquisition consisted of 2 x 10.5 min fMRI scans during which the participant was presented with 60 IAPS images according to two trial types (see Fig 2): 40 passive stimulus (Mod-PS) trials, which were identically formatted to the Id-PS trials, and 20 feedback-triggered stimulus (Mod-FS) trials.
Mod-FS trials used real-time fMRI feedback of the subject’s decoded affective state to guide them in self-inducing affective brain states associated with their individualized representation of extreme positive valence. The computer system monitored the subject’s decoded valence processing level at each acquisition volume of fMRI data and if that decoding met pre-defined criteria (i.e., the goal state, which we defined as hyperplane distance ≥ 0.8 for 4 consecutive EPI volumes) then a positively (congruent) or negatively (incongruent) valent image stimulus was triggered as the test stimulus. The brain state criteria representing the affect processing goal state were determined by the results of an initial pilot of the experiment to identify acquisition parameters that were challenging but consistently reachable. Within each scan, Mod-PS and Mod-FS trials were pseudo-randomly sequentially ordered to minimize correlations between the hemodynamic response function (HRF)-derived regressors of the tasks. This order was fixed for all subjects.

We provided real-time visual feedback during Mod-FS trials by manipulating the level of transparency of the word FEEL, which was the cue to volitionally regulate affect to an extreme positive valence. The transparency of the text was scaled to reflect real-time estimates of the subject’s represented valence processing with respect to the desired hyperplane distance threshold. This was achieved by mapping MVPA prediction model hyperplane distances (see below) from their base range [-1.25,1.25] to the range of possible transparencies, α ϵ [0,1]. Fully transparent text (α = 0) appeared as a black screen and denoted poor affect regulation performance, i.e., highly negative valence. Fully opaque text (α = 1) appeared bright yellow and denoted good performance. The transparency of the text was reset every 2 s (reflecting the momentary hyperplane distance prediction based upon each EPI volume, TR = 2000 ms).
The transparency was adjusted (approximately 20 frames-per-second) to present smooth transitions toward the brain-affect goal state. The initial hyperplane distance threshold was fixed for 20 seconds. If the subject had not attained the threshold (i.e., triggered the test stimulus) by this time then the threshold was linearly and continuously lowered to 0 over the subsequent 18 s, at which point the stimulus was automatically triggered even if the threshold had not been attained (Fig 2).
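The trigger logic described above (a goal state of hyperplane distance ≥ 0.8 held for 4 consecutive volumes, a 20 s fixed threshold, and an 18 s linear ramp ending in a default "emergency" trigger) can be summarized as follows. This is an illustrative reconstruction; in particular, how the 4-consecutive-volume criterion interacts with the declining threshold is our simplifying assumption.

```python
TR = 2.0          # EPI repetition time (s)
GOAL_DIST = 0.8   # hyperplane distance defining the goal state
CONSECUTIVE = 4   # consecutive volumes required at or above threshold
HOLD_S = 20.0     # threshold held constant for the first 20 s
RAMP_S = 18.0     # then lowered linearly to zero over 18 s

def trigger_threshold(t):
    """Trigger threshold as a function of feedback time t (s)."""
    if t <= HOLD_S:
        return GOAL_DIST
    if t >= HOLD_S + RAMP_S:
        return 0.0
    return GOAL_DIST * (1.0 - (t - HOLD_S) / RAMP_S)

def trigger_volume(distances):
    """Index of the EPI volume that triggers the test stimulus,
    given the per-volume real-time valence hyperplane distances."""
    streak = 0
    for i, h in enumerate(distances):
        t = i * TR
        if h >= trigger_threshold(t):
            streak += 1
        else:
            streak = 0
        if streak >= CONSECUTIVE:
            return i          # goal state attained
        if t >= HOLD_S + RAMP_S:
            return i          # "emergency" default trigger
    return len(distances) - 1
```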
Stimulus selection
We sampled 180 IAPS images to use as affect processing induction stimuli. Identification task stimuli were sampled computationally using a previously published algorithm [20] that selects images such that the subspace of the valence-arousal plane for normative scores within the IAPS dataset is maximally spanned (see Fig 3). This property guarantees the most diverse range of valence and arousal properties for a fixed-sized stimulus set. We performed this full-range sampling process first for the 90 images used in Id-PS trials. The IAPS identifiers of these images were previously reported [21]. We then separately (but similarly) sampled an additional 30 images for use in Id-CR trials. The IAPS identifiers of these images were also previously reported [22]. Next, we constructed extreme polar subsets of positively and negatively valenced image stimuli by constructing thresholds of permissible valence and arousal scores. Valence (v) was constrained such that: v≥7 or v≤2.6. We then iteratively constrained the permissible arousal scores until we identified positively and negatively valent image subsets that did not exhibit a group mean difference in arousal, a, scores (found to be 4.6 < a < 6.8) thereby controlling for arousal response as a stimulus subset variable. We then sampled 30 images each from these subsets and uniformly randomly assigned these images to Mod-PS trials (n = 40) and Mod-FS trials (n = 20), respectively. The outcomes of these sampling and assignment processes are presented in Fig 3. The specific IAPS identities of these images are reported in S1 Table.
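The construction of the polar valence subsets amounts to a threshold filter over the normative scores. A minimal sketch, using the final thresholds reported in the text, is shown below; the iterative tightening of the arousal band until the two subsets showed no group mean arousal difference is omitted, and the final band is applied directly.

```python
import numpy as np

def polar_valence_subsets(valence, arousal,
                          v_pos=7.0, v_neg=2.6, a_lo=4.6, a_hi=6.8):
    """Indices of extreme positive- and negative-valence stimuli whose
    normative arousal falls in a common band (threshold values from
    the text: v >= 7 or v <= 2.6, and 4.6 < a < 6.8)."""
    valence = np.asarray(valence, float)
    arousal = np.asarray(arousal, float)
    in_band = (arousal > a_lo) & (arousal < a_hi)
    pos = np.where((valence >= v_pos) & in_band)[0]
    neg = np.where((valence <= v_neg) & in_band)[0]
    return pos, neg
```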
Fig 3
Normative valence and arousal scores for stimuli selected for each of the four experimental trial types.
Summary statistics for Identification task stimuli are as follows: Id-PS valence [mean (std. dev)] 5.04 (1.95); Id-PS arousal [mean (std. dev)] 4.95 (1.40); Id-CR valence [mean (std. dev)] 5.30 (1.95); Id-CR arousal [mean (std. dev)] 4.99 (1.51). There were no significant differences in affect properties between the Id-PS and Id-CR cue stimuli for either valence (p = .49; signed rank; α = .05; h0: μ1 = μ2) or arousal (p = .86; rank-sum; α = .05; h0: μ1 = μ2). Summary statistics for the Modulation task stimuli are as follows. Mod-PS (pos. valence cluster) valence [mean (std. dev)] 7.41 (.30); Mod-PS (neg. valence cluster) valence [mean (std. dev)] 2.08 (.36); Mod-FS (pos. valence cluster) valence [mean (std. dev)] 7.35 (0.32); Mod-FS (neg. valence cluster) valence [mean (std. dev)] 2.03 (0.41). Between the Mod-PS and Mod-FS stimuli in the positive valence cluster, there were no significant differences in valence (p = .60; rank-sum; α = .05; h0: μ1 = μ2) nor arousal (p = .25; rank-sum; α = .05; h0: μ1 = μ2). There were also no significant group differences in affect properties between the Mod-PS and Mod-FS stimuli in the negative valence cluster, either for valence (p = .74; rank-sum; α = .05; h0: μ1 = μ2) or arousal (p = .54; rank-sum; α = .05; h0: μ1 = μ2).
Data acquisition and processing
MR image acquisition
We acquired all imaging data using a Philips 3T Achieva X-series MRI scanner (Philips Healthcare, Eindhoven, The Netherlands) with a 32-channel head coil. We acquired anatomic images using an MPRAGE sequence (matrix = 256 x 256, 220 sagittal slices, TR/TE = 8.0844/3.7010 ms, FA = 8°, final resolution = 0.94 x 0.94 x 1 mm3). We acquired functional images using the following EPI sequence parameters: TR/TE = 2000/30 ms, FA = 90°, FOV = 240 x 240 mm, matrix = 80 x 80, 37 oblique slices, ascending sequential slice acquisition, slice thickness = 2.5 mm with 0.5 mm gap, final resolution 3.0 x 3.0 x 3.0 mm3.
Real-time MRI preprocessing and multivariate pattern classification
We implemented custom code that acquired each raw fMRI volume as it was written to disk by the MRI’s computer system (post-reconstruction). Each volume underwent a preprocessing sequence using AFNI [23] in the following order: motion correction using rigid body alignment (corrected to the first volume of Identification task Run 1), detrending (re-meaned), spatial smoothing using an 8 mm FWHM Gaussian filter, and segmentation. To construct a multivariate pattern classifier to apply to the real-time data we partitioned the Id-PS stimuli into groups of positive and negative valence (according to the middle Likert normative score) and formed time-series by convolving the hemodynamic response function with the respective stimuli’s onset times (scaling the HRF amplitude according to the absolute difference between the stimuli’s normative scores and the middle Likert score). We then thresholded these time-series to construct class labels {-1,+1} (as well as unlabeled) for each volume of the Identification task scans. We then trained a linear support vector machine [24] (SVM) to classify the valence property of each fMRI volume. Note that during the Modulation task the classification hyperplane output of the SVM was linearly detrended in real-time as follows. A hyperplane distance, h, was computed for each volume, i. For hi, i ≥ 40, the sequence of hyperplane distances h1, …, hi-1 was used to compute a linear trend (via the Matlab detrend function) which was subtracted from the hyperplane distance, hi. In summary, the described system achieved real-time preprocessing and generated affect state predictions for each EPI volume acquired in the Modulation task of the experiment. Total processing time of each volume was less than the TR = 2 s parameter of the EPI sequence, allowing the real-time processing to maintain a consistent (reconstruction speed determined) latency throughout real-time acquisition.
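The running detrend of the hyperplane-distance series can be sketched as below. This is our Python interpretation of the Matlab-based step described above (fitting a line to the history and subtracting its predicted value at the current volume), not the original code.

```python
import numpy as np

def detrend_online(h, start=40):
    """Running linear detrend of a hyperplane-distance series: for
    each volume i >= start, fit a line to h[0:i] and subtract its
    predicted value at i. Volumes before `start` pass through
    unchanged, mirroring the i >= 40 condition in the text."""
    h = np.asarray(h, float)
    out = h.copy()
    for i in range(start, len(h)):
        x = np.arange(i)
        slope, intercept = np.polyfit(x, h[:i], 1)
        out[i] = h[i] - (slope * i + intercept)
    return out
```

For a perfectly linear input series, the detrended values from volume 40 onward are driven to zero, which is the intended behavior: slow scanner drift is removed while momentary deviations from the trend are preserved.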
Post-hoc MRI preprocessing, multivariate pattern classification, and Platt-scaling
We used fmriprep [25] (version 20.0.0) software to conduct anatomical and functional image preprocessing and spatial normalization to the MNI152 atlas (see S1 Methods for detailed documentation of this standardized image preprocessing pipeline). We then used fmriprep’s motion parameter outputs to complete the preprocessing using AFNI, including regression of the mean time courses and temporal derivatives of the white matter (WM) and cerebrospinal fluid (CSF) masks as well as a 24-parameter motion model [26, 27], spatial smoothing (8 mm FWHM), detrending, temporal filtering (0.0078 Hz high-pass), and scaling to percent signal change. For resting state functional images we took the additional step of global mean signal subtraction prior to smoothing.

We then conducted high-accuracy post-hoc multivoxel pattern analysis (MVPA), i.e., neural decoding, of affect processing. We first extracted beta-series [28] neural activation maps associated with Id-PS trials from fully preprocessed fMRI data recorded during Identification task runs 1 and 2 according to well-documented methods [20]. We indexed these maps according to their corresponding stimulus, x. Therefore, the maps, β(x), were paired with their respective normative scores {β(x), v(x), a(x)} to form training data for multivoxel pattern classification implemented via linear SVM. For classification training, valence and arousal scores were each converted into positive (+1) or negative (-1) class labels according to their relation to the middle Likert score. Classification hyperplane distances were then converted to probabilities (i.e., the probability of the positive class label) via Platt-scaling [29]. These probabilities served as the affective decodings of the subjects’ brain states for further analysis.
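Platt-scaling fits a sigmoid mapping from SVM decision values to class probabilities. The sketch below conveys the idea with a simplified gradient-descent fit; Platt's original procedure uses a Newton-style optimizer with regularized target values, so this is an illustration rather than the implementation used in the study.

```python
import numpy as np

def platt_probabilities(distances, labels, iters=2000, lr=0.1):
    """Fit p(y = +1 | f) = 1 / (1 + exp(A*f + B)) to SVM decision
    values f by gradient descent on the negative log-likelihood,
    then return the fitted probabilities for the same inputs."""
    f = np.asarray(distances, float)
    t = (np.asarray(labels, float) + 1.0) / 2.0  # {-1,+1} -> {0,1}
    A, B = -1.0, 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(A * f + B))
        A -= lr * np.sum((t - p) * f)  # d(NLL)/dA
        B -= lr * np.sum(t - p)        # d(NLL)/dB
    return 1.0 / (1.0 + np.exp(A * f + B))
```

The fitted probabilities increase monotonically with hyperplane distance, which is what allows them to serve directly as affective decodings of brain states.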
Affect processing state encodings
In order to visualize affect processing brain states in neuroanatomical space, we performed a previously reported encoding transformation of our decoding models [21]. In short, we applied the Haufe-transform [30] to each subject’s classification hyperplane and formed a map of group-level mean encoding values for each gray matter voxel. Separately, we generated 1,000 mean encoding permutations by applying the Haufe-transform to the classification hyperplanes fit to each subject’s true beta-series and randomly permuted sets of the true affective labels. Those voxels exhibiting extreme group-level mean encoding values in comparison to the observed group-level mean permutation encoding values (2-sided test, p<0.05) were kept for visualization of the brain state. We performed this encoding process separately for each dimension of affect processing (valence and arousal).
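The core of the Haufe-transform for a linear decoder is the multiplication of the feature covariance by the classifier weight vector. A minimal sketch (our simplified rendering of the transform, omitting the permutation-testing step described above):

```python
import numpy as np

def haufe_transform(X, w):
    """Map a linear decoder's weight vector w to an encoding
    (activation) pattern a ∝ cov(X) @ w, where X is the
    (n_samples, n_voxels) beta-series feature matrix
    (Haufe et al.'s forward-model transformation)."""
    Xc = X - X.mean(axis=0)
    cov = (Xc.T @ Xc) / (X.shape[0] - 1)
    return cov @ w
```

Unlike the raw decoder weights, the resulting pattern is interpretable voxel-wise, which is why the group-level mean encodings (rather than hyperplanes) are visualized in neuroanatomical space.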
Cued-recall, passive stimulus, and feedback-triggered stimulus modeling
We also extracted beta-series for the cue and recall steps of the Id-CR trials, the cue step of the Mod-PS trials, and the cue step of the Mod-FS trials. We then used our fit SVM models to decode the valence and arousal properties of the subjects’ brain states at these experiment steps. For the Mod-PS trials, we also constructed beta-series for the moment of trial onset as well as 2 s prior to the cue step of the Mod-FS trials–these allowed us to validate the triggers for affective stimulus test presentations as well as to measure (post-hoc) the relative change of affect processing achieved by feedback-facilitated self-induction of positive valence processing.
Surrogate cued-recall task modeling
Using previously reported methodology [31], we decoded the valence and arousal properties of each volume of Resting State fMRI data. We then uniformly randomly sampled 30 onset times for surrogate Id-CR trials and extracted the affect properties of the respective cue and recall steps of these surrogate trials to be used as within-subject controls during analysis of the actual Id-CR trials.
Psychophysiology data acquisition and preprocessing
All MRI acquisitions included concurrent psychophysiological recordings conducted using the BIOPAC MP150 Data Acquisition System and AcqKnowledge software combined with the EDA100C-MRI module (skin conductance), TSD200-MRI pulse plethysmogram (heart rate), TSD221-MRI belt (respiration), and EMG100C-MRI module (facial electromyography). In line with prior work [32, 33], we measured arousal independently based on skin conductance response (SCR) and valence based on facial electromyography (fEMG) response, specifically activity in the corrugator supercilii muscle (cEMG), which was shown in prior work to capture the full affective valence range of our affect processing induction design [22]. This work did not model the heart rate or respiration data. We have extensively reported on our SCR electrode placement and preprocessing methods [21], and we recently reported our cEMG placement and preprocessing methods [22].
Results
Psychophysiological response validation of affect processing induction via image stimuli
We first verified the ability of the Identification task passive stimulus (Id-PS) trials to induce corollary psychophysiological responses [34] associated with affect processing in order to validate the inputs used to train our neural decoding models. We modeled the normative scores of the cue stimuli of Id-PS trials using psychophysiological response measures within a General Linear Mixed-Effects Model (GLMM) framework, separately for the valence and arousal properties. Normative hedonic valence scores of the stimuli were modeled according to facial electromyographic responses in the corrugator supercilii as the fixed effects. Normative autonomic arousal scores to the cue stimuli were modeled according to skin conductance responses as the fixed effects. In both models, we controlled for age and sex effects. Slope and intercept random effects were modeled subject-wise. Both validation models detected significant stimulus-related induction of the anticipated physiological responses. Moreover, our cEMG-derived model of hedonic valence (β = .11; p = .001; t-test; α = .05; h0: β = 0) was selective for the valence property of affect; a cEMG-derived model of autonomic arousal was not significant (p = .75; t-test; α = .05; h0: β = 0). Similarly, our SCR-derived model was selective for the autonomic arousal property of affect (β = .07; p = .004; t-test; α = .05; h0: β = 0); applied to hedonic valence, the SCR associations were not significant (β = .02; p = .61; t-test; α = .05; h0: β = 0). These results are consistent with the prior association of cEMG and SCR with the processing of the specific affect properties of valence and arousal, respectively, and support the induction of affect processing during the Id-PS trials.
Affect processing measurement
We next demonstrated that our prediction models accurately decoded affect processing within neural activation patterns associated with Id-PS trials, reproducing the results of earlier work using similar modeling methodology [20]. Our tabulated prediction accuracy (averaged over 39 subjects completing the experiment) over the full stimulus set (see Table 1) was highly significant for both valence (p < .001; signed rank; α = .05; h0: μ = .5) and arousal (p < .001; signed rank; α = .05; h0: μ = .5). We also observed prediction performance comparable to the best known demonstrations of neural decoding of affect processing across the valence and arousal dimensions [20, 35] when our measurements were restricted to those image stimuli exhibiting reliable brain state activations, i.e., the reliable stimulus set (see Table 1), which were determined according to previously published methods [20] that detect the degree to which brain states induced by these stimuli cluster between subjects (see S1 Methods). Indeed, using the reliable stimulus set to measure performance, we found that 34 of 39 subjects (87.2%) exhibited significant within-subject classification of affective valence and arousal stimuli, respectively (α = .05; binomial distribution, h0: p[+] = .5). These results support the validity of our neural decoding models as brain representations of affective valence and arousal.
Table 1
Multivariate neural decoding performance.
                         Valence                     Arousal
                         Grp. Avg. Acc. (95% CI)     Grp. Avg. Acc. (95% CI)
Full Stimulus Set        .55 (.53, .57)              .61 (.59, .63)
Reliable Stimulus Set    .79 (.76, .82)              .75 (.72, .79)
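The within-subject significance criterion used above (α = .05; binomial distribution; h0: p[+] = .5) can be sketched as an exact one-sided binomial test. The trial counts below are hypothetical examples, not the study's actual stimulus counts:

```python
from math import comb

def binom_p_above_chance(n_correct, n_trials, p0=0.5):
    """Exact one-sided binomial p-value: probability of classifying
    n_correct or more of n_trials stimuli when chance accuracy is p0."""
    return sum(comb(n_trials, k) * p0**k * (1 - p0)**(n_trials - k)
               for k in range(n_correct, n_trials + 1))

# Hypothetical subjects: 60/90 correct is well above chance; 47/90 is not
p_hi = binom_p_above_chance(60, 90)
p_lo = binom_p_above_chance(47, 90)
```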
Validation of affect decoding using novel stimuli
Prior to applying our decoding models to novel task domains, we first tested whether these models (originally fit to Id-PS features and labels) generalized to novel image stimuli. To perform this independent test we modeled, via GLMM, the normative affect scores of cue stimuli in Id-CR and Mod-PS trials. However, each test was unique.

First, we modeled the Id-CR task cue stimuli’s normative scores as a function of decoded affect (separately for valence and arousal), controlling for the age and sex of the subjects and modeling random effects of affect decoding subject-wise. In Id-CR trials we found that neurally decoded valence was significantly positively associated with the valence normative score (β = .30; p < .001; t-test; α = .05; h0: β = 0). Similarly, we found for Id-CR trials that neurally decoded arousal was significantly associated with the arousal normative score (β = .17; p = .001; t-test; α = .05; h0: β = 0). Age and sex effects in both cases were not significant and random effects did not significantly improve the model’s explained variance, which was very small for both valence (R2adj = .02) and arousal (R2adj = .01), respectively.

Next, we modeled the Mod-PS task stimuli’s normative scores as a function of decoded affect (separately for valence and arousal normative scores). However, in this case we controlled for age and sex effects as well as the decoding of the complementary affective property in order to control for the bias of the sampling of the stimuli in this task (see Fig 3). In Mod-PS trials we found that decoded valence was significantly positively associated with the stimuli’s normative valence scores (β = .62; p < .001; t-test; α = .05; h0: β = 0). However, decoded arousal was significantly negatively associated with normative valence scores (β = -.22; p = .016; t-test; α = .05; h0: β = 0). Age and sex effects were not significant but random effects did significantly improve the model’s explained variance (R2adj = .045).
In contrast, we found no significant association between decoded arousal and the stimuli’s normative arousal scores, consistent with our deliberate restriction of the Mod-PS and Mod-FS stimulus sampling to a narrow range of normative arousal (see Fig 3), which served as a control for this confounding variable.
Validating the rigor and reproducibility of affective brain states
In a final validation step, we sought to provide additional qualitative and quantitative evidence for the rigor and reproducibility of the affective brain states that we experimentally manipulated in this study. We computed the group-level encodings of both the arousal and valence brain states that survive permutation testing, which we present in Fig 4. Encodings of affect processing largely overlap with earlier multivariate [21] and univariate meta-analyses [36, 37] of the neural encoding of core affect processing. We took the additional step of directly comparing these encodings to affect processing encodings that were computed for past studies that incorporated similar affect induction stimuli and used similar fMRI analysis pipelines but that were derived from separate sets of research subjects (see S1 Methods). Notably, these past studies found that affect processing predictions using the machine learning models underlying these encodings were significantly more correlated to the normative scores of the induction stimuli than predictive measures derived from psychophysiological responses across the independent dimensions of affective valence (measured via heart-rate deceleration [38]) and arousal (measured via skin conductance response [21]). Indeed, we found that the neural encodings computed for this study shared 36.5% of the variance across prior whole-brain gray-matter voxel-wise encodings of valence as well as 31.1% of the variance across prior whole-brain voxel-wise encodings of arousal (see S1 Fig). Of note, the variance shared between these encodings rose to 87.0% and 85.6%, respectively for valence and arousal, when we restricted the comparison to only those voxels that survived global permutation testing (i.e., the voxels presented in Fig 4).
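The inter-study shared-variance comparison reduces to the squared Pearson correlation computed across joint gray-matter voxels. A minimal sketch on toy encoding maps (the values below are illustrative only):

```python
import numpy as np

def shared_variance(encoding_a, encoding_b):
    """Fraction of variance shared between two voxel-wise encoding maps,
    i.e., the squared Pearson correlation across joint voxels."""
    r = np.corrcoef(encoding_a, encoding_b)[0, 1]
    return r ** 2

# Toy maps: a perfectly linear relationship shares 100% of its variance
a = np.array([0.2, -0.1, 0.5, 0.3])
b = 2.0 * a + 1.0
```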
Fig 4
Group-level encodings of affective state processing.
Color gradations indicate the group-level t-scores of the encoding parameters (red indicating positive valence or high arousal, blue indicating negative valence or low arousal). T-scores are presented only for those voxels in which encoding parameters survived global permutation testing (p < .05). Image slices are presented in MNI coordinate space and neurological convention. Maximum voxel intensity is |t| = 6.0, i.e., color saturates for t-scores with absolute values falling above this value.
Real-time stimulus triggering
We next validated that our real-time feedback and brain-affect state triggering process functioned as designed. To test this we extracted the feedback signal calculated at the moment of stimulus trigger (including emergency triggering). The median feedback at the moment of trigger was μ = .93 (p < .001; signed rank; α = .05; h0: μ = 0). Nearly three-quarters (see Fig 5) of all trials triggered at or above the design threshold.
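The triggering rule, including the emergency trigger, can be summarized by a simple predicate. The threshold and timeout values below are placeholders, not the study's actual design parameters:

```python
def should_trigger(feedback, elapsed_s, threshold=0.5, max_wait_s=30.0):
    """Trigger the image stimulus once the decoded-valence feedback meets
    the design threshold, or force an emergency trigger after max_wait_s.
    threshold and max_wait_s are hypothetical placeholder values."""
    return feedback >= threshold or elapsed_s >= max_wait_s
```

For example, a feedback value of .93 (the observed median at trigger time) would fire immediately, while a persistently sub-threshold state would fall back to the emergency trigger.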
Fig 5
Distribution of average feedback scores at the moment of FT-PO trial stimulus trigger.
Real-time fMRI-guided self-induction of positive valence states
We next demonstrated that our primary experimental manipulation, volitionally-induced positive valence, was truly achieved at the moment of stimulus triggering. As a reminder, the Mod-FS trials were triggered using lower-quality real-time affect decoding models. Here we applied post-hoc high-accuracy models to decode affect processing within the fMRI volume immediately prior to the stimulus trigger as the best available measure of the experimental condition. However, a confounding factor of this measure is within-subject valence decoding accuracy, which we found to be significantly positively associated with the magnitude of decoded valence at the moment of real-time stimulus triggering (see S2 Fig) and which, therefore, could act as a confound of the experimental manipulation. To address this, we bootstrapped random variants of the trigger predictions (randomly sampling within each subject before pooling predictions to incorporate random effects) for only those subjects exhibiting within-subject significant decodings of valence processing. From these neural decodings, we found that the mean predicted valence was significantly elevated (μ = .522; p < .02; 1-sided bootstrap [n = 10000]; h0: μ < .5) at the time of triggering of the test stimuli. Independently, we confirmed that volitionally-induced positive valence states corresponded with significant changes to independent psychophysiological response measures across all subjects, including those that did not exhibit within-subject significant decodings of valence processing. In concordance with our observations of psychophysiological responses induced by extrinsic image stimuli, self-induction of positive valence induced a weak but significant positive cEMG response (β = .003; p < .01; t-test; α = .05; h0: β = 0) as well as a significant reduction in SCR (β = -.018; p < .001; t-test; α = .05; h0: β = 0).
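The one-sided bootstrap described above can be sketched as follows. The subject-wise prediction arrays are toy values and the function name is our own:

```python
import numpy as np

def bootstrap_valence_test(preds_by_subject, null_mu=0.5, n_boot=10000,
                           seed=0):
    """One-sided bootstrap of mean decoded valence at trigger time:
    resample within each subject, pool across subjects (incorporating
    random effects), and count bootstrap means at or below the null
    (h0: mu < null_mu)."""
    rng = np.random.default_rng(seed)
    boot_means = np.empty(n_boot)
    for b in range(n_boot):
        pooled = np.concatenate([rng.choice(p, size=p.size, replace=True)
                                 for p in preds_by_subject])
        boot_means[b] = pooled.mean()
    return boot_means.mean(), float(np.mean(boot_means <= null_mu))

# Toy data: two subjects whose trigger-time predictions all exceed 0.5
preds = [np.array([0.60, 0.70, 0.55]), np.array([0.80, 0.65])]
mu_hat, p_val = bootstrap_valence_test(preds, n_boot=2000)
```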
Effect of positive valence self-induction on affect processing of subsequent stimuli
We next tested the study’s primary hypothesis: that self-induced states of positive valence bias the affect processing of subsequent image stimuli. Using a GLMM, we tested decoded valence processing of these stimuli as a function of trial type, Mod-FS (i.e., self-induced) or Mod-PS (passive), while controlling for the image stimuli’s associated normative valence and decoded arousal properties, the subject’s age and sex, as well as within-subject valence decoding accuracy. To control for potential confounding effects of the slow temporal evolution of the HRF, we also included the decoded valence of the volume immediately preceding the image stimulus (i.e., the trigger fMRI volume in Mod-FS trials or the previous fMRI volume in Mod-PS trials) as a fixed effect. Finally, we included two-way interactions between the trial type and both the normative valence score of the image stimulus and the decoded valence of the preceding fMRI volume. We modeled random intercept effects subject-wise.

We found that the volitional self-induction of positive valence prior to an affective stimulus significantly positively biased the induced valence processing of the subsequent image stimulus (β = .033; p = .017; t-test; α = .05; h0: β = 0) compared with passive viewing. As expected, the normative valence score of the stimulus was significantly positively associated with valence processing (β = .031; p = .027; t-test; α = .05; h0: β = 0), as was the decoded valence of the previous volume (β = .803; p < .001; t-test; α = .05; h0: β = 0). For clarity, the magnitudes of these effects are depicted graphically in Fig 6. Both sex (β = -.028; p = .039; t-test; α = .05; h0: β = 0) and age (β = -.043; p = .002; t-test; α = .05; h0: β = 0) were significantly negatively associated with valence processing of the subsequent image stimulus.
Finally, the stimuli’s normative arousal scores were not a significant predictor of decoded valence processing (β = -.028; p = .051; t-test; α = .05; h0: β = 0), nor was within-subject valence decoding model accuracy (β = .020; p = .15; t-test; α = .05; h0: β = 0). We did observe a significant interaction between self-induction trials and the decoded valence of the preceding fMRI volume (β = .117; p < .001; t-test; α = .05; h0: β = 0); however, the interaction between trial type and the normative valence score of the image stimulus was not significant (β = -.019; p = .197; t-test; α = .05; h0: β = 0). Overall model performance was R2adj = .682 and random effects did not significantly improve the model’s explained variance (p > .05; likelihood ratio test; h0: observed responses generated by fixed-effects only).
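A simplified sketch of the fixed-effects portion of this design follows. The actual analysis was a GLMM with subject-wise random intercepts and additional covariates (decoded arousal, age, sex, decoding accuracy); here we reduce it to ordinary least squares over a subset of the predictors, and the coefficient values in `true` merely echo some of the reported effect sizes for illustration:

```python
import numpy as np

def fit_fixed_effects(trial_type, norm_valence, prev_valence, y):
    """Least-squares fit of a reduced fixed-effects design:
    decoded valence ~ trial type + normative valence + preceding-volume
    valence + the two trial-type interactions."""
    X = np.column_stack([
        np.ones_like(y),                 # intercept
        trial_type,                      # Mod-FS (1) vs Mod-PS (0)
        norm_valence,                    # normative valence score
        prev_valence,                    # decoded valence, prior volume
        trial_type * norm_valence,       # interaction terms
        trial_type * prev_valence,
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Recover known coefficients from noiseless synthetic trials
rng = np.random.default_rng(1)
tt = rng.integers(0, 2, 200).astype(float)
nv = rng.standard_normal(200)
pv = rng.standard_normal(200)
true = np.array([0.1, 0.033, 0.031, 0.803, -0.019, 0.117])
y = true @ np.array([np.ones(200), tt, nv, pv, tt * nv, tt * pv])
beta = fit_fixed_effects(tt, nv, pv, y)
```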
Fig 6
Effects of volitional self-induction of positive valence on affect processing bias of subsequent image stimuli.
The figure graphically depicts the effect sizes estimated for the primary experimental manipulation, i.e. the self-induction trial type (feedback-triggered stimulus, Mod-FS, versus passive stimulus, Mod-PS), denoted self-induction, on the decoded valence processing of the subsequent image stimulus while controlling for the effects of the normative valence score of the stimulus, the decoded valence processing of the previous fMRI volume, normative arousal score of the image stimulus as well as subjects’ age and sex and the two-way interactions between self-induction trial type and the normative valence score of the stimulus as well as the decoded valence of the previous fMRI volume. Statistically significant effects are colored blue (positive effects) or red (negative effects). Non-significant effects are colored gray. *The effect size of the decoded valence processing (β = .802) of the previous fMRI volume was omitted from the figure to elevate the contrast between the smaller effect sizes.
As an independent exploration of our experimental manipulation, we repeated the study’s primary hypothesis test using psychophysiological response measures of affect processing, respectively SCR and cEMG, as the measures of interest in GLMM models while controlling for similar fixed, interaction, and random effects as were used in the neuroimaging analysis. Using these models, we found that volitionally-induced positive valence did not significantly bias cEMG responses to image-based affect induction. However, SCR response measures to subsequent image stimuli were positively associated with both the primary experimental manipulation (β = .141; p < .001; t-test; α = .05; h0: β = 0) and the normative valence score of the subsequent image stimulus (β = .042; p = .046; t-test; α = .05; h0: β = 0). No other effects were significant. Overall, the model’s explained variance was R2adj = .019 and random effects did not significantly impact the model’s performance.
Measurement of unguided explicit affect regulation
We next sought to confirm affect self-induction via unguided explicit (i.e., effortful) affect regulation within the Id-CR trials. We first decoded the valence and arousal responses from acquired fMRI data for both the cue and recall steps of the Id-CR trials. We then tested for group effects of explicit affect regulation toward a known goal by modeling via GLMM, separately for valence and arousal, the neurally decoded affect processing of the four recall steps of the Id-CR trials (4 volumes, 2 seconds each) as a function of the neurally decoded affect processing associated with the cue stimuli (i.e., the affect regulation goal) as well as the control duration and the age and sex of the subject (see Fig 7). We found that the subjects significantly regulated brain representations of valence processing (β = .33; p < .001; t-test; α = .05; h0: β = 0). Random effects significantly improved the model’s effect-size (p < .05; likelihood ratio test; h0: observed responses generated by fixed-effects only) and cued-recall affect regulation effects were significantly greater than the surrogate (control) effects (p = .001; signed rank; α = .05; h0: βIN-βRST = 0). The fixed-effect of control duration was also significant (β = .01; p < .001; t-test; α = .05; h0: β = 0) and the overall model prediction performance was good (R2adj = .10). Further, we found that subjects significantly regulated the neural correlates of arousal responses (β = .33) and that random effects significantly improved effect-size (p < .05; likelihood ratio test; h0: observed responses generated by fixed-effects only); however, these cued-recall affect regulation effects were not significantly greater than the surrogate effects (p = .10; signed rank; α = .05; h0: βIN-βRST = 0).
Fig 7
Estimation and validation of explicit intrinsic affect regulation effects within the cued-recall task.
The figure depicts the effect size of cue affect processing in explaining affect processing occurring during recall (controlling for time lag in the 4 repeated measures of recall per each measure of cue). Here affect processing measurements are Platt-scaled hyperplane distance predictions, Pr(∙), of our fitted support vector machine models. Valence and arousal dimensions of affect are predicted by separate models. The figure’s scatterplots depict the group-level effects computed using linear mixed-effects models which model random effects subject-wise. Bold red lines depict group-level fixed-effects of the cue affect. Bold gray lines depict significant subject-level effects whereas light gray lines depict subject-level effects that were not significant. The figure’s boxplots depict the group-level difference between each subject’s affect regulation measured during the cued-recall trials in comparison to surrogate affect regulation constructed from the resting state task. The bold red line depicts the group median difference in effect size between task and surrogate. The red box depicts the 25-75th percentiles of effect size difference.
Estimation and validation of explicit intrinsic affect regulation effects within the cued-recall task.
The figure depicts the effect size of cue affect processing in explaining affect processing occurring during recall (controlling for time lag in the 4 repeated measures of recall per each measure of cue). Here affect processing measurements are Platt-scaled hyperplane distance predictions, Pr(∙), of our fitted support vector machine models. Valence and arousal dimensions of affect are predicted by separate models. The figure’s scatterplots depict the group-level effects computed using linear mixed-effects models which model random effects subject-wise. Bold red lines depict group-level fixed-effects of the cue affect. Bold gray lines depict significant subject-level effects whereas light gray lines depict subject-level effects that were not significant. The figure’s boxplots depict the group-level difference between each subject’s affect regulation measured during the cued-recall trials in comparison to surrogate affect regulation constructed from the resting state task. The bold red line depicts the group median difference in effect size between task and surrogate. The red box depicts the 25-75th percentiles of effect size difference.
Unguided explicit affect regulation performance as a predictor of rtfMRI-guided self-induction
Finally, we tested whether unguided explicit affect regulation performance explained the level of rtfMRI-guided self-induced valence responses (measured immediately prior to presentation of the Mod-FS cue image). We modeled the neurally decoded valence of the final volume of the self-induce step of Mod-FS trials (see Fig 2) as a function of the individual subjects’ explicit affect regulation performance parameters (slope and intercept, respectively, for the valence and arousal properties of affect processing; see Fig 7), controlling for the subjects’ age, sex, and valence decoding accuracy. We included all 2-way interactions between the slope and intercept fixed effects in this model to control for potential trade-offs that the subjects may be making during explicit regulation, e.g., focusing on only one affective property. We also included 2-way interactions of valence slope and intercept with age, sex, and valence decoding accuracy fixed effects. We found that the arousal regulation slope, i.e., the ability of the subject to accurately match the relative affective arousal of the goal, was significantly associated with rtfMRI-guided self-induced valence responses (β = .850; p = .004; t-test; α = .05; h0: β = 0). However, the total explained variance by this model was very low (R2adj = .002).
Discussion
This work made two novel contributions to our current and future understanding of the mechanisms of emotion processing and regulation. First, we found significant support for the utility of self-induced positively valent affect processing as a mechanism for positively biasing the subsequent valence processing of environmental stimuli. This finding mechanistically supports the common notion of “positive thinking” and provides insight into how and why attentional re-deployment strategies, e.g. positive distraction, may benefit those suffering from deficits of emotion regulation and dispositional negatively biased affect. Second, we demonstrated a novel application of real-time brain state decoding in which we guided subjects’ explicit emotion regulation toward a pre-defined affective goal state (positive valence) and then triggered experimental stimuli when the subjects’ affective states fell within designed criteria representing that goal state. This new technology, while still in its infancy, may provide scientists with a much needed tool for exploration of intrinsic emotion processing mechanisms and their relationships with other cognitive processes and environmental factors.

The validity of our findings, as well as the efficacy of the proposed real-time affect processing decoding technology, are supported by independently measured psychophysiological responses at each stage of the experimental manipulation. Significant psychophysiological response correlates of affect processing (measured as SCR and cEMG) were observed during image-based induction of affect processing brain states (on which the neural decoding models were trained) as well as during volitional self-induction of positive valence processing. Moreover, skin conductance responses to image-based affect processing induction differed significantly between the conditions of our primary experimental manipulation: feedback-triggered (Mod-FS) versus passively triggered (Mod-PS) image stimuli.
As these effects were computed from independent processes operating on unique time-scales from those of the HRF, these findings suggest that our analyses are robust to the time-scales by which canonical response functions evolve and confer support for the primary neural decoding effects that we report in Fig 6.

A secondary goal of this work was to explain individual differences observed in real-time fMRI-guided explicit emotion regulation toward a defined goal. Explicit affect regulation can be achieved volitionally, without the use of neurofeedback technology. Therefore, our use of real-time fMRI-based affective decodings to guide (or focus) this innate process enabled us to test (using unguided explicit affect regulation ability as a baseline) the association between innate affect regulation performance and the performance achievable using our real-time fMRI feedback approach. We observed a small but significant relationship between the ability to match one’s arousal to a pre-defined target level and the ability to self-induce positive valence via rtfMRI guidance. These findings suggest that subjects with greater control over their state of arousal exhibit improved ability to incorporate real-time feedback. Given the well-established link between arousal and attention [39, 40], these findings may in turn reflect improved deployment of attention, either self-directed or with respect to the feedback signal, in subjects exhibiting superior rtfMRI-guided self-induced valence, which agrees with earlier work in identifying psychological predictors of BCI performance [16, 41].

Our application of neural decodings (derived from normative affective scores of IAPS image stimuli) as markers of affect processing has well-known limitations, which we have noted in earlier reports [20, 21, 38].
Indeed, our validation process detected a significant negative effect of decoded arousal associated with decoded valence, suggesting that our cohort of subjects perceived the affective content of Mod-PS image stimuli differently than that which was captured by the IAPS normative scores. However, the nature of our investigation (real-time moment-to-moment affect processing, regulation, and stimulus-triggering) did not, unfortunately, permit the use of subject self-report measures of affect, thereby precluding a full concordance of our findings across cognitive, physiological, and behavioral domains. We also acknowledge technical limitations in our real-time fMRI approach. Despite significant findings of an overall effect, we believe that our implementation was suboptimal, owing both to response-measurement latency and to possibly insufficient optimization of parameters within our real-time pipeline. A limitation of real-time approaches is that parametric choices in the processing pipeline (e.g., trigger threshold) interact with experimental outcomes; therefore, it is difficult to use batch-wise optimization to inform the design criteria a priori. Moreover, our small study sample did not permit sufficient piloting of parameters prior to selecting the processing design and testing. Further, our analysis included all rtfMRI-guided self-induction trials, even those that required emergency triggering due to a failure to meet the design criteria of the goal state. This was intentional in order to put forth the most conservative, and therefore reproducible, estimate of the valence self-induction effect sizes possible using this new technological approach. Therefore, we believe the performance of the system, and its effect sizes, are understated, which suggests the potential to further refine this technology for larger-scale deployment of brain-state driven experiment designs to test interactions between internal cognitions and external stimuli.
Conclusion
We combined established neural decoding methods with real-time fMRI to construct a dynamic experimental design in which the brain representation of a subject’s self-induced positive affect state triggered the randomized presentation of affectively congruent or incongruent image stimuli. We first validated the experiment’s ability to induce affect processing with independent measures of psychophysiology as well as the decoding models’ ability to predict affect processing in novel task domains. We then demonstrated that self-induced positive affective states positively bias the affect processing of subsequent image stimuli and thereby furnish a mechanism by which positive thinking influences how we perceive our environment.
Supplemental materials and methods.
(DOCX)
Comparing similarity of the derived neural encodings of affective image stimuli across multiple studies.
(Top Row): Inter-study comparison that includes the encoding values of all joint GM voxels, shown for (A) affective valence processing and (B) affective arousal processing. (Bottom Row): Inter-study comparison that includes encoding values for only those joint GM voxels that survive global permutation significance testing (p < .05), shown for (C) affective valence processing and (D) affective arousal processing. Voxel-wise relationships are depicted as gray circles. The regression fit of the voxel-wise relationships is represented by the bold red line in each subplot. Total surviving joint voxels for each comparison are provided in the top left of each subplot. Inter-study shared variance is provided in the bottom right of each subplot. P-values refer to the significance of the regression fit’s linear coefficient (t-test, α = 0.05). (TIF)
Effect of post-hoc decoding model accuracy on self-induction.
(Left) The magnitude of real-time self-induced positive affect processing (Mod-FS trials), fit according to the method of iteratively reweighted least squares, versus post-hoc valence decoding model accuracy. The measure of interest is the Platt-scaled decoded valence observed at the moment of stimulus triggering. The fixed effect is the valence decoding model accuracy (measured according to the Full Stimulus Set). Post-hoc decoding accuracy, which potentially reflects real-time decoding and, therefore, self-induction performance, was found to have a small (R2 = 0.009) but significant positive effect on the decoded valence at the moment of real-time stimulus triggering (β = .23; p = 0.014; t-test; α = .05; h0: β = 0). (Right) The effect of post-hoc decoding model accuracy on the magnitude of random affect processing occurring in the fMRI volume acquired immediately prior to passive image stimulation (Mod-PS trials). No significant effects were observed (β = .09; p = 0.099; t-test; α = .05; h0: β = 0). (TIF)
International Affective Picture Set Image Identification Numbers.
IAPS identification numbers listed separately for Mod-PS and Mod-FS trial types. (DOCX)
Figure datasets.
Raw data files accompanying each figure. (ZIP)
Regression models.
Matlab formatted regression model files. (ZIP)

2 Jul 2021

Submitted filename: Response_to_Editorial_Review.docx

23 Nov 2021
PONE-D-21-19149
A causal test of affect processing bias in response to affect regulation

PLOS ONE

Dear Dr. Bush,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. The critiques from the 1st reviewer appear to be highly addressable, whereas the issues raised in the 2nd critique are potentially more serious red flags.

Please submit your revised manuscript by Jan 07 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future.
For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Desmond J. Oathes
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that Figure 2 in your submission contains copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright. We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

a.
You may seek permission from the original copyright holder of Figure 2 to publish the content specifically under the CC BY 4.0 license. We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission. In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

b. If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license, or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

Reviewers' comments:

Reviewer's Responses to Questions
Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Yes

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: No

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is a technically impressive investigation of the potential for fMRI to decode affective states from pictorial images and to utilize this information to track and trigger subsequent presentation of affective stimuli. I have several major comments and then some additional areas for clarification:

1). One conceptual issue I note as problematic in the presentation and framing of these results is that the design and execution of this task somehow replicates or captures key ingredients of CBT. With some notable exceptions (e.g. reminiscence therapy), CBT does not require or engage the participant in deliberate up-regulation of positive emotion via internally-generated strategies, but it instead focuses on realistic appraisal of thoughts and situations with the typical focus on removing or altering distorted interpretations that tend to support and generate negative affect. This is the typical form of cognitive reappraisal as practiced in numerous CBT treatments, and it is not about “positive thinking” but “realistic thinking.” The approach utilized here is qualitatively different, and I think this needs to be better explained in the Introduction and Discussion.

2). The use of the word “causal” in the title and throughout the manuscript is too provocative and controversial a term and does not accurately reflect the approach utilized in this study. There are numerous arguments about how causality can be inferred and what constitutes causal inference, but I believe the authors are going too far with the use of this term given the experimental design utilized here. This term should be removed in favor of more accurate descriptions such as “experimental manipulation.”

3). Regarding the claim that self-induced positive affective states positively bias the affect processing of subsequent image stimuli, I think there should be more consideration and nuance with this interpretation.
First, the coarse time resolution of the fMRI protocol utilized here (and the slow hemodynamic response function more generally) doesn’t allow for a clean separation of carryover effects from the positive affect up-regulation period to the processing of the triggered stimulus cue, and it’s not clear how these two periods interact in the resultant brain activation data. For example, it’s possible that participants may have continued to engage in the positive affect up-regulation strategy throughout the presentation of the subsequent stimulus cue (and thus devoted minimal attention to actually processing the cue), but unfortunately there is no way to discern the degree to which the subsequent cue was processed and the prior affective up-regulation strategy ceased. Given that EMG and SCR data were presumably collected during this run as well, I wonder if the authors can also test to see if EMG and SCR data support the idea that affective up-regulation has a distinct psychophysiological readout and whether EMG and SCR data following Mod-CR trials vs. Mod-PS trials show differences as a function of affective up-regulation?

Areas for clarification:

- During the Identification Task, were all IAPS images positive in valence? What were the criteria utilized to select these images?

- It’s also unclear what portions of this task were utilized to train the MVPA decoder. Was it the brain activity during passive viewing of the images, or that during the active recall period, or both? Moreover, how did the authors verify that these images actually induced a state of positive emotion in the participants?

- The scaling of the HRF function in training the MVPA decoder by the valence score rating difference from the mean Likert rating assumes, to some extent, that each individual perceived the image to be as positive or negative as the normed rating provided in the IAPS images.
This seems to be a strong assumption given that there is probably a great deal of individual variability in subjective responses to these images. Why was a uniform weighting of the HRF not employed? How much variability was there in the weighting of the HRF within a particular class (e.g., positive and negative)?

- The average prediction accuracy for valence for the full stimulus set, though statistically significantly different from 0.5, doesn’t seem very high (55%). This is concerning given this decoder was utilized to trigger stimulus presentation in the subsequent run, correct?

- The authors also should describe briefly how the “reliable” stimulus set was arrived at, even though it is published elsewhere.

- Regarding the test of positive affect up-regulation on subsequent neural processing of affective images, did the authors test for an interaction effect by valence of the subsequent affective image? Some were positive and some were negative, correct? One might suspect that the effect of positive affect self-induction on the subsequent affective processing of the triggered image would vary as a function of the image valence.

Reviewer #2: In this manuscript, the author applied a novel experimental design to test the emotional processing bias of affect state towards subsequent stimulus. A machine learning model based neural feedback is involved to establish a prior positive affective state. Overall, I think the research question is interesting and important to the field and the experimental method is novel. However, I have some doubts on the neural feedback part as well as some comments on the main results presentation.

Neural feedback: The effectiveness of point-to-hyperplane distance depends on the performance of the decoding model. In other words, if the decoding model based on an individual subject does not perform well, the hyperplane could be randomly generated. Point-to-hyperplane distance is meaningless in this scenario.
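The point-to-hyperplane distance the reviewer refers to can be sketched as follows. This is an illustrative sketch only, assuming a linear SVM decoder with Platt scaling (a common choice for real-time MVPA); the toy data, names, and scikit-learn usage are assumptions, not the study's actual implementation.

```python
# Illustrative sketch: signed distance of a "brain state" feature vector to a
# linear SVM class boundary, f(x)/||w||, plus Platt scaling of the decision
# value into a probability. Synthetic data only.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV

rng = np.random.default_rng(0)
# Toy "brain states": two well-separated Gaussian classes over 50 voxels
X = np.vstack([rng.normal(-0.5, 1, (100, 50)), rng.normal(0.5, 1, (100, 50))])
y = np.repeat([0, 1], 100)  # 0 = negative valence, 1 = positive valence

svm = LinearSVC(C=1.0).fit(X, y)
w = svm.coef_.ravel()
# Signed point-to-hyperplane distance: positive side = "positive valence"
signed_dist = svm.decision_function(X) / np.linalg.norm(w)

# Platt scaling: fit a sigmoid on decision values to obtain probabilities
platt = CalibratedClassifierCV(LinearSVC(C=1.0), method="sigmoid", cv=5).fit(X, y)
p_positive = platt.predict_proba(X)[:, 1]  # calibrated P(positive valence)
```

The reviewer's concern corresponds to the case where the classes are not separable: then `w` is essentially arbitrary and `signed_dist` carries no affective information, which is why per-subject decoding accuracy matters.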
Table 1 reported average performance at the group level but not at the individual level. I wonder whether any individuals have low decoding performance on affective processing (e.g. not significant for within-subject classification)? If so, what strategy should be applied to them? If not, is there any subject-level correlation between decoding performance and regulation ability? Such a correlation could imply that the robustness of the predefined decoding model is one factor for valence regulation in the current neurofeedback setting.

During reading, I found it hard to find the corresponding result that demonstrates the main claim: that the affective state biases emotion processing of the subsequent stimulus. None of the main figures presents this effect. Although this point is raised in the results section by showing a significant beta and p value for the GLMM model, a better explanation of the GLMM result is needed. Adding equations or figures to describe this model would help readers better understand this result (e.g. adding a figure between figure 5 and 6, or adding a new panel to figure 5 to illustrate how the data fits in the model and the model's performance). Also, was any analysis done to directly compare neural-feedback guided and unguided explicit affect regulation in order to prove the necessity of neural feedback?

Some minor issues:

Please provide the full name of GLMM. I assume it's Generalized linear mixed models, but I didn't find any place that clarifies that.

In figure 2's bottom diagram, it might be better to add another line to illustrate the scenario in which the real-time valence estimate failed to reach the initial threshold (0.8) but reached the reduced threshold.

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?).
If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

28 Jan 2022

RESPONSE TO REVIEWERS

We appreciate the reviewers’ thoughtful comments on our prior submission. In response we have added new data analyses and clarifications that have resulted in a much improved manuscript. We first describe author-initiated changes that we made to the manuscript, which we deemed necessary based on detection of small omissions or errors in our prior submission. We then reiterate each reviewer’s comment or concern below and describe our specific response.

Author Initiated Changes:

1. We altered Fig 2 to include example image stimuli for which the authors own the copyrights.

2. We incorrectly reported the template space of Fig 4 as Talairach space. We have corrected this to report the space as MNI.

3.
We altered Fig 7 (previously Fig 6) to plot the axes on top of the individual data markers. In the original version of the figure the data markers occluded parts of the x- and y-axes.

4. We replaced references to the F-test with the t-test throughout the manuscript. We originally reported the statistical tests for Matlab’s lme function as F-tests. This is incorrect. Matlab’s lme function uses the t-test to calculate significance of fixed effects. However, the p-values and effect sizes reported in the previous manuscript were correct.

5. We modified the description of our use of fmriprep software in the Main Manuscript and modified the S1 Methods to include fmriprep’s detailed description of the processing pipeline, which the authors of the tool request be included verbatim in all publications (the text is released under the [CC0](https://creativecommons.org/publicdomain/zero/1.0/) license).

Responses to Reviewer #1:

This is a technically impressive investigation of the potential for fMRI to decode affective states from pictorial images and to utilize this information to track and trigger subsequent presentation of affective stimuli. I have several major comments and then some additional areas for clarification:

1). One conceptual issue I note as problematic in the presentation and framing of these results is that the design and execution of this task somehow replicates or captures key ingredients of CBT. With some notable exceptions (e.g. reminiscence therapy), CBT does not require or engage the participant in deliberate up-regulation of positive emotion via internally-generated strategies, but it instead focuses on realistic appraisal of thoughts and situations with the typical focus on removing or altering distorted interpretations that tend to support and generate negative affect.
This is the typical form of cognitive reappraisal as practiced in numerous CBT treatments, and it is not about “positive thinking” but “realistic thinking.” The approach utilized here is qualitatively different, and I think this needs to be better explained in the Introduction and Discussion.

In response, we have removed references to CBT throughout the manuscript (Introduction paragraph: lines 51-59, 68-76 and Discussion paragraph: lines 601-613). We have recast the contributions of this work in terms of attentional deployment strategies as conceptualized by the Process Model of emotion regulation.

2). The use of the word “causal” in the title and throughout the manuscript is too provocative and controversial a term and does not accurately reflect the approach utilized in this study. There are numerous arguments about how causality can be inferred and what constitutes causal inference, but I believe the authors are going too far with the use of this term given the experimental design utilized here. This term should be removed in favor of more accurate descriptions such as “experimental manipulation.”

In response, we have removed the word “causal” from the manuscript, S1 Methods, and Open Science Framework repository.

3). Regarding the claim that self-induced positive affective states positively bias the affect processing of subsequent image stimuli, I think there should be more consideration and nuance with this interpretation. First, the coarse time resolution of the fMRI protocol utilized here (and the slow hemodynamic response function more generally) doesn’t allow for a clean separation of carryover effects from the positive affect up-regulation period to the processing of the triggered stimulus cue, and it’s not clear how these two periods interact in the resultant brain activation data.
For example, it’s possible that participants may have continued to engage in the positive affect up-regulation strategy throughout the presentation of the subsequent stimulus cue (and thus devoted minimal attention to actually processing the cue), but unfortunately there is no way to discern the degree to which the subsequent cue was processed and the prior affective up-regulation strategy ceased. Given that EMG and SCR data were presumably collected during this run as well, I wonder if the authors can also test to see if EMG and SCR data support the idea that affective up-regulation has a distinct psychophysiological readout and whether EMG and SCR data following Mod-CR trials vs. Mod-PS trials show differences as a function of affective up-regulation?

We appreciate this comment and recognize this as a critical question of the approach. In response we want to clarify what we think was being conveyed in the comment above. We believe that the reviewer requests an independent test, using psychophysiology, of the two separate parts of the Mod-FS trials. The reviewer wants a psychophysiology test of the volitional self-induction step of the trial (Fig 2, Mod-FS: self-induce) as well as a psychophysiological test of the main experimental manipulation, i.e., differential psychophysiological responses to image-based induction of the feedback-triggered stimuli versus passive stimuli.

A quick note on our reasoning for where and when we used psychophysiology to independently validate neuroimaging findings in this experiment. Our group has published multiple papers on the relative sensitivity of neural decodings versus psychophysiological responses (SCR, facial EMG, and HR) in detecting affect processing. We have found, repeatedly, that neural decodings are approximately 3 times more sensitive than SCR in detecting arousal[1] and approximately 10 times more sensitive than fEMG or HR deceleration in detecting valence[2,3].
In this manuscript, we used psychophysiology to independently validate the induction of affect processing brain states by image stimuli during the Id-PS trials. We have reproduced these effects multiple times, which allows us to know that these manipulations can be detected by psychophysiological responses and act as independent validation of the training dataset on which our neural decodings were constructed. This is a recommended step for validating affect induction within neuroimaging experiments[4]. However, we understand that the slow temporal evolution of the HRF raises skepticism of any claim relying on relatively fast fMRI dynamics.

Therefore, in response to this comment, we conducted these requested independent tests. First, using a GLMM we detected significant responses (both SCR and cEMG) to volitional self-induction of valence. This finding is now reported in the Results (lines 484-490). Second, using a GLMM (with similar fixed and random effects as those used for the neural decodings) we detected that the experimental manipulation has a differential effect on SCR but not cEMG. We report these findings in the Results (lines 536-546) and describe our interpretation of these findings in the Discussion (lines 623-626).

An additional step we took to remove concerns surrounding HRF dynamics influencing the experiment was to include the decoding of the trigger (or the previous EPI volume in passive trials) in the GLMM of our main experimental manipulation. Controlling for the potential entrainment of affect processing signal, we still see a robust effect of the primary experimental manipulation (see Fig 5 and lines 495-522).

Areas for clarification:

- During the Identification Task, were all IAPS images positive in valence? What were the criteria utilized to select these images?

The answer is no: neural decoding models were trained on affect processing brain states induced by image stimuli from Id-PS trials, which were drawn from a broad range of valence and arousal scores.
The specific valence and arousal scores of all stimuli are depicted in Fig 3 (Id-PS stimuli are the gray markers). In response, we have highlighted this fact in the manuscript’s Methods (lines 234-235), which describes the maximal spanning of the arousal-valence plane (more precisely, a maximal subspace span[1]).

- It’s also unclear what portions of this task were utilized to train the MVPA decoder. Was it the brain activity during passive viewing of the images, or that during the active recall period, or both? Moreover, how did the authors verify that these images actually induced a state of positive emotion in the participants?

Similar to the comment above, the neural decoding models were trained within-subject using the brain states induced by the Id-PS trials. We independently validated that affect processing induction occurs via psychophysiological responses for the independent affective dimensions (SCR for arousal and cEMG for valence). This is the process recommended by Heller et al.[4] for validating affect processing induction in neuroimaging experiments. We describe this validation of affect induction in Section: Psychophysiological Response Validation of Affect Processing Induction via Image Stimuli (lines 363-383).

- The scaling of the HRF function in training the MVPA decoder by the valence score rating difference from the mean Likert rating assumes, to some extent, that each individual perceived the image to be as positive or negative as the normed rating provided in the IAPS images. This seems to be a strong assumption given that there is probably a great deal of individual variability in subjective responses to these images. Why was a uniform weighting of the HRF not employed? How much variability was there in the weighting of the HRF within a particular class (e.g., positive and negative)?

We acknowledge that there exists individual variability in affect processing responses to the IAPS stimuli.
However, there is strong evidence in the literature that, for classification purposes, the assumption that individuals respond congruently to the normative scores (when discretized according to the middle Likert score) holds. These effects have been reproduced by multiple labs for both within-subject and between-subject neural decoding experiments[1,2,5-7]. Our group also has published a study comparing neural decoding performance when the MVPA is trained with normative scores versus self-reported scores: no significant differences were found in classification performance[8].

With respect to the variable weighting, this technique is used to simulate activation-label relationships of the beta-series within the real-time decoding process. We cannot truly construct beta-series in real-time due to the long tail of the HRF existing in the future (i.e., the volumes necessary for the regression haven’t been acquired yet). Rather, real-time MVPA decodes each volume individually as they arrive from the reconstruction computer. In training the real-time decoder, however, we can shape the labels of the training set to simulate the fluctuations that would be observed and to approximate some of the benefits of beta-series in real-time. These variable weightings are specific to the low-quality real-time decoding models and were not used in the high-quality post-hoc decoding models.

- The average prediction accuracy for valence for the full stimulus set, though statistically significantly different from 0.5, doesn’t seem very high (55%). This is concerning given this decoder was utilized to trigger stimulus presentation in the subsequent run, correct?

Decoding performance is largely driven by the difficulty of the underlying problem. Neural decoding models of affect and emotion processing are built from brain states that are induced by stimuli. However, experimenters control the distribution of the properties of these stimuli.
Past decoding modeling efforts[6,7] used hand-chosen image stimuli that clustered into extremes of affective and emotional experience, thereby rendering the underlying decoding problem easier to classify and resulting in high classification accuracy. However, these stimulus sets do not generalize to ecological affective/emotional experience. Our group has published multiple papers[2,5,8] that have explored decoding of computationally sampled stimuli that reflect the maximum range of affective experiences that can be induced by the IAPS image set (see Fig 3, Id-PS stimuli). We have also devised and validated an algorithm for identifying stimuli that reliably induce similar affective brain states across subjects, which we term the Reliable Stimulus Set (RSS)[5]. We have shown that RSS stimuli cluster at the extremes of perceived affective experience (both valence and arousal) and resemble the hand-chosen stimulus sets used to report affect/emotion decoding performance in the literature. Decoding models achieve very high classification performance on RSS stimuli compared to stimuli drawn from the full affective range of experience within IAPS. In fact, our decoding approach achieves accuracies of .75-.79 on RSS, which is similar to the best reported classification performance in the literature[5,6]. In this context, the statistically significant performance (accuracies of .55-.61) we report for the full stimulus set, a much more challenging and ecologically relevant stimulus set, is strong.

We feel it is important to report decoding performance on the full stimulus set, which represents the expected out-of-sample real-world performance of the model, as well as decoding performance that matches results found in the literature, which we represent as the performance decoding the RSS (a subset of the full stimulus set).
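As an aside on how a modest accuracy such as .55 can nonetheless differ significantly from chance (.5), an exact binomial test makes the point. The trial count below is an assumption chosen purely for illustration, not the study's actual number of decoded trials.

```python
# Illustrative only: a modest accuracy (0.55) over many trials can still
# differ significantly from chance (0.5). The trial count is assumed.
from scipy.stats import binomtest

n_trials = 1000                      # assumed number of decoded trials
n_correct = int(0.55 * n_trials)     # 55% accuracy
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(result.pvalue)                 # small for this trial count
```

The significance of an accuracy depends jointly on its distance from chance and on the number of trials, which is why statistical significance and practical triggering reliability are separate questions.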
However, as we have reported on these findings multiple times in the past (including independent psychophysiological confirmation of these effects)[1,2,5], we do not feel these results are novel, and so they should not be included in the main manuscript as primary findings. Therefore, we have reported extensive details on our stimulus selection approach, the role of the RSS, and the validation of our decoding methods in the (supplemental) S1 Methods. We have revised our summary of these findings in the Results section of the main manuscript to reflect this emphasis.

- The authors also should describe briefly how the “reliable” stimulus set was arrived at, even though it is published elsewhere.

We include a response to this comment as part of the response to the comment above.

- Regarding the test of positive affect up-regulation on subsequent neural processing of affective images, did the authors test for an interaction effect by valence of the subsequent affective image? Some were positive and some were negative, correct? One might suspect that the effect of positive affect self-induction on the subsequent affective processing of the triggered image would vary as a function of the image valence.

This is an excellent point, which we had not originally considered. In response, we included a fixed effect for the interaction between the main experimental manipulation (feedback triggered stimulus vs passive) and the normative valence score of the subsequent stimulus, which was not significant (see Fig 6). We updated our manuscript to reflect this finding (line 494 to line 521). We included similar interaction terms in our psychophysiological confirmation of the effects of the main experimental manipulation (see line 536 to line 546).

Responses to Reviewer #2:

In this manuscript, the author applied a novel experimental design to test the emotional processing bias of affect state towards subsequent stimulus.
A machine-learning-model-based neural feedback approach is used to establish a prior positive affective state. Overall, I think the research question is interesting and important to the field, and the experimental method is novel. However, I have some doubts about the neural feedback component as well as some comments on the presentation of the main results.

Neural feedback: The effectiveness of the point-to-hyperplane distance depends on the performance of the decoding model. In other words, if the decoding model for an individual subject does not perform well, the hyperplane could be essentially random, and the point-to-hyperplane distance would be meaningless in this scenario. Table 1 reported average performance at the group level but not at the individual level. I wonder whether any individuals have low decoding performance on affective processing (e.g., non-significant within-subject classification)? If so, what strategy should be applied to them? If not, is there any subject-level correlation between decoding performance and regulation ability? Such a correlation would imply that the robustness of the predefined decoding model is one factor in valence regulation in the current neurofeedback setting.

In response, we conducted this analysis and detected a small but significant effect (see S2 Fig) of model performance positively associating with the magnitude of valence at triggering. This effect matches the concern described above. We therefore recomputed the GLMM that modeled our primary experimental manipulation, controlling for within-subject model accuracy as a fixed effect. Here, the effect was not significant. We have revised the manuscript to reflect these changes (line 494 to line 521). Also in response to this comment, we report the number of subjects who achieved significant within-subject decoding performance (34/39).
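For readers unfamiliar with the feedback signal under discussion, the point-to-hyperplane distance of a linear decoder can be sketched as follows. The weights, bias, and feature vector below are illustrative placeholders, not values from the study's trained models.

```python
import numpy as np

# Illustrative linear decoder: w and b would come from a trained model
# (e.g., a linear SVM); the values here are placeholders only.
w = np.array([0.5, -1.0, 2.0])   # hyperplane normal (decoder weights)
b = -0.25                        # hyperplane offset (decoder bias)

def signed_distance(x, w, b):
    """Signed Euclidean distance from feature vector x to the decision
    hyperplane w.x + b = 0. The sign gives the predicted class side; the
    magnitude can serve as a graded real-time feedback signal."""
    return float((w @ x + b) / np.linalg.norm(w))

x = np.array([1.0, 0.5, 0.25])   # a hypothetical single-volume feature vector
d = signed_distance(x, w, b)
```

As the reviewer notes, this magnitude is only interpretable when the underlying decoder performs well; for a near-chance decoder the hyperplane orientation, and hence the distance, carries little information.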
We have revised the manuscript to report these findings (line 386 to line 400).

During my reading, I found it hard to identify the result that demonstrates the main claim: that an affective state biases the emotion processing of a subsequent stimulus. None of the main figures presents this effect. Although this point is raised in the Results section by reporting a significant beta and p-value for the GLMM, a better explanation of the GLMM result is needed. Adding equations or figures to describe this model would help readers better understand this result (e.g., adding a figure between Figures 5 and 6, or adding a new panel to Figure 5 to illustrate how the data fit the model and the model's performance). Also, was any analysis done to directly compare neurofeedback-guided and unguided explicit affect regulation in order to establish the necessity of the neurofeedback?

In response to the first point, we constructed a new figure (see Fig 6) that summarizes these effects visually.

In response to the second point, no, comparing real-time guided to unguided self-induction was not part of the experiment design. This is a relatively small-scale neuroimaging study (funded by a NARSAD YIA award with a small budget), and the intent was to maximize the effect size of the experimental manipulation using as few subjects as possible. This is an interesting question, i.e., whether the feedback or the dynamic trigger is the most critical element of the technology; however, it is beyond the scope of this work.

Some minor issues:

Please provide the full name of GLMM.
I assume it's generalized linear mixed models, but I didn't find any place that clarifies this.

In response, we have modified the manuscript to provide the full name of the generalized linear mixed-effects model (GLMM) at first usage (line 369), before switching to the acronym for the remainder of the text.

In the Figure 2 bottom diagram, it might be better to add another line to illustrate the scenario in which the real-time valence estimate fails to reach the initial threshold (0.8) but reaches the reduced threshold.

In response, we have elaborated the caption of Figure 2 to more clearly describe the dashed line representing the trigger threshold as a boundary that evolves through time. We chose this solution in order to maintain the one-to-one relationship between the elements of the trial (denoted by the image blocks) and the diagram of the real-time triggering system. A second line would not allow for this one-to-one correspondence.

References

1. Bush, K. A., Privratsky, A., Gardner, J., Zielinski, M. J. & Kilts, C. D. Common Functional Brain States Encode both Perceived Emotion and the Psychophysiological Response to Affective Stimuli. Scientific Reports 8, (2018).
2. Wilson, K. A., James, G. A., Kilts, C. D. & Bush, K. A. Combining Physiological and Neuroimaging Measures to Predict Affect Processing Induced by Affectively Valent Image Stimuli. Sci Rep 10, 9298 (2020).
3. Bush, K. A., James, G. A., Privratsky, A. A., Fialkowski, K. P. & Kilts, C. D. An action-value model explains the role of the dorsal anterior cingulate cortex in performance monitoring during affect regulation. bioRxiv 23 (2020) doi:10.1101/2020.09.08.283671.
4. Heller, A. S., Greischar, L. L., Honor, A., Anderle, M. J. & Davidson, R. J. Simultaneous acquisition of corrugator electromyography and functional magnetic resonance imaging: A new method for objectively measuring affect and neural activity concurrently. NeuroImage 58, 930–934 (2011).
5. Bush, K. A. et al.
Brain States That Encode Perceived Emotion Are Reproducible but Their Classification Accuracy Is Stimulus-Dependent. Frontiers in Human Neuroscience 12, (2018).
6. Baucom, L. B., Wedell, D. H., Wang, J., Blitzer, D. N. & Shinkareva, S. V. Decoding the neural representation of affective states. NeuroImage 59, 718–727 (2012).
7. Chang, L. J., Gianaros, P. J., Manuck, S. B., Krishnan, A. & Wager, T. D. A Sensitive and Specific Neural Signature for Picture-Induced Negative Affect. PLOS Biology 13, e1002180 (2015).
8. Bush, K. A., Inman, C. S., Hamann, S., Kilts, C. D. & James, G. A. Distributed Neural Processing Predictors of Multi-dimensional Properties of Affect. Frontiers in Human Neuroscience 11, (2017).

Submitted filename: Response_To_Reviewers.docx

17 Feb 2022

A test of affect processing bias in response to affect regulation
PONE-D-21-19149R1

Dear Dr. Bush,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact.
If they'll be preparing press materials, please inform our press team as soon as possible, no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Desmond J. Oathes
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions
Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data, e.g., participant privacy or use of data from a third party, those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5.
Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: Glad to see that the concern regarding within-participant decoder performance has been addressed. According to these data, it seems that there is a correlation between decoder performance and the magnitude of valence at triggering, but it does not influence the main effect of this experiment. This could be an important note to help researchers who plan to follow this novel paradigm. The new Figure 6 is informative and presents the main results well. I think the figures and analysis are in good shape and fit well with the main claim. I have no further concerns regarding this manuscript.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Ke Bo

21 Feb 2022

PONE-D-21-19149R1
A test of affect processing bias in response to affect regulation

Dear Dr.
Bush:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff
on behalf of
Dr. Desmond J. Oathes
Academic Editor
PLOS ONE