Orlando Fernandes, Liana Catrina Lima Portugal, Rita de Cássia S Alves, Tiago Arruda-Sanchez, Eliane Volchan, Mirtes Garcia Pereira, Janaina Mourão-Miranda, Letícia Oliveira.
Abstract
Whether subtle differences in the emotional context during threat perception can be detected by multi-voxel pattern analysis (MVPA) remains a topic of debate. To investigate this question, we compared the ability of pattern recognition analysis to discriminate between patterns of brain activity to a threatening versus a physically paired neutral stimulus in two different emotional contexts (the stimulus being directed towards or away from the viewer). The directionality of the stimuli is known to be an important factor in activating different defensive responses. Using multiple kernel learning (MKL) classification models, we accurately discriminated patterns of brain activation to threat versus neutral stimuli in the directed towards context but not during the directed away context. Furthermore, we investigated whether it was possible to decode an individual's subjective threat perception from patterns of whole-brain activity to threatening stimuli in the different emotional contexts using MKL regression models. Interestingly, we were able to accurately predict the subjective threat perception index from the pattern of brain activation to threat only during the directed away context. These results show that subtle differences in the emotional context during threat perception can be detected by MVPA. In the directed towards context, the threat perception was more intense, potentially producing more homogeneous patterns of brain activation across individuals. In the directed away context, the threat perception was relatively less intense and more variable across individuals, enabling the regression model to successfully capture the individual differences and predict the subjective threat perception.
Keywords: Decoding; MVPA; Perception; Threat; fMRI
Year: 2020 PMID: 31446554 PMCID: PMC7648008 DOI: 10.1007/s11682-019-00177-6
Source DB: PubMed Journal: Brain Imaging Behav ISSN: 1931-7557 Impact factor: 3.978
Stimuli evaluation report: Threat and neutral stimulus ratings for valence, arousal, and complexity, and physical characteristics of the pictures (brightness, contrast, and spatial frequency) in the directed towards and directed away contexts. The threat perception index for threat stimuli is presented for the directed towards and directed away contexts
| Measure | Threat (towards) | Neutral (towards) | Threat (away) | Neutral (away) |
|---|---|---|---|---|
| Threat perception index | 12.38 (7.47) | – | 10.29 (7.94) | – |
| Valence | 2.06 (1.22) | 5.20 (0.99) | 2.73 (1.37) | 5.47 (1.01) |
| Arousal | 6.56 (2.20) | 3.65 (2.02) | 5.50 (2.16) | 3.63 (1.99) |
| Complexity | 3.00 (0.74) | 2.92 (0.57) | 2.59 (0.77) | 3.02 (1.04) |
| Brightness | 76.47 (23.75) | 79.33 (24.98) | 92.84 (36.86) | 87.07 (37.10) |
| Contrast | 25.43 (9.73) | 27.67 (8.78) | 20.95 (10.57) | 21.42 (7.37) |
| Spatial frequency | 0.96 (0.10) | 0.99 (0.14) | 0.99 (0.06) | 1.02 (0.11) |
Standard deviations are shown within parentheses. Brightness, contrast and spatial frequency were measured according to Bradley and colleagues (Bradley et al. 2007). Brightness was defined as the mean RGB (red, green and blue) value for each pixel, averaged across all pixels for each picture. Contrast was defined as the standard deviation of the mean RGB values computed across pixels for each column. Spatial frequency was defined as the median fast Fourier transform (FFT) power, which was computed for each row and column and then averaged. The valence, arousal and complexity ratings and the brightness, contrast and spatial frequency values for the threat and neutral stimuli were previously published in an earlier study from our group (Fernandes et al. 2017).
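For concreteness, a minimal Python sketch of these three picture statistics follows, assuming 8-bit RGB images loaded with Pillow. The contrast and spatial-frequency computations are one plausible reading of the prose above, not a verified reimplementation of Bradley et al. (2007); all function names are illustrative.

```python
# Minimal sketch of the picture statistics described above (one plausible
# reading of Bradley et al. 2007); assumes 8-bit RGB images and Pillow.
import numpy as np
from PIL import Image

def picture_statistics(path):
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)  # H x W x 3
    pixel_means = rgb.mean(axis=2)  # mean RGB value per pixel (H x W)

    # Brightness: mean RGB value per pixel, averaged across all pixels.
    brightness = pixel_means.mean()

    # Contrast: std of the mean RGB values computed across pixels for each
    # column (read here as: one mean per column, then the std across columns).
    contrast = pixel_means.mean(axis=0).std()

    # Spatial frequency: median FFT power computed for each row and each
    # column, then averaged (normalization details are assumptions).
    def median_fft_power(lines):
        power = np.abs(np.fft.rfft(lines, axis=-1)) ** 2
        return np.median(power, axis=-1).mean()

    spatial_frequency = 0.5 * (median_fft_power(pixel_means)
                               + median_fft_power(pixel_means.T))
    return brightness, contrast, spatial_frequency
```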
Fig. 1 Experimental Design: (a) Experimental paradigm – Each trial began with the presentation of a photograph and a fixation spot. After 3 s, a square appeared around the fixation spot 700–1200 ms prior to target onset. The target was a small annulus that appeared around the fixation spot. The total duration of the trial was 5 s. Each block consisted of three photographs of the same category and was followed by a fixation cross on a black background that remained for 12 s. The experimental session consisted of 56 blocks (14 blocks per category), pseudo-randomized throughout the experiment and divided into 4 runs. (b) MKL classification models – The first model was trained to discriminate between the patterns of brain activity to threat versus neutral stimuli in the directed towards context; the second model, in the directed away context. (c) MKL regression models – Two regression models were trained to predict the subjects' threat perception indices: the first from the patterns of brain activation to threat stimuli directed towards the viewer, and the second from patterns of brain activation to threat stimuli directed away from the viewer
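As a concrete illustration of the block structure in Fig. 1a, the sketch below generates a shuffled schedule of 56 blocks (14 per category) split over 4 runs. The four category labels are assumptions based on the implied 2 × 2 design (threat/neutral × towards/away), and plain shuffling stands in for the paper's unspecified pseudo-randomization constraints.

```python
# Illustrative block schedule for the design in Fig. 1a: 56 blocks,
# 14 per category, divided into 4 runs. Category names are assumptions.
import random

CATEGORIES = ["threat_towards", "neutral_towards", "threat_away", "neutral_away"]
BLOCKS_PER_CATEGORY = 14
N_RUNS = 4

def build_schedule(seed=0):
    rng = random.Random(seed)
    blocks = [c for c in CATEGORIES for _ in range(BLOCKS_PER_CATEGORY)]
    rng.shuffle(blocks)  # stand-in for the actual pseudo-randomization rules
    per_run = len(blocks) // N_RUNS  # 14 blocks per run
    return [blocks[i * per_run:(i + 1) * per_run] for i in range(N_RUNS)]

# Each block would then present three 5-s trials of the same category,
# followed by the 12-s fixation period described in the caption.
```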
Fig. 2 Multiple Kernel Learning Framework. MKL Classification Model (left panel): (a) The training data for the multiple kernel learning (MKL) classification model consist of examples that pair a contrast image from the GLM model with the corresponding label of the experimental condition (threat or neutral). (b) The MKL framework uses a predefined anatomical template to segment the contrast images into 120 anatomical brain regions. (c) During training, the MKL model simultaneously learns the contribution of each region to the predictive function (kernel/region weights) and, within each region, the contribution of each voxel (voxel weights). (d) During the test phase, given the contrast image of a test subject, the MKL model predicts its corresponding experimental condition (threat or neutral). (e) The classification model performance is evaluated using accuracy and the ROC curve (see methods and results). MKL Regression Model (right panel): (a) The training data for the MKL regression model consist of examples that pair a contrast image from the GLM model with the corresponding threat perception index. (b) The MKL framework uses a predefined anatomical template to segment the contrast images into 120 anatomical brain regions. (c) During training, the MKL model simultaneously learns the contribution of each region to the predictive function (kernel/region weights) and, within each region, the contribution of each voxel (voxel weights). (d) During the test phase, given the contrast image of a test subject, the MKL model predicts its corresponding threat perception index. (e) The model performance is evaluated using three metrics that measure the agreement between the predicted and the actual threat perception indices: Pearson's correlation coefficient (r), coefficient of determination (R²) and mean squared error (MSE)
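The original MKL models were presumably fit with a dedicated neuroimaging toolbox; as a rough illustration of step (c) only, here is a toy Python/scikit-learn sketch in which each anatomical region contributes one linear kernel and the region weights are updated by a simple alternating heuristic. This is a sketch of the general MKL idea, not the solver used in the paper.

```python
# Toy sketch of the MKL idea in panel (c): one linear kernel per region,
# with region weights and the SVM fit by a simple alternating heuristic.
import numpy as np
from sklearn.svm import SVC

def region_kernels(X, region_masks):
    """X: (n_samples, n_voxels) contrast images; region_masks: boolean
    voxel masks, one per anatomical region. Returns one Gram matrix each."""
    return [X[:, m] @ X[:, m].T for m in region_masks]

def fit_mkl(kernels, y, n_iter=20, C=1.0):
    M = len(kernels)
    d = np.full(M, 1.0 / M)  # region (kernel) weights, summing to 1
    for _ in range(n_iter):
        K = sum(dm * Km for dm, Km in zip(d, kernels))
        svm = SVC(C=C, kernel="precomputed").fit(K, y)
        alpha = np.zeros(len(y))
        alpha[svm.support_] = svm.dual_coef_.ravel()  # signed dual coefficients
        # Re-weight regions by their contribution to the decision function.
        contrib = np.array([dm ** 2 * alpha @ Km @ alpha
                            for dm, Km in zip(d, kernels)])
        d = contrib / contrib.sum()
    K = sum(dm * Km for dm, Km in zip(d, kernels))
    return SVC(C=C, kernel="precomputed").fit(K, y), d
```

Prediction for a test subject (panel d) would use cross-kernels between test and training images, combined with the same learned weights d.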
MKL classification model performance
| Model | Cross-validation procedure | Balanced accuracy | Class 1 (threat) | Class 2 (neutral) | ROC/AUC |
|---|---|---|---|---|---|
| Threat vs. neutral, directed towards | LOSO | 78.95 (p = …) | 78.95 (p = 0.01) | 78.95 (p = 0.01) | 0.82 |
| | 10-fold | 72.37 (p = 0.01) | 63.16 (p = …) | 81.58 (p = 0.01) | 0.78 |
| Threat vs. neutral, directed away | LOSO | 47.37 (p = …) | 47.37 (p = …) | 47.37 (p = …) | 0.49 |
| | 10-fold | 60.53 (p = 0.08) | 52.63 (p = …) | 68.42 (p = …) | 0.55 |
p-values were obtained by permutation test (100 permutations). LOSO = leave-one-subject-out procedure; “10-fold” = 10-fold cross-validation procedure
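A minimal sketch of how such permutation p-values can be computed, assuming a `fit_and_score` callable that runs the whole cross-validated pipeline and returns the balanced accuracy (the name is illustrative):

```python
# Permutation test sketch: refit the full cross-validated model on shuffled
# labels and count how often the null scores reach the observed score.
import numpy as np

def permutation_p_value(fit_and_score, X, y, n_perm=100, seed=0):
    rng = np.random.default_rng(seed)
    observed = fit_and_score(X, y)
    null = [fit_and_score(X, rng.permutation(y)) for _ in range(n_perm)]
    # Add-one correction: the observed statistic counts as one permutation.
    return (1 + sum(s >= observed for s in null)) / (n_perm + 1)
```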
Fig. 3 MKL classification model based on the 10-fold cross-validation scheme: (a) Whole-brain map showing the kernel weights per region; the color bar represents the full range of kernel weights. (b) Images showing the voxel weights within the regions with the highest contributions to the MKL classification model, in sagittal or axial plane slices ("x" or "z" MNI coordinates, respectively). The top 10 regions ranked by the MKL classification model as relevant for discriminating between patterns of brain activity to threat versus neutral stimuli in the directed towards context are shown; the regions' weights (in percentage) are shown in parentheses. The color bars represent the full range of voxel weights within each region. The red circle highlights a small region. Acronyms: IFG – inferior frontal gyrus; EBA – extrastriate body area; PFC – prefrontal cortex; OFC – orbitofrontal cortex
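Reading off the "top 10 regions" from a fitted model amounts to sorting the per-region kernel weights; a small sketch (using the weights `d` from the toy `fit_mkl` above; region names are placeholders):

```python
# Rank regions by their learned kernel weights and report percentages,
# as in the parenthesized region weights of Fig. 3b.
def top_regions(region_names, d, k=10):
    order = sorted(range(len(d)), key=lambda i: d[i], reverse=True)[:k]
    return [(region_names[i], 100.0 * d[i]) for i in order]
```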
MKL regression model performance
| Model | Cross-validation procedure | r | R² | MSE |
|---|---|---|---|---|
| Threat directed towards regression | LOSO | 0.48 (p = 0.03) | 0.23 (p = …) | 42.94 (p = 0.03) |
| | 10-fold | 0.27 (p = 0.08) | 0.07 (p = …) | 54.81 (p = …) |
| Threat directed away regression | LOSO | 0.42 (p = …) | 0.18 (p = …) | 52.24 (p = 0.02) |
| | 10-fold | 0.56 (p = 0.01) | 0.31 (p = 0.01) | 43.41 (p = …) |
p-values were obtained by permutation test (100 permutations). LOSO = leave-one-subject-out procedure; “10-fold” = 10-fold cross-validation procedure
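In the table above R² equals r² in every row (e.g., 0.48² ≈ 0.23, 0.56² ≈ 0.31), so the coefficient of determination is evidently the squared Pearson correlation between predicted and actual indices. A minimal sketch of the three metrics, assuming cross-validated predictions collected into arrays:

```python
# Agreement metrics between actual and predicted threat perception indices.
import numpy as np
from scipy.stats import pearsonr

def regression_metrics(y_true, y_pred):
    r, _ = pearsonr(y_true, y_pred)
    return {"r": r,
            "R2": r ** 2,  # squared correlation, matching the table
            "MSE": np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)}
```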
Fig. 4 MKL regression model based on the 10-fold cross-validation scheme: (a) Scatter plot of the actual versus predicted threat perception indices for the MKL regression model based on patterns of brain activation to threat stimuli in the directed away context. (b) Images showing the voxel weights within the regions with the highest contributions to the MKL regression model, in sagittal or axial plane slices ("x" or "z" MNI coordinates, respectively). The top 10 regions ranked by the MKL regression model as relevant for predicting the threat perception index are shown; the regions' weights (in percentage) are shown in parentheses. The color bars represent the full range of voxel weights within each region. The red circle highlights a small region