Jongwan Kim, Jing Wang, Douglas H Wedell, Svetlana V Shinkareva.
Abstract
Recent research has demonstrated that affective states elicited by viewing pictures varying in valence and arousal are identifiable from whole brain activation patterns observed with functional magnetic resonance imaging (fMRI). Identification of affective states from more naturalistic stimuli has clinical relevance, but the feasibility of identifying these states on an individual trial basis from fMRI data elicited by dynamic multimodal stimuli is unclear. The goal of this study was to determine whether affective states can be similarly identified when participants view dynamic naturalistic audiovisual stimuli. Eleven participants viewed 5s audiovisual clips in a passive viewing task in the scanner. Valence and arousal for individual trials were identified both within and across participants based on distributed patterns of activity in areas selectively responsive to audiovisual naturalistic stimuli while controlling for lower level features of the stimuli. In addition, the brain regions identified by searchlight analyses to represent valence and arousal were consistent with previously identified regions associated with emotion processing. These findings extend previous results on the distributed representation of affect to multimodal dynamic stimuli.
Year: 2016 PMID: 27598534 PMCID: PMC5012606 DOI: 10.1371/journal.pone.0161589
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Description of audiovisual stimuli.
Means and standard deviations are shown.
| Description | Negative Valence, High Arousal | Negative Valence, Low Arousal | Positive Valence, High Arousal | Positive Valence, Low Arousal | F-test: Valence | F-test: Arousal |
|---|---|---|---|---|---|---|
| Valence | -0.92 (0.24) | -0.83 (0.25) | 0.78 (0.16) | 0.96 (0.16) | | |
| Arousal | 0.37 (0.11) | -0.24 (0.27) | 0.25 (0.18) | -0.38 (0.29) | | |
| Hue | 0.35 (0.12) | 0.26 (0.11) | 0.41 (0.18) | 0.34 (0.24) | | |
| Saturation | 0.25 (0.08) | 0.27 (0.16) | 0.33 (0.16) | 0.35 (0.18) | | |
| Value (Brightness) | 0.47 (0.09) | 0.52 (0.07) | 0.60 (0.12) | 0.54 (0.14) | | |
| Amplitude (dB) (left) | 10.00 (3.38) | 14.75 (3.26) | 13.78 (2.79) | 11.85 (2.86) | | |
| Amplitude (dB) (right) | 10.44 (3.55) | 14.76 (3.26) | 13.73 (2.80) | 11.31 (2.60) | | |
| Frequency (Hz) (left) | 379.22 (361.05) | 241.26 (137.93) | 465.58 (697.45) | 1161.76 (1441.87) | | |
| Frequency (Hz) (right) | 454.71 (372.69) | 241.66 (137.70) | 491.21 (687.18) | 1170.00 (1435.29) | | |
| Motion 1 (slow and drifting) | 122108.01 (54831.82) | 36616.89 (34589.29) | 91659.27 (44101.18) | 32263.66 (31748.91) | | |
| Motion 2 | 53812.82 (22459.94) | 17790.62 (14413.37) | 43402.15 (20830.68) | 15407.53 (14798.81) | | |
| Motion 3 | 38860.56 (15262.57) | 14337.7 (10537.67) | 30810.13 (14032) | 12256.57 (11724.12) | | |
| Motion 4 | 24938.01 (9014.75) | 10388.2 (6386.2) | 18924.16 (8638.21) | 8335.32 (7803.32) | | |
| Motion 5 | 15024.27 (4520.56) | 6986.16 (3533.85) | 12377.26 (5407.69) | 5579.42 (4900.99) | | |
| Motion 6 | 7305.06 (2337.75) | 3863.75 (1697.84) | 6830.85 (3381.41) | 2720.06 (2723.28) | | |
| Motion 7 (fast and transient) | 2597.47 (846.05) | 1308.79 (695.17) | 2668.86 (2197.91) | 989.22 (911.23) | | |
Note: ** p < .01; *** p < .001.
Hue, saturation, and value (brightness) were measured on the 0 to 1 HSV scale; motion features were measured as the number of pixels that differed between frames.
Fig 1. Lower-dimensional representation of affective videos based on behavioral data.
A two-dimensional solution from a separate group of participants described the data well (stress = .282, R2 = .543, n = 49).
Fig 2. A schematic representation of the presentation timing.
(A) Functional localizer. Participants were presented with baseline, auditory (beep), dynamic visual (checkerboard), and naturalistic audiovisual stimuli in a block design. Each block lasted for 12s. (B) Main experiment. Participants were presented with naturalistic audiovisual stimuli selected from the four quadrants of the affective space: high arousal negative valence (HN), low arousal negative valence (LN), low arousal positive valence (LP), and high arousal positive valence (HP). Each audiovisual clip was presented for 5s, followed by 7s fixation.
Number of voxels in each mask, reported by participant.
| Participant | Gray matter | VA | VP | AP | VA∩(VP⋃AP)c | GM∩(VA∩(VP⋃AP)c) |
|---|---|---|---|---|---|---|
| 1 | 12243 | 15554 | 7475 | 577 | 8864 | 2551 |
| 2 | 20279 | 11723 | 4006 | 682 | 8066 | 3086 |
| 3 | 24024 | 12255 | 15253 | 1286 | 4031 | 2040 |
| 4 | 12544 | 13294 | 6664 | 5193 | 6590 | 1261 |
| 5 | 16064 | 8926 | 3028 | 1008 | 5855 | 1977 |
| 6 | 15931 | 11947 | 10337 | 3298 | 3195 | 1031 |
| 7 | 17528 | 5221 | 1784 | 185 | 3449 | 1207 |
| 8 | 15766 | 8078 | 3372 | 538 | 5039 | 1351 |
| 9 | 21940 | 7558 | 1949 | 2048 | 5032 | 2142 |
| 10 | 8689 | 9935 | 3504 | 433 | 6810 | 1101 |
| 11 | 20825 | 13872 | 9461 | 1202 | 6674 | 2951 |
VA, voxels that were more responsive to the audiovisual condition than to baseline; VP, voxels that were more responsive to the checkerboard condition than to baseline; AP, voxels that were more responsive to the beep condition than to baseline (each thresholded at p < .05, FWE-corrected, cluster size > 5). VA∩(VP⋃AP)c, voxels in VA excluding any voxels that also appear in VP or AP; GM∩(VA∩(VP⋃AP)c), the intersection of the individual gray matter mask with VA∩(VP⋃AP)c. Participants are ordered by within-participant classification performance (see below).
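The set operations behind the mask columns can be sketched with boolean arrays. This is a toy sketch with made-up 1-D masks; the study's actual masks are thresholded 3-D statistical maps.

```python
import numpy as np

# Hypothetical 1-D stand-ins for the thresholded maps (5 "voxels").
VA = np.array([True,  True,  True,  True,  False])  # audiovisual > baseline
VP = np.array([True,  False, False, False, False])  # checkerboard > baseline
AP = np.array([False, True,  False, False, False])  # beep > baseline
GM = np.array([True,  True,  True,  False, True])   # gray-matter mask

# VA ∩ (VP ⋃ AP)^c: audiovisual-responsive voxels, excluding voxels that
# also responded to the unimodal checkerboard or beep conditions.
va_only = VA & ~(VP | AP)

# GM ∩ (VA ∩ (VP ⋃ AP)^c): restrict further to gray matter.
final = GM & va_only
```

With these toy masks, `va_only` keeps voxels 2 and 3, and intersecting with the gray-matter mask leaves only voxel 2, mirroring how the rightmost two table columns shrink relative to VA.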
Fig 3. MVPA results for valence.
(A) Classification accuracies for within-participant (filled bars) and across-participant (unfilled bars) valence identification. Participants are ordered by within-participant classification performance. (B) Four clusters, the left medial prefrontal cortex (mPFC), the right posterior part of the cingulate cortex (PCC), the left superior/middle temporal gyrus (STG/MTG), and the right middle frontal gyrus (MFG), are shown on axial slices.
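The across-participant scheme trains on all but one participant and tests on the held-out one. A minimal sketch of that leave-one-participant-out loop follows, using synthetic data and a nearest-centroid classifier as stand-ins; the record does not specify the study's actual classifier, and every number below (participant count, trial count, voxel count, signal strength) is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 participants, 20 trials each, 50 "voxels";
# label 0 = negative valence, 1 = positive valence.
n_part, n_trials, n_vox = 4, 20, 50
labels = np.tile(np.repeat([0, 1], n_trials // 2), (n_part, 1))
signal = np.zeros((2, n_vox))
signal[1, :10] = 1.0  # class-specific pattern on the first 10 voxels
data = rng.normal(size=(n_part, n_trials, n_vox)) + signal[labels]

def nearest_centroid(train_X, train_y, test_X):
    """Assign each test pattern to the class with the closer mean pattern."""
    cents = np.stack([train_X[train_y == c].mean(0) for c in (0, 1)])
    dists = ((test_X[:, None, :] - cents[None]) ** 2).sum(-1)
    return dists.argmin(1)

# Across-participant: train on all but one participant, test on the held-out one.
accs = []
for p in range(n_part):
    train = [q for q in range(n_part) if q != p]
    X_tr = data[train].reshape(-1, n_vox)
    y_tr = labels[train].ravel()
    pred = nearest_centroid(X_tr, y_tr, data[p])
    accs.append((pred == labels[p]).mean())
acc = float(np.mean(accs))
```

Within-participant identification follows the same pattern, but the train/test split is drawn from one participant's own trials instead of from different participants.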
Searchlight results for valence and arousal.
| Anatomical region | Hemisphere | Cluster size | x (MNI) | y (MNI) | z (MNI) | T | Z |
|---|---|---|---|---|---|---|---|
| Valence | | | | | | | |
| PCC | R | 109 | 6 | -46 | 31 | 8.89 | 4.58 |
| MFG | R | 87 | 30 | 11 | 52 | 7.57 | 4.28 |
| STG/MTG | L | 46 | -63 | -58 | 10 | 8.23 | 4.44 |
| mPFC | L | 44 | -15 | 56 | 1 | 9.25 | 4.66 |
| Arousal | | | | | | | |
| PC | R | 56 | 9 | -49 | 52 | 6.67 | 4.03 |
| OFC | R | 54 | 27 | 62 | -17 | 7.47 | 4.25 |
Note: p < .001, uncorrected, cluster size > 40 for valence and 50 for arousal. R, right; L, left; cluster size reported in voxels; T indicates peak t values; Z indicates peak z values; OFC: anterior part of orbitofrontal cortex; PC: precuneus; mPFC: medial prefrontal cortex; PCC: posterior part of the cingulate cortex; STG/MTG: superior/middle temporal gyrus; MFG: middle frontal gyrus.
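A searchlight analysis scores, at each voxel, a classifier trained only on the voxels inside a small sphere centered there; the clusters in the table are where those local scores peak. A minimal sketch of the sphere-neighborhood indexing step is below (the radius and volume shape are arbitrary assumptions, not the study's parameters).

```python
import numpy as np

def sphere_offsets(radius):
    """Integer voxel offsets whose Euclidean norm is <= radius."""
    r = int(np.floor(radius))
    ax = np.arange(-r, r + 1)
    dx, dy, dz = np.meshgrid(ax, ax, ax, indexing="ij")
    keep = dx**2 + dy**2 + dz**2 <= radius**2
    return np.stack([dx[keep], dy[keep], dz[keep]], axis=1)

def searchlight_indices(mask, center, radius):
    """In-mask voxel coordinates inside a sphere around `center`."""
    coords = np.asarray(center) + sphere_offsets(radius)
    # Discard coordinates that fall outside the volume.
    inside = np.all((coords >= 0) & (coords < np.array(mask.shape)), axis=1)
    coords = coords[inside]
    in_mask = mask[coords[:, 0], coords[:, 1], coords[:, 2]]
    return coords[in_mask]

# Toy volume: a 9x9x9 all-true mask, sphere of radius 2 at the center.
toy_mask = np.ones((9, 9, 9), dtype=bool)
neighborhood = searchlight_indices(toy_mask, (4, 4, 4), 2)
```

The full analysis would slide the sphere over every in-mask voxel, run the classifier on each neighborhood's patterns, and write the resulting accuracy back to the center voxel.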
Fig 4. MVPA results for arousal.
(A) Classification accuracies for within-participant (filled bars) and across-participant (unfilled bars) arousal identification. (B) Two clusters, the right anterior part of the orbitofrontal cortex (OFC) and the right precuneus (PC), are shown on sagittal slices.