Abstract
Naturalistic stimuli such as movies, music, and spoken and written stories elicit strong emotions and allow brain imaging of emotions in close-to-real-life conditions. Emotions are multi-component phenomena: relevant stimuli lead to automatic changes in multiple functional components including perception, physiology, behavior, and conscious experiences. Brain activity during naturalistic stimulation reflects all these changes, so parsing emotion-related processing during such complex stimulation is not a straightforward task. Here, I review affective neuroimaging studies that have employed naturalistic stimuli to study emotional processing, focusing especially on experienced emotions. I argue that to investigate emotions with naturalistic stimuli, we need to define and extract emotion features from both the stimulus and the observer.
Keywords: affective neuroscience; brain imaging; emotion; fMRI; movies; naturalistic stimuli; stories
Year: 2021 PMID: 34220474 PMCID: PMC8245682 DOI: 10.3389/fnhum.2021.675068
Source DB: PubMed Journal: Front Hum Neurosci ISSN: 1662-5161 Impact factor: 3.169
FIGURE 1 | A framework for extracting emotion features in naturalistic paradigms. (A) Defining emotion features with a consensual component model of emotional processing (see, e.g., Mauss and Robinson, 2009; Anderson and Adolphs, 2014; Sander et al., 2018). First, the observer evaluates the stimulus's relevance during an emotion elicitation step. Second, the elicited emotion leads to automatic changes in several functional components. (B) Extracting emotion features and example feature time series. With naturalistic paradigms, emotion features can be extracted both from the stimulus and the observer. Stimulus features relate to perceived emotions (here depicted for a movie stimulus), while observer features model experienced emotions. The figure shows examples of potential emotion features and their time series. In the next methodological step, the stimulus and observer feature time series are used to model the neural time series.
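In its simplest form, the final step in this framework — using a feature time series to model the neural time series — is a parametric GLM: the feature is convolved with a haemodynamic response function (HRF) and regressed against each voxel's signal. The sketch below is a minimal illustration only, assuming a double-gamma HRF shape, a 2 s TR, and simulated data; the function names and parameters are hypothetical and do not reproduce the pipeline of any study reviewed here.

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Double-gamma haemodynamic response function (an assumed canonical shape)."""
    return gamma.pdf(t, 6) - 1.0 / 6.0 * gamma.pdf(t, 16)

def feature_regressor(feature_ts, tr=2.0):
    """Convolve a stimulus/observer feature time series with the HRF
    and trim the result to the scan length."""
    t = np.arange(0, 30, tr)                # 30 s HRF kernel sampled at the TR
    conv = np.convolve(feature_ts, hrf(t))  # full convolution
    return conv[: len(feature_ts)]          # align with the fMRI volumes

def fit_glm(voxel_ts, feature_ts, tr=2.0):
    """Ordinary least squares fit of one voxel's time series on an
    HRF-convolved feature regressor plus an intercept; returns the
    weight on the emotion feature."""
    x = feature_regressor(feature_ts, tr)
    X = np.column_stack([x, np.ones_like(x)])
    beta, *_ = np.linalg.lstsq(X, voxel_ts, rcond=None)
    return beta[0]

# Toy example: a binary rating time series that "drives" a simulated voxel.
rng = np.random.default_rng(0)
ratings = (rng.random(200) > 0.7).astype(float)   # e.g., moments rated as sad
voxel = 2.5 * feature_regressor(ratings) + 0.1 * rng.standard_normal(200)
print(f"estimated beta: {fit_glm(voxel, ratings):.2f}")
```

Because the simulated voxel is generated from the regressor itself, the fitted weight recovers the simulated effect size; with real data the same regression is run voxel-wise across the brain.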
Affective neuroimaging studies with naturalistic stimuli included in the review (organized by stimulus type and year of publication).
| fMRI sample size | Stimuli (duration hh:mm:ss) | Emotion features | Feature extraction methods | fMRI analysis methods |
| 25 | | Humor onset; humor duration | Audience laughter | GLM |
| 13 | Comedy clips (4:00) | Experienced sadness and amusement | Retrospective ratings | Parametric GLM |
| 16 | Movie segments (∼24:00) | Experienced valence and arousal | Retrospective ratings | Dynamic ISC, seed-voxel correlation analysis |
| 17 | | Experienced sadness; parasympathetic index (HF-HR) | Retrospective and independent-sample ratings | Intra- and inter-network cohesion index |
| 10 | Stand-up comedy clips (∼45:00) | Humor; viewer's facial expressions; audience laughter | Simultaneous ratings; simultaneous face camera recording; annotations by independent sample | Decoding models |
| 12 | | Suspense | Ratings by independent sample | Parametric GLM |
| 43, 44 | | Experienced sadness; parasympathetic index (HF-HR) | Retrospective ratings; simultaneous ECG recording | Network cohesion index |
| 18 | | Experienced humor | Retrospective ratings; independent sample | Parametric GLM, IS-RSA |
| 203 | | Experienced sadness; experienced fear; experienced anger | Retrospective ratings | Network cohesion index |
| 36 | | Perceived funniness | Prospective and retrospective ratings | Parametric GLM, dynamic ISC |
| 24 | Separation scenes from romantic comedies (∼50:00) | Experienced sadness | Retrospective ratings | Parametric GLM |
| 112 | | Heart rate; experienced valence and arousal | Recording; independent sample | Intra- and inter-network cohesion index |
| 74 | Documentary (5:21) | Experienced anger | Retrospective ratings | Dependency network analysis |
| 24 | | Scene content | Annotations | ICA |
| 15 | | Experienced happiness, surprise, sadness, disgust, fear, anger; portrayed emotions from 22 categories | Independent sample | Voxel-wise encoding models |
| 58 | Comedy clips (15:00) | Experienced humor | Retrospective ratings | ISC, similarity analysis |
| 28 | Movie trailers (∼40:00) | Experienced valence and arousal; neural pattern for valence and arousal | Independent sample (51); IAPS pattern decoding | Correlation between decoding and ratings time series |
| 112 | | Experienced valence; emotional intensity; visual brightness; auditory loudness; faces on screen | Independent sample | Dynamic ISC, fGLS regression |
| 5 | Short video clips (∼8:00:00) | Experience of 34 emotion categories; experience of 14 affective dimensions; semantic features; visual features | Independent sample (N = 9-17); automatic extraction; annotations | Decoding models, voxel-wise encoding models |
| 37 | | Acute fear onset; experienced fear | Annotations; retrospective ratings and independent sample | Dynamic ISC, SBPS |
| 8 | Short video clips (3:00:00) | Experience of 80 emotion categories | Independent sample | Voxel-wise encoding models |
| 14 | | Heart rate; pupil dilation; valence of facial expressions; valence of scenes; use of language; mental states | Recordings; recordings; annotations; annotations; annotations; automatic annotation based on Neurosynth | Hidden Markov Models |
| 35 | | Experienced surprise, intensity and valence; perceived importance; theory of mind; vividness of memory; episodic memory | Retrospective behavioral sampling from an independent sample | ISFC |
| 48 | Two episodes of | Experience of 16 emotion categories; facial behavior | Independent sample | Hidden Markov Models |
| 24 | | Experienced valence and intensity; heart rate variability; sound intensity; word frequency; action words | Independent sample | Parametric GLM |
| 20 | Auditory stories (30:00) | Experienced valence and arousal; heart rate, respiration | Retrospective ratings; recording | ISPS, SBPS |
| 20 | | Heart rate variability; sound energy envelope | Simultaneous recording; automatic extraction | Parametric GLM, DCM, dynamic ISC |
| 2 speakers, 16 listeners | Autobiographical stories (35:00) | Experienced valence and arousal | Retrospective ratings | Dynamic ISPS |
| 14 | Frédéric Chopin's | Musical tempo; experienced valence and arousal | Automatic extraction; proactive and retrospective ratings | Parametric GLM |
| 15 | Classical music (Mendelssohn, Prokofiev, Schubert, each 9:00-14:00) | Experienced valence; experienced arousal; acoustic features | Retrospective ratings; independent sample | Dynamic ISC |
| 36 | | Experienced sadness and enjoyment; loudness, timbre | Retrospective ratings; automatic extraction | Parametric ISPS, SBPS |
| 24 | 60 passages from the Harry Potter book series | Lexical valence and arousal; experienced valence and arousal | Normative ratings; retrospective ratings | Parametric GLM |
| 23 | | Experienced suspense | Simultaneous ratings | Parametric GLM |
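Several of the analysis methods in the table (ISC, dynamic ISC, ISPS) start from inter-subject correlation: how similarly different observers' responses track the shared stimulus. A leave-one-out ISC for a single region can be sketched as below; this is a generic illustration with simulated data, not the implementation used by any of the listed studies, and the noise levels are arbitrary assumptions.

```python
import numpy as np

def isc(data):
    """Leave-one-out inter-subject correlation for one region.
    data: (n_subjects, n_timepoints) array of response time series.
    Returns each subject's correlation with the mean of all others."""
    n = data.shape[0]
    out = np.empty(n)
    for s in range(n):
        others = np.delete(data, s, axis=0).mean(axis=0)
        out[s] = np.corrcoef(data[s], others)[0, 1]
    return out

# Toy check: a shared "stimulus-driven" signal plus subject-specific noise.
rng = np.random.default_rng(1)
shared = rng.standard_normal(300)            # stimulus-locked component
subjects = shared + 0.5 * rng.standard_normal((10, 300))
print(isc(subjects).mean())                  # high when responses are stimulus-locked
```

Dynamic ISC variants apply the same computation within a sliding temporal window, yielding a synchrony time series that can then be related to the emotion feature time series.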
Emotion features in naturalistic paradigms.
| Feature | Extraction method | Studies |
| Auditory | Automatic extraction (MIDI, MIRtoolbox) | |
| Visual: objects | Automatic extraction (deep neural network for object recognition) | |
| Visual: scene content | Annotations | |
| Visual: scene valence | Annotations | |
| Auditory: spoken dialogue | Annotations | |
| Auditory: emotional content | Annotations | |
| Auditory: non-emotional content | Annotations | |
| Semantic: words | Annotations | |
| Semantic: lexical valence | Normative ratings | |
| Semantic: lexical arousal | Normative ratings | |
| Characters: valence of facial expressions | Annotations | |
| Characters: emotions | Annotations | |
| Characters: empathy | Annotations | |
| Audience: laughter | Annotations | |
| Salience | Ratings | |
| Expected humor | Annotations | |
| Startle reflex | Annotations | |
| Heart rate (HF-HR and LF-HR) | Recording (e.g., ECG, pulse oximeter, pulse plethysmogram) | |
| Respiration | Recording (respiratory belt) | |
| Pupil diameter | Recording (eye-tracker) | |
| Mental states | Automatic annotations (Neurosynth) | |
| Valence/arousal patterns | IAPS picture pattern decoding | |
| Facial motion/behavior | Recording (face camera, facial marker motion, and AU analysis) | |
| Valence | Ratings | |
| Arousal/emotional intensity | Ratings | |
| Other affective dimensions | Ratings | |
| Amusement | Ratings, audience laughter | |
| Anger | Ratings | |
| Enjoyment | Ratings | |
| Fear | Ratings | |
| Sadness | Ratings | |
| Suspense | Ratings | |
| Basic emotions | Ratings | |
| Other emotion categories | Ratings | |
| Synchrony of portrayed and experienced emotions | Correlation of ratings | |
FIGURE 2 | Summary of brain regions correlating with emotion features. Dots denote an observed association between the brain region (rows) and the emotion feature (columns). Color panels denote feature categories (from left to right): low-level auditory features (red), object-level features (blue), portrayed emotions (green), emotion elicitation (violet), interoception (orange), behavior (yellow), affective dimensions (brown), emotion categories (pink), and emotional alignment (gray). Directionality of association and more detailed anatomical locations are listed in Supplementary Table 1.
FIGURE 3 | Summary of functional networks correlating with emotion features. Dots denote an observed association between the network (rows) and the emotion feature (columns). Color panels denote feature categories (from left to right): low-level auditory and visual features (red), object-level features (blue), emotion elicitation (violet), interoception (orange), affective dimensions (brown), and emotion categories (pink). Directionality of association and more detailed anatomical locations are listed in Supplementary Table 2.