| Literature DB >> 34896586 |
Heini Saarimäki, Enrico Glerean, Dmitry Smirnov, Henri Mynttinen, Iiro P Jääskeläinen, Mikko Sams, Lauri Nummenmaa.
Abstract
Neurophysiological and psychological models posit that emotions depend on connections across widespread corticolimbic circuits. While previous studies using pattern recognition on neuroimaging data have shown differences between various discrete emotions in brain activity patterns, less is known about the differences in functional connectivity. Thus, we employed multivariate pattern analysis on functional magnetic resonance imaging (fMRI) data (i) to develop a pipeline for applying pattern recognition to functional connectivity data, and (ii) to test whether connectivity patterns differ across emotion categories. Six emotions (anger, fear, disgust, happiness, sadness, and surprise) and a neutral state were induced in 16 participants using one-minute-long emotional narratives with natural prosody while brain activity was measured with fMRI. We computed emotion-wise connectivity matrices both for whole-brain connections and for 10 previously defined functionally connected brain subnetworks, and trained an across-participant classifier to categorize the emotional states based on whole-brain data and for each subnetwork separately. The whole-brain classifier performed above chance level for all emotions except sadness, suggesting that different emotions are characterized by differences in large-scale connectivity patterns. When focusing on the connectivity within the 10 subnetworks, classification was successful within the default mode system for all emotions. We thus show preliminary evidence for consistently different sustained functional connectivity patterns for instances of emotion categories, particularly within the default mode system.
Keywords: Emotion; Functional connectivity; MVPA; Pattern classification; fMRI
Year: 2021 PMID: 34896586 PMCID: PMC8803541 DOI: 10.1016/j.neuroimage.2021.118800
Source DB: PubMed Journal: Neuroimage ISSN: 1053-8119 Impact factor: 6.556
Fig. 1(a) Trial structure. The highlighted time period (HRF-corrected) was used for calculating the connectivity matrices. (b) Functional brain systems analyzed in the present study, based on Power et al. (2011). Dots denote network nodes and colors denote subnetworks. (c) Connectivity matrices were calculated using Pearson correlation between each pair of 264 node time series for each subject and for each 60-s narrative. (d) The connectivity matrices were fed as input for a linear support vector classifier. (e) The classifier performance was evaluated by calculating the accuracy (percentage of correct classifier guesses per target category) and the confusion matrix (classifier guesses per category).
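The pipeline in Fig. 1(c)-(e) can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' code: toy data sizes replace the actual 16 subjects and 264 Power et al. nodes, random time series stand in for fMRI data, and leave-one-subject-out cross-validation is assumed as the across-participant scheme.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
# Toy sizes for illustration (paper: 16 subjects, 264 nodes, 60-s narratives)
n_subjects, n_categories, n_nodes, n_timepoints = 4, 7, 20, 60

X, y, groups = [], [], []
for subj in range(n_subjects):
    for category in range(n_categories):  # 6 emotions + neutral
        ts = rng.standard_normal((n_nodes, n_timepoints))  # node time series, one narrative
        conn = np.corrcoef(ts)                             # Pearson connectivity matrix (c)
        iu = np.triu_indices(n_nodes, k=1)                 # vectorize the upper triangle
        X.append(conn[iu])
        y.append(category)
        groups.append(subj)
X, y, groups = np.array(X), np.array(y), np.array(groups)

# (d) Linear support vector classifier, across-participant cross-validation
clf = LinearSVC(max_iter=10000)
pred = cross_val_predict(clf, X, y, groups=groups, cv=LeaveOneGroupOut())

# (e) Evaluation: overall accuracy and confusion matrix (guesses per category)
acc = (pred == y).mean()
cm = confusion_matrix(y, pred)
```

With real data, the row-normalized diagonal of `cm` gives the per-category accuracies shown in Fig. 2(a).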
Fig. 2(a) Emotion-wise classification accuracies for the full-network classification. Dashed line represents naïve chance level (16.6%). Asterisks denote significance relative to chance level (*p < 0.01, ***p < 0.0001). Thick black line represents median of classification accuracies. Boxes show the 25th to 75th percentiles of classification accuracies and values outside this range are plotted as circles. Whiskers extend from box to the largest value no further than 1.5 * inter-quartile range from the edge of the box. (b) Classifier confusions from full network classification. Color code denotes average classifier accuracy over the cross-validation runs, cells shown in white have guesses below naïve chance level.
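Significance "relative to chance level" in captions like this is commonly assessed by permuting the category labels to build a null distribution of accuracies; the sketch below shows that generic approach, not necessarily the authors' exact procedure.

```python
import numpy as np

def permutation_pvalue(y_true, y_pred, n_perm=1000, seed=0):
    """p-value for the observed accuracy against a label-shuffled null.

    Generic permutation test: shuffle true labels, recompute accuracy,
    and report the proportion of null accuracies >= the observed one
    (with the standard +1 correction to avoid p = 0).
    """
    rng = np.random.default_rng(seed)
    observed = np.mean(y_true == y_pred)
    null = np.array([np.mean(rng.permutation(y_true) == y_pred)
                     for _ in range(n_perm)])
    return (np.sum(null >= observed) + 1) / (n_perm + 1)
```

For a well-performing classifier the observed accuracy sits far in the tail of the null distribution, yielding a small p-value; correction for multiple comparisons (as in Fig. 3) is then applied across tests.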
Fig. 3(a) Classification accuracies for connectivity within and between each ROI. Color code denotes classifier accuracy; cells shown in white have guesses below naïve chance level (16.6%). After correcting for multiple comparisons, only the accuracy for within default mode network connections remained significant. (b) Classifier confusions for subnetwork classification.
Fig. 4(a) Emotion-wise classification accuracies for connections within the default mode system. Dashed line represents naïve chance level (16.6%). Asterisks denote significance relative to chance level (**p < 0.001, ***p < 0.0001). Thick line represents median of classification accuracies. Boxes show the 25th to 75th percentiles of classification accuracies and values outside this range are plotted as dots. Whiskers extend from box to the largest value no further than 1.5 * inter-quartile range from the edge of the box. (b) Classification accuracies and (c) subnetwork confusion matrices for DMN subnetwork classification. Color code denotes classifier accuracy; cells shown in white have guesses below naïve chance level.