Lea L. Lott, Franny B. Spengler, Tobias Stächele, Bastian Schiller, Markus Heinrichs.
Abstract
Nonverbal expressions contribute substantially to social interaction by providing information on another person's intentions and feelings. While emotion recognition from dynamic facial expressions has been widely studied, dynamic body expressions and the interplay of emotion recognition from facial and body expressions have attracted less attention, as suitable diagnostic tools are scarce. Here, we provide validation data on a new open-source paradigm enabling the assessment of emotion recognition from both 3D-animated emotional body expressions (Task 1: EmBody) and emotionally corresponding dynamic faces (Task 2: EmFace). Both tasks use visually standardized items depicting three emotional states (angry, happy, neutral), and can be used alone or together. We here demonstrate successful psychometric matching of the EmBody/EmFace items in a sample of 217 healthy subjects with excellent retest reliability and validity (correlations with the Reading the Mind in the Eyes Test (RMET) and the Autism-Spectrum Quotient (AQ), no correlations with intelligence, and confirmed factorial validity). Taken together, the EmBody/EmFace is a novel, effective (< 5 min per task), highly standardized, reliable, and precise tool to sensitively assess and compare emotion recognition from body and face stimuli. The EmBody/EmFace has a wide range of potential applications in affective, cognitive and social neuroscience, and in clinical research studying face- and body-specific emotion recognition in patient populations suffering from social interaction deficits such as autism, schizophrenia, or social anxiety.
Year: 2022 | PMID: 35986068 | PMCID: PMC9391359 | DOI: 10.1038/s41598-022-17866-w
Source DB: PubMed | Journal: Sci Rep | ISSN: 2045-2322 | Impact factor: 4.996
Figure 1. Examples of static frames of dynamic videos. (a) EmBody stimulus of the scale Happy showing a “La Ola” wave motion, (b) EmFace stimulus of the scale Angry, (c) the response window prompting participants to select the emotion they believe was portrayed in the preceding EmBody or EmFace stimulus. Dynamic versions of the respective stimuli can be found online in the supplemental materials of this article.
Figure 2. Detailed view of the 3D humanoid model used to create the final EmBody stimuli (point-light displays). The model consists of a human body (a), an animatable underlying skeleton (b), and the white spheres used to create the resulting point-light displays (c).
Characteristics of our sample in the validation study (mean ± SD).
| | Male (…) | Female (…) | p | Cohen’s d |
|---|---|---|---|---|
| Age | 24.9 ± 2.93 | 24.3 ± 3.02 | .12 | 0.20 |
| AQ | 18.5 ± 6.09 | 16.7 ± 5.32 | .02 | 0.31 |
| BDI-II | 6.5 ± 4.25 | 7.1 ± 5.20 | .35 | 0.13 |
| Verbal IQ | 108.4 ± 10.20 | 106.4 ± 8.49 | .13 | 0.21 |
| Raven | 7.5 ± 1.47 | 7.2 ± 1.60 | .11 | 0.19 |
Group differences were explored using two-tailed independent samples t-tests.
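For orientation, a minimal Python sketch of this kind of comparison (a two-tailed independent-samples t-test plus Cohen's d with a pooled SD, via scipy/numpy) could look as follows; the variable names and simulated data are hypothetical, since the raw per-group scores and group sizes are not reproduced here:

```python
import numpy as np
from scipy import stats

def cohens_d(x: np.ndarray, y: np.ndarray) -> float:
    """Cohen's d for two independent groups, using the pooled SD."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

# Hypothetical AQ scores; the true male/female group sizes are not shown above
rng = np.random.default_rng(0)
aq_male = rng.normal(18.5, 6.09, 100)
aq_female = rng.normal(16.7, 5.32, 117)

t, p = stats.ttest_ind(aq_male, aq_female)  # two-tailed by default
print(f"t = {t:.2f}, p = {p:.3f}, d = {cohens_d(aq_male, aq_female):.2f}")
```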
Participant performance for the EmBody and the EmFace and their scales (mean ± SD).
| | Whole task | Scale angry | Scale happy | Scale neutral |
|---|---|---|---|---|
| EmBody | 31.86 ± 3.72 | 10.49 ± 2.28 | 10.74 ± 2.36 | 10.63 ± 2.51 |
| EmFace | 31.96 ± 3.77 | 10.69 ± 2.38 | 10.74 ± 2.33 | 10.53 ± 2.50 |
Value ranges for all scales are reported in parentheses.
Correlations between scores in the EmBody and the EmFace.
| EmFace \ EmBody | Whole task | Scale angry | Scale happy | Scale neutral |
|---|---|---|---|---|
| Whole task | .20** | .13 | .07 | |
| Scale Angry | .11 | .14* | − .14* | |
| Scale Happy | .14* | .26*** | − .17* | |
| Scale Neutral | .12 | − .16* | − .09 | |
Scores were collected at Session 1. Asterisks indicate statistically significant Spearman rank (r) correlation coefficients: *p < .05, **p < .01, ***p < .001. Correlations for corresponding scales are printed in bold type.
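As a sketch of how such a cross-task matrix of Spearman rank correlations can be computed, assuming per-participant scale scores held in two DataFrames (the column names, score ranges, and toy values below are mine, not the study's):

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical Session-1 scale scores standing in for the real data
rng = np.random.default_rng(0)
n = 217
cols = ["whole", "angry", "happy", "neutral"]
embody = pd.DataFrame(rng.integers(0, 15, (n, len(cols))), columns=cols)
emface = pd.DataFrame(rng.integers(0, 15, (n, len(cols))), columns=cols)

for f in cols:
    for b in cols:
        rho, p = stats.spearmanr(emface[f], embody[b])
        print(f"EmFace {f} x EmBody {b}: r = {rho:+.2f} (p = {p:.3f})")
```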
Retest reliability of the EmBody and the EmFace as intraclass correlation coefficients (ICC) for raw hit rates.
| | Whole task | Angry scale | Happy scale | Neutral scale |
|---|---|---|---|---|
| EmBody | .71 [.63, .78] | .74 [.65, .81] | .78 [.71, .83] | .79 [.73, .84] |
| EmFace | .72 [.63, .78] | .77 [.71, .83] | .69 [.60, .76] | .77 [.70, .83] |
ICCs use the two-way mixed-effects model, type absolute agreement, average measurement. 95% confidence intervals (95% CI) for each ICC are reported in square brackets. For comparison, the ICC computed for RMET scores was .73 [.65, .79].
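A minimal numpy sketch of this ICC variant, i.e. McGraw and Wong's average-measures, absolute-agreement coefficient ICC(A,k), might look as follows; the function name and toy data are illustrative only, and confidence intervals are omitted:

```python
import numpy as np

def icc_a_k(scores: np.ndarray) -> float:
    """ICC(A,k): two-way model, absolute agreement, average of k measurements.

    scores: array of shape (n_subjects, k_sessions) holding raw hit rates.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-session means
    # Two-way ANOVA mean squares
    ms_rows = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((col_means - grand) ** 2).sum() / (k - 1)
    resid = scores - row_means[:, None] - col_means[None, :] + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    # McGraw & Wong (1996), average-measures absolute agreement
    return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

# Toy retest data for 217 subjects at two sessions (hypothetical, not the study's)
rng = np.random.default_rng(0)
session1 = rng.normal(32, 4, 217)
session2 = session1 + rng.normal(0, 2.5, 217)
print(round(icc_a_k(np.column_stack([session1, session2])), 2))
```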
Figure 3. Relationship between sum scores in the EmBody and the RMET. The graph shows a line of best fit and the 95% confidence interval (shaded bands). Dots are semi-transparent so that locations with overlapping data points are darker.
Figure 4. Relationship between the EmFace and (a) RMET sum scores and (b) AQ sum scores, respectively. Each graph shows a line of best fit and the 95% confidence interval (shaded bands). Dots are semi-transparent so that locations with overlapping data points are darker.
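Plots in this style (best-fit line, 95% CI band, semi-transparent points) can be reproduced with seaborn's regplot; the scores below are hypothetical stand-ins for the study data:

```python
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical scores standing in for the study's RMET and EmFace sums
rng = np.random.default_rng(0)
rmet = rng.integers(20, 37, 217).astype(float)
emface = 0.3 * rmet + rng.normal(20, 3, 217)

# regplot draws the best-fit line with a 95% CI band;
# alpha makes overlapping points darker, as in the figures above
ax = sns.regplot(x=rmet, y=emface, ci=95, scatter_kws={"alpha": 0.3})
ax.set(xlabel="RMET sum score", ylabel="EmFace sum score")
plt.show()
```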