Jacob Israelashvili, Lisanne S Pauw, Disa A Sauter, Agneta H Fischer.
Abstract
Individual differences in understanding other people's emotions have typically been studied with recognition tests using prototypical emotional expressions. These tests have been criticized for the use of posed, prototypical displays, raising the question of whether such tests tell us anything about the ability to understand spontaneous, non-prototypical emotional expressions. Here, we employ the Emotional Accuracy Test (EAT), which uses natural emotional expressions and defines recognition as the match between the emotion ratings of a target and a perceiver. In two preregistered studies (Ntotal = 231), we compared performance on the EAT with two well-established tests of emotion recognition ability: the Geneva Emotion Recognition Test (GERT) and the Reading the Mind in the Eyes Test (RMET). We found significant overlap (r > 0.20) between individuals' performance in recognizing spontaneous emotions in naturalistic settings (EAT) and posed (or enacted) non-verbal measures of emotion recognition (GERT, RMET), even when controlling for individual differences in verbal IQ. On average, however, participants reported enjoying the EAT more than the other tasks. Thus, the current research provides a proof-of-concept validation of the EAT as a useful measure for testing the understanding of others' emotions, a crucial feature of emotional intelligence. Further, our findings indicate that emotion recognition tests using prototypical expressions are valid proxies for measuring the understanding of others' emotions in more realistic everyday contexts.
Keywords: emotion recognition; emotional accuracy; empathy; individual differences
Year: 2021 PMID: 34067013 PMCID: PMC8162550 DOI: 10.3390/jintelligence9020025
Source DB: PubMed Journal: J Intell ISSN: 2079-3200
Description of emotion recognition tasks.
| Task | Stimuli | Emotional Cues | Emotional Expression | Basis of Accuracy | Choice Options |
|---|---|---|---|---|---|
| RMET | Static pictures | Eyes (nonverbal) | Posed | Prototypical expression | Four (select one) |
| GERT | Dynamic videos | Voice, body and face (nonverbal) | Reenacted | Prototypical expression | Fourteen (select one) |
| EAT | Dynamic videos | Words, voice, facial and body movements (verbal and nonverbal) | Spontaneous | Targets’ emotions | Ten (select all applicable, rate each using 0–6 scale) |
Note. EAT, Emotional Accuracy Test; GERT, Geneva Emotion Recognition Test; RMET, Reading the Mind in the Eyes Test. An additional feature relevant to the stimuli is that the pictures of the RMET are all black and white, while the videos in the GERT and the EAT are all in color. An additional feature relevant to the choice options is that in the RMET, every stimulus face is paired with a different set of four choice options, while in the GERT and the EAT, all stimuli use the same fourteen (GERT) or ten (EAT) choice options.
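The EAT scores accuracy as the match between the target's own emotion ratings and the perceiver's ratings across the ten 0–6 scales. One common way to operationalize such a match is a profile (Pearson) correlation across the emotion scales; the sketch below uses that operationalization as an illustrative assumption (the function and variable names are hypothetical, not the authors' exact scoring procedure):

```python
def eat_accuracy(target, perceiver):
    """Profile similarity between a target's self-ratings and a perceiver's
    ratings of that target (two lists of 0-6 ratings over the same emotion
    scales), scored here as a Pearson correlation. Illustrative assumption,
    not the authors' exact metric."""
    n = len(target)
    if n != len(perceiver):
        raise ValueError("rating profiles must cover the same emotion scales")
    mt = sum(target) / n
    mp = sum(perceiver) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(target, perceiver))
    var_t = sum((t - mt) ** 2 for t in target)
    var_p = sum((p - mp) ** 2 for p in perceiver)
    if var_t == 0 or var_p == 0:
        return float("nan")  # a flat profile leaves the correlation undefined
    return cov / (var_t * var_p) ** 0.5

# A perceiver who tracks the target's emotion profile closely scores near 1.
target = [5, 1, 0, 4, 0, 2, 0, 0, 3, 0]     # e.g., mostly anger plus sadness
perceiver = [4, 1, 1, 5, 0, 2, 0, 1, 3, 0]
print(round(eat_accuracy(target, perceiver), 2))  # -> 0.94
```

A distance-based score (e.g., mean absolute rating difference) would be an equally defensible operationalization of the "match"; the correlation version rewards getting the shape of the emotion profile right rather than its absolute level.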
Pearson and Spearman rho correlation coefficients for the associations of performance as measured across pairs of tasks, in Studies 1 and 2.
Pearson correlations

| Study 1 | EAT | GERT | RMET | Study 2 | EAT | GERT | RMET |
|---|---|---|---|---|---|---|---|
| GERT | 0.59 *** | | | GERT | 0.55 *** | | |
| RMET | 0.60 *** | 0.65 *** | | RMET | 0.55 *** | 0.65 *** | |
| Verbal IQ | 0.31 *** | 0.37 *** | 0.45 *** | Verbal IQ | 0.39 *** | 0.34 *** | 0.45 *** |

Spearman rho correlations

| Study 1 | EAT | GERT | RMET | Study 2 | EAT | GERT | RMET |
|---|---|---|---|---|---|---|---|
| GERT | 0.25 ** | | | GERT | 0.22 ** | | |
| RMET | 0.26 ** | 0.34 *** | | RMET | 0.25 ** | 0.25 ** | |
| Verbal IQ | 0.15 | 0.33 *** | 0.29 *** | Verbal IQ | 0.04 | 0.24 ** | 0.25 ** |
Note. All patterns of significant positive correlations between the three tasks remained the same when variance explained by Verbal IQ was partialled out (see Table S1 in the Supplemental Materials). EAT—Emotional Accuracy Test; GERT—Geneva Emotion Recognition Test; RMET—Reading the Mind in the Eyes Test. * p < 0.05; ** p < 0.01; *** p < 0.001.
Figure 1. The relationship between accurate emotion recognition on the EAT and the GERT (left) and the RMET (right), in Study 1 (upper panel) and Study 2 (lower panel). Note. Grey denotes 95% confidence intervals.
Mean (and SD) enjoyment ratings reported by participants for completing the EAT, GERT, and RMET tasks, in Study 1 (USA) and Study 2 (UK).
| | EAT | GERT | RMET |
|---|---|---|---|
| Study 1 (USA) | 4.77 a (1.21) | 3.85 b (1.85) | 4.16 b (1.59) |
| Study 2 (UK) | 4.09 a (1.41) | 4.07 a (1.53) | 3.78 b (1.56) |
Note. EAT—Emotional Accuracy Test; GERT—Geneva Emotion Recognition Test; RMET—Reading the Mind in the Eyes Test. Within each study, numbers that do not share a superscript differ significantly at p < 0.05, with Bonferroni correction.
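With three tasks there are three pairwise enjoyment comparisons, so a Bonferroni correction tests each comparison at alpha/3 ≈ 0.0167 rather than 0.05. A minimal sketch of that adjustment (the p-values in the example are made up for illustration, not taken from the studies):

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Flag which p-values stay significant after dividing the family-wise
    alpha by the number of comparisons (the Bonferroni correction)."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# Three pairwise task comparisons (EAT vs GERT, EAT vs RMET, GERT vs RMET);
# illustrative p-values only. With m = 3, the threshold is 0.05 / 3 ~= 0.0167.
print(bonferroni_significant([0.001, 0.030, 0.200]))  # -> [True, False, False]
```

Note that a raw p of 0.030 would pass an uncorrected 0.05 threshold but fails the Bonferroni-adjusted one, which is exactly the kind of difference the correction is meant to guard against.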