Gender Differences in the Recognition of Vocal Emotions
Adi Lausen, Annekathrin Schacht
Abstract
The few studies conducted on gender differences in the recognition of vocal expressions of emotion have yielded conflicting findings, leaving the exact nature of these differences unclear. Several investigators have argued that a comprehensive understanding of gender differences in vocal emotion recognition can only be achieved by replicating these studies while accounting for influential factors such as stimulus type, gender-balanced samples, and the number of encoders, decoders, and emotional categories. This study aimed to account for these factors by investigating whether emotion recognition from vocal expressions differs as a function of both listeners' and speakers' gender. A total of N = 290 participants were randomly and equally allocated to two groups. One group listened to words and pseudo-words, while the other group listened to sentences and affect bursts. Participants were asked to categorize the stimuli with respect to the expressed emotions in a fixed-choice response format. Overall, females were more accurate than males when decoding vocal emotions; however, when testing for specific emotions, these differences were small in magnitude. Speakers' gender had a significant impact on how listeners judged emotions from the voice. The group listening to words and pseudo-words had higher identification rates for emotions spoken by male than by female actors, whereas in the group listening to sentences and affect bursts the identification rates were higher when emotions were uttered by female than by male actors. The mixed pattern of emotion-specific effects, however, indicates that, in the vocal channel, the reliability of emotion judgments is not systematically influenced by speakers' gender and the related stereotypes of emotional expressivity. Together, these results extend previous findings by showing effects of listeners' and speakers' gender on the recognition of vocal emotions. They stress the importance of distinguishing these factors to explain recognition ability in the processing of emotional prosody.
Keywords: affect bursts; emotion recognition accuracy; gender differences; speech-embedded emotions; voice
Year: 2018 PMID: 29922202 PMCID: PMC5996252 DOI: 10.3389/fpsyg.2018.00882
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
Demographic characteristics of the study population.

| Group | Gender | n | Age M (SD) | Other | HS-Dipl | BA | MA |
|---|---|---|---|---|---|---|---|
| Words | Females | 71 | 23.10 (3.31) | 1 | 46 | 21 | 3 |
| Words | Males | 74 | 24.86 (3.80) | 1 | 45 | 20 | 8 |
| Sentences | Females | 72 | 22.72 (3.29) | | 46 | 20 | 6 |
| Sentences | Males | 73 | 24.57 (4.06) | | 44 | 12 | 17 |
HS-Dipl, High school diploma (i.e., Realschulabschluss); BA, Bachelor; MA, Master.
Features of the selected emotion speech databases.

| Database | Speakers | Emotional categories | Stimulus type |
|---|---|---|---|
| Anna | 22 drama students (10 male/12 female) | Anger, affection, contempt, despair, fear, happiness, sensual satisfaction, triumph, neutral | Word |
| Berlin Database of Emotional Speech | 10 untrained actors (5 male/5 female) | Anger, boredom, disgust, fear, happiness, sadness, neutral | Semantically neutral sentences |
| WASEP | 2 actors (1 male/1 female) | Anger, disgust, fear, happiness, sadness, neutral | Pseudo-words, nouns |
| Montreal Affective Voices | 10 actors (5 male/5 female) | Anger, disgust, fear, happiness, pain, pleasure, sadness, surprise, neutral | Affect bursts |
| Paulmann et al. | 2 actors (1 male/1 female) | Anger, disgust, fear, happiness, sadness, surprise, neutral | Pseudo-sentences, lexical sentences |
The term "word databases" is used generically, as some of the selected stimuli come from researchers who developed their own stimulus materials without aiming to establish a database (i.e., the Anna and Paulmann prosodic stimuli).
The nouns from WASEP are classified according to their positive, negative and neutral semantic content.
The Paulmann lexical sentences consist of semantically and prosodically matching stimuli. Unlike all other stimulus types, which were cross-over designed (i.e., each stimulus is spoken in all emotional categories), both the pseudo- and lexical sentences from Paulmann et al. (…).
Figure 1. (A) Group Words (n = 145, 71 females). Bar charts showing performance accuracy by listeners' gender. Error bars represent the standard error. As can be observed, for the majority of emotion categories across databases, females had higher decoding accuracy than males. (B) Group Sentences (n = 145, 72 females). Bar charts showing performance accuracy by listeners' gender. Error bars represent the standard error. As can be observed, for the majority of emotion categories across databases, females had higher decoding accuracy than males.
Group Words: Means, standard deviations, z-scores, p-values, and effect sizes of performance accuracy by listeners' gender.

| Database | Emotion | n | Females M (SD) | n | Males M (SD) | z | p | Effect size |
|---|---|---|---|---|---|---|---|---|
| Anna | | 71 | 0.86 (0.09) | 74 | 0.83 (0.12) | 1.26 | 1.00 | 0.26 |
| | | 71 | 0.57 (0.16) | 74 | 0.54 (0.15) | 1.03 | 1.00 | 0.17 |
| | | 71 | 0.29 (0.11) | 74 | 0.29 (0.11) | −0.23 | 1.00 | 0.01 |
| | | 71 | 0.80 (0.16) | 74 | 0.82 (0.14) | −0.43 | 1.00 | 0.12 |
| | Overall | 71 | 0.63 (0.06) | 74 | 0.62 (0.06) | 0.82 | 1.00 | 0.15 |
| Pseudo-words | | 71 | 0.92 (0.09) | 74 | 0.89 (0.12) | 1.48 | 0.967 | 0.24 |
| | | 71 | 0.65 (0.22) | 74 | 0.56 (0.22) | 2.65 | 0.057 | 0.41 |
| | | 71 | 0.60 (0.19) | 74 | 0.58 (0.19) | 0.82 | 1.00 | 0.15 |
| | | 71 | 0.58 (0.17) | 74 | 0.54 (0.21) | 1.27 | 1.00 | 0.23 |
| | | 71 | 0.74 (0.16) | 74 | 0.74 (0.20) | −0.50 | 1.00 | 0.00 |
| | | 71 | 0.54 (0.29) | 74 | 0.51 (0.28) | 0.65 | 1.00 | 0.11 |
| | Overall | 71 | 0.67 (0.11) | 74 | 0.64 (0.12) | 1.74 | 0.572 | 0.32 |
| Semantic positive nouns | | 71 | 0.84 (0.11) | 74 | 0.81 (0.13) | 1.69 | 0.637 | 0.30 |
| | | 71 | 0.59 (0.22) | 74 | 0.57 (0.23) | 0.67 | 1.00 | 0.11 |
| | | 71 | 0.74 (0.16) | 74 | 0.73 (0.17) | 0.65 | 1.00 | 0.08 |
| | | 71 | 0.67 (0.14) | 74 | 0.64 (0.15) | 1.31 | 1.00 | 0.22 |
| | | 71 | 0.81 (0.15) | 74 | 0.80 (0.15) | 0.68 | 1.00 | 0.11 |
| | | 71 | 0.62 (0.26) | 74 | 0.56 (0.27) | 1.28 | 1.00 | 0.21 |
| | Overall | 71 | 0.71 (0.08) | 74 | 0.68 (0.09) | 1.96 | 0.352 | 0.34 |
| Semantic negative nouns | | 71 | 0.85 (0.12) | 74 | 0.84 (0.11) | 0.90 | 1.00 | 0.08 |
| | | 71 | 0.55 (0.21) | 74 | 0.56 (0.22) | −0.35 | 1.00 | 0.07 |
| | | 71 | 0.77 (0.13) | 74 | 0.74 (0.15) | 1.42 | 1.00 | 0.23 |
| | | 71 | 0.64 (0.15) | 74 | 0.64 (0.19) | −0.60 | 1.00 | 0.01 |
| | | 71 | 0.81 (0.17) | 74 | 0.81 (0.16) | 0.18 | 1.00 | 0.01 |
| | | 71 | 0.63 (0.26) | 74 | 0.59 (0.26) | 1.26 | 1.00 | 0.19 |
| | Overall | 71 | 0.71 (0.07) | 74 | 0.69 (0.09) | 0.62 | 1.00 | 0.15 |
| Semantic neutral nouns | | 71 | 0.89 (0.10) | 74 | 0.88 (0.10) | 1.08 | 1.00 | 0.16 |
| | | 71 | 0.55 (0.21) | 74 | 0.54 (0.21) | 0.24 | 1.00 | 0.03 |
| | | 71 | 0.75 (0.17) | 74 | 0.72 (0.18) | 1.09 | 1.00 | 0.15 |
| | | 71 | 0.68 (0.15) | 74 | 0.67 (0.20) | −0.22 | 1.00 | 0.06 |
| | | 71 | 0.89 (0.13) | 74 | 0.87 (0.14) | 0.43 | 1.00 | 0.12 |
| | | 71 | 0.51 (0.28) | 74 | 0.47 (0.27) | 0.99 | 1.00 | 0.16 |
| | Overall | 71 | 0.71 (0.08) | 74 | 0.69 (0.10) | 1.17 | 1.00 | 0.23 |
| Overall | | 71 | 0.69 (0.07) | 74 | 0.67 (0.08) | 1.39 | 0.163 | 0.31 |
Group comparisons between male and female listeners were made using the Wilcoxon-Mann-Whitney test. Positive z-scores indicate higher decoding accuracy for females; negative z-scores indicate higher accuracy for males. The tests were conducted for each emotion separately, as well as across all emotions and stimulus types. All p-values were Bonferroni-corrected.
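The procedure described in this note can be made concrete with a short sketch. The following Python/SciPy code is a minimal illustration, not the authors' analysis script: the accuracy vectors are hypothetical stand-ins, the number of corrected tests is assumed, and the effect-size column is assumed to be Cohen's d (consistent with the tabled values, e.g., means 0.65 vs. 0.56 with SDs of 0.22 give d ≈ 0.41 for z = 2.65).

```python
# Minimal sketch of one group comparison: Wilcoxon-Mann-Whitney test,
# z-score via the normal approximation, Cohen's d, Bonferroni correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
females = rng.uniform(0.4, 0.9, 71)  # hypothetical per-listener accuracy, 71 female listeners
males = rng.uniform(0.4, 0.9, 74)    # hypothetical per-listener accuracy, 74 male listeners

u, p = stats.mannwhitneyu(females, males, alternative="two-sided")

# z-score from the normal approximation of the Mann-Whitney U statistic
n1, n2 = len(females), len(males)
z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)

# Cohen's d from the group means and the pooled standard deviation
sp = np.sqrt(((n1 - 1) * females.var(ddof=1) + (n2 - 1) * males.var(ddof=1)) / (n1 + n2 - 2))
d = (females.mean() - males.mean()) / sp

p_bonf = min(p * 24, 1.0)  # Bonferroni over the family of tests (count of 24 is assumed)
print(f"z = {z:.2f}, corrected p = {p_bonf:.3f}, d = {d:.2f}")
```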
Group Sentences: Means, standard deviations, z-scores, p-values, and effect sizes of performance accuracy by listeners' gender.

| Database | Emotion | n | Females M (SD) | n | Males M (SD) | z | p | Effect size |
|---|---|---|---|---|---|---|---|---|
| Affect bursts | | 72 | 0.64 (0.13) | 73 | 0.65 (0.16) | −0.81 | 1.00 | 0.07 |
| | | 72 | 0.83 (0.10) | 73 | 0.83 (0.12) | 0.23 | 1.00 | 0.07 |
| | | 72 | 0.69 (0.18) | 73 | 0.68 (0.20) | 0.07 | 1.00 | 0.03 |
| | | 72 | 0.98 (0.05) | 73 | 0.96 (0.11) | 0.80 | 1.00 | 0.27 |
| | | 72 | 0.96 (0.06) | 73 | 0.95 (0.08) | 0.20 | 1.00 | 0.11 |
| | | 72 | 0.96 (0.09) | 73 | 0.96 (0.08) | 0.24 | 1.00 | 0.04 |
| | | 72 | 0.61 (0.20) | 73 | 0.56 (0.23) | 1.26 | 1.00 | 0.23 |
| | Overall | 72 | 0.81 (0.05) | 73 | 0.80 (0.06) | 0.86 | 1.00 | 0.22 |
| Pseudo-sentences | | 72 | 0.86 (0.12) | 73 | 0.81 (0.13) | 1.92 | 0.434 | 0.32 |
| | | 72 | 0.50 (0.17) | 73 | 0.45 (0.17) | 1.97 | 0.393 | 0.28 |
| | | 72 | 0.59 (0.17) | 73 | 0.54 (0.18) | 1.83 | 0.533 | 0.30 |
| | | 72 | 0.63 (0.16) | 73 | 0.57 (0.16) | 1.99 | 0.368 | 0.37 |
| | | 72 | 0.92 (0.09) | 73 | 0.92 (0.09) | −0.26 | 1.00 | 0.02 |
| | | 72 | 0.80 (0.13) | 73 | 0.76 (0.15) | 1.74 | 0.649 | 0.28 |
| | | 72 | 0.39 (0.17) | 73 | 0.41 (0.16) | −0.49 | 1.00 | 0.12 |
| | Overall | 72 | 0.67 (0.06) | 73 | 0.64 (0.06) | 2.87 | 0.033 | 0.49 |
| Lexical sentences | | 72 | 0.96 (0.05) | 73 | 0.96 (0.06) | −1.19 | 1.00 | 0.10 |
| | | 72 | 0.69 (0.20) | 73 | 0.66 (0.16) | 0.96 | 1.00 | 0.11 |
| | | 72 | 0.78 (0.15) | 73 | 0.78 (0.17) | −0.29 | 1.00 | 0.01 |
| | | 72 | 0.75 (0.16) | 73 | 0.75 (0.17) | −0.20 | 1.00 | 0.00 |
| | | 72 | 0.91 (0.09) | 73 | 0.92 (0.08) | −0.73 | 1.00 | 0.17 |
| | | 72 | 0.91 (0.08) | 73 | 0.90 (0.09) | 0.53 | 1.00 | 0.10 |
| | | 72 | 0.34 (0.18) | 73 | 0.40 (0.20) | −2.08 | 0.298 | 0.35 |
| | Overall | 72 | 0.76 (0.07) | 73 | 0.77 (0.06) | −0.69 | 1.00 | 0.13 |
| Neutral sentences | | 72 | 0.97 (0.05) | 73 | 0.98 (0.03) | −1.29 | 1.00 | 0.26 |
| | | 72 | 0.69 (0.14) | 73 | 0.66 (0.17) | 0.48 | 1.00 | 0.14 |
| | | 72 | 0.60 (0.21) | 73 | 0.54 (0.18) | 1.81 | 0.491 | 0.30 |
| | | 72 | 0.78 (0.13) | 73 | 0.74 (0.16) | 1.45 | 1.00 | 0.31 |
| | | 72 | 0.86 (0.13) | 73 | 0.86 (0.13) | 0.35 | 1.00 | 0.04 |
| | | 72 | 0.78 (0.19) | 73 | 0.76 (0.18) | 0.75 | 1.00 | 0.09 |
| | Overall | 72 | 0.78 (0.08) | 73 | 0.76 (0.08) | 1.86 | 0.442 | 0.30 |
| Overall | | 72 | 0.75 (0.05) | 73 | 0.73 (0.05) | 1.60 | 0.110 | 0.31 |
Group comparisons between male and female listeners were made using the Wilcoxon-Mann-Whitney test. Positive z-scores indicate higher decoding accuracy for females; negative z-scores indicate higher accuracy for males. The tests were conducted for each emotion separately, as well as across all emotions and stimulus types. All p-values were Bonferroni-corrected.
Figure 2. (A) Group Words (n = 145, 71 females). Bar charts showing the accuracy of identifying emotions by speakers' gender. Error bars represent the standard error. Asterisks mark the significance level: *p < 0.05, **p < 0.01, ***p < 0.001. As can be observed, for the majority of emotion categories across databases, correct identification rates were higher for emotions uttered in a male than in a female voice. (B) Group Sentences (n = 145, 72 females). Bar charts showing the accuracy of identifying emotions by speakers' gender. Error bars represent the standard error. Asterisks mark the significance level: *p < 0.05, **p < 0.01, ***p < 0.001. As can be observed, for the majority of emotion categories across databases, correct identification rates were higher for emotions uttered in a female than in a male voice.
Group Words: Means, standard deviations, z-scores, p-values, and effect sizes of identification rates by speakers' gender.

| Database | Emotion | n | Female voice M | n | Male voice M | SD(Δ) | z | p | Effect size |
|---|---|---|---|---|---|---|---|---|---|
| Anna | | 71 | 0.51 | 74 | 0.61 | 0.18 | −6.01 | <0.001 | 0.57 |
| | | 71 | 0.17 | 74 | 0.43 | 0.20 | −9.72 | <0.001 | 1.32 |
| | | 71 | 0.88 | 74 | 0.80 | 0.15 | −5.41 | <0.001 | 0.48 |
| | | 71 | 0.81 | 74 | 0.81 | 0.17 | −0.45 | 1.00 | 0.01 |
| | Overall | 71 | 0.59 | 74 | 0.67 | 0.09 | −7.97 | <0.001 | 0.84 |
| Pseudo-words | | 71 | 0.65 | 74 | 0.53 | 0.25 | 5.21 | <0.001 | 0.47 |
| | | 71 | 0.61 | 74 | 0.51 | 0.19 | 5.63 | <0.001 | 0.51 |
| | | 71 | 0.51 | 74 | 0.54 | 0.28 | −1.54 | 0.873 | 0.12 |
| | | 71 | 0.84 | 74 | 0.96 | 0.13 | −9.23 | <0.001 | 0.99 |
| | | 71 | 0.67 | 74 | 0.54 | 0.32 | 4.73 | <0.001 | 0.42 |
| | | 71 | 0.78 | 74 | 0.72 | 0.21 | 3.10 | 0.013 | 0.26 |
| | Overall | 71 | 0.68 | 74 | 0.63 | 0.09 | 4.95 | <0.001 | 0.44 |
| Semantic positive nouns | | 71 | 0.71 | 74 | 0.77 | 0.21 | −3.10 | 0.013 | 0.28 |
| | | 71 | 0.69 | 74 | 0.61 | 0.23 | 3.89 | <0.001 | 0.32 |
| | | 71 | 0.56 | 74 | 0.61 | 0.32 | −2.25 | 0.170 | 0.16 |
| | | 71 | 0.71 | 74 | 0.94 | 0.18 | −10.08 | <0.001 | 1.29 |
| | | 71 | 0.70 | 74 | 0.45 | 0.27 | −8.65 | <0.001 | 0.93 |
| | | 71 | 0.77 | 74 | 0.84 | 0.21 | −4.60 | <0.001 | 0.32 |
| | Overall | 71 | 0.69 | 74 | 0.70 | 0.09 | −1.51 | 1.00 | 0.15 |
| Semantic negative nouns | | 71 | 0.68 | 74 | 0.83 | 0.22 | −6.92 | <0.001 | 0.68 |
| | | 71 | 0.58 | 74 | 0.71 | 0.21 | −6.56 | <0.001 | 0.63 |
| | | 71 | 0.58 | 74 | 0.62 | 0.30 | −1.95 | 0.362 | 0.13 |
| | | 71 | 0.74 | 74 | 0.95 | 0.19 | 9.74 | <0.001 | 1.41 |
| | | 71 | 0.63 | 74 | 0.48 | 0.29 | 5.50 | <0.001 | 0.51 |
| | | 71 | 0.79 | 74 | 0.83 | 0.18 | −3.75 | 0.001 | 0.26 |
| | Overall | 71 | 0.66 | 74 | 0.74 | 0.09 | −7.72 | <0.001 | 0.78 |
| Semantic neutral nouns | | 71 | 0.67 | 74 | 0.81 | 0.26 | −5.80 | <0.001 | 0.55 |
| | | 71 | 0.62 | 74 | 0.72 | 0.24 | −4.82 | <0.001 | 0.43 |
| | | 71 | 0.47 | 74 | 0.51 | 0.30 | −2.14 | 0.227 | 0.16 |
| | | 71 | 0.81 | 74 | 0.96 | 0.16 | −8.82 | <0.001 | 0.89 |
| | | 71 | 0.69 | 74 | 0.40 | 0.29 | 8.53 | <0.001 | 1.00 |
| | | 71 | 0.90 | 74 | 0.86 | 0.15 | 3.82 | <0.001 | 0.27 |
| | Overall | 71 | 0.69 | 74 | 0.71 | 0.10 | −1.74 | 0.578 | 0.18 |
| Overall | | 71 | 0.67 | 74 | 0.69 | 0.07 | −4.51 | <0.001 | 0.40 |
Comparisons between female- and male-spoken portrayals were made using the Wilcoxon rank-sum test. SD(Δ) = standard deviation of the differences between the relative frequencies for female and for male speakers. Positive z-scores indicate that emotional portrayals had higher identification rates when spoken by female actors, whereas negative z-scores denote that performance was higher when the emotions were spoken by male actors. The tests were conducted for each emotion separately, as well as across all emotions and stimulus types. All p-values were Bonferroni-corrected.
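The SD(Δ) column and the tabled effect sizes, which closely match |mean Δ| / SD(Δ), suggest that this comparison operates on per-listener differences between identification rates for female- and male-spoken stimuli. The sketch below assumes that paired structure and uses a signed-rank test on the differences; the rates, the pairing, and the number of corrected tests are all hypothetical illustrations rather than the authors' exact procedure.

```python
# Minimal sketch of the speaker-gender comparison on paired differences:
# per-listener Δ, its standard deviation SD(Δ), a paired effect size,
# and a Wilcoxon signed-rank test with Bonferroni correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_listeners = 145
female_voice = rng.uniform(0.4, 0.9, n_listeners)  # hypothetical rates, female-spoken stimuli
male_voice = np.clip(female_voice - rng.normal(0.05, 0.18, n_listeners), 0.0, 1.0)  # hypothetical

delta = female_voice - male_voice   # positive values favor female-spoken stimuli
sd_delta = delta.std(ddof=1)        # corresponds to the SD(Δ) column
d_z = abs(delta.mean()) / sd_delta  # paired effect size, |mean Δ| / SD(Δ)

res = stats.wilcoxon(delta)         # signed-rank test on the paired differences
p_bonf = min(res.pvalue * 24, 1.0)  # Bonferroni over the family of tests (count assumed)
print(f"SD(Δ) = {sd_delta:.2f}, d = {d_z:.2f}, corrected p = {p_bonf:.3f}")
```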
Group Sentences: Means, standard deviations, z-scores, p-values, and effect sizes of identification rates by speakers' gender.

| Database | Emotion | n | Female voice M | n | Male voice M | SD(Δ) | z | p | Effect size |
|---|---|---|---|---|---|---|---|---|---|
| Affect bursts | | 72 | 0.75 | 73 | 0.62 | 0.28 | 5.25 | <0.001 | 0.48 |
| | | 72 | 0.97 | 73 | 0.96 | 0.19 | −1.09 | 1.00 | 0.12 |
| | | 72 | 0.99 | 73 | 0.93 | 0.12 | 5.41 | <0.001 | 0.48 |
| | | 72 | 0.69 | 73 | 0.61 | 0.24 | 4.52 | <0.001 | 0.32 |
| | | 72 | 0.99 | 73 | 0.68 | 0.22 | 10.28 | <0.001 | 1.51 |
| | | 72 | 0.97 | 73 | 0.94 | 0.13 | 3.24 | 0.009 | 0.26 |
| | | 72 | 0.53 | 73 | 0.63 | 0.26 | −3.96 | <0.001 | 0.37 |
| | Overall | 72 | 0.84 | 73 | 0.77 | 0.09 | 8.10 | <0.001 | 0.80 |
| Pseudo-sentences | | 72 | 0.73 | 73 | 0.40 | 0.27 | 9.47 | <0.001 | 1.18 |
| | | 72 | 0.50 | 73 | 0.70 | 0.23 | −8.10 | <0.001 | 0.86 |
| | | 72 | 0.90 | 73 | 0.66 | 0.25 | 8.50 | <0.001 | 0.93 |
| | | 72 | 0.98 | 73 | 0.69 | 0.25 | 10.21 | <0.001 | 1.21 |
| | | 72 | 0.31 | 73 | 0.63 | 0.22 | −9.76 | <0.001 | 1.42 |
| | | 72 | 0.97 | 73 | 0.86 | 0.12 | 8.60 | <0.001 | 0.89 |
| | | 72 | 0.55 | 73 | 0.25 | 0.25 | 9.43 | <0.001 | 1.19 |
| | Overall | 72 | 0.71 | 73 | 0.60 | 0.09 | 9.49 | <0.001 | 1.22 |
| Lexical sentences | | 72 | 0.79 | 73 | 0.77 | 0.22 | 0.72 | 1.00 | 0.10 |
| | | 72 | 0.73 | 73 | 0.77 | 0.18 | −2.91 | 0.029 | 0.25 |
| | | 72 | 0.97 | 73 | 0.84 | 0.17 | 7.97 | <0.001 | 0.77 |
| | | 72 | 0.99 | 73 | 0.93 | 0.09 | 7.18 | <0.001 | 0.64 |
| | | 72 | 0.78 | 73 | 0.57 | 0.25 | 7.90 | <0.001 | 0.82 |
| | | 72 | 0.94 | 73 | 0.90 | 0.13 | 4.46 | <0.001 | 0.33 |
| | | 72 | 0.48 | 73 | 0.27 | 0.19 | 9.10 | <0.001 | 1.10 |
| | Overall | 72 | 0.80 | 73 | 0.72 | 0.07 | 9.78 | <0.001 | 1.20 |
| Neutral sentences | | 72 | 0.72 | 73 | 0.42 | 0.20 | 10.09 | <0.001 | 1.48 |
| | | 72 | 0.78 | 73 | 0.74 | 0.17 | 2.36 | 0.128 | 0.19 |
| | | 72 | 0.78 | 73 | 0.76 | 0.23 | 0.21 | 1.00 | 0.08 |
| | | 72 | 0.98 | 73 | 0.96 | 0.07 | 3.88 | <0.001 | 0.33 |
| | | 72 | 0.91 | 73 | 0.44 | 0.20 | 10.47 | <0.001 | 2.39 |
| | | 72 | 0.93 | 73 | 0.79 | 0.17 | 8.31 | <0.001 | 0.80 |
| | Overall | 72 | 0.83 | 73 | 0.71 | 0.07 | 10.23 | <0.001 | 1.62 |
| Overall | | 72 | 0.79 | 73 | 0.69 | 0.04 | 10.48 | <0.001 | 2.29 |
Comparisons between female- and male-spoken portrayals were made using the Wilcoxon rank-sum test. SD(Δ) = standard deviation of the differences between the relative frequencies for female and for male speakers. Positive z-scores indicate that emotional portrayals had higher identification rates when spoken by female actors, whereas negative z-scores denote that performance was higher when the emotions were spoken by male actors. The tests were conducted for each emotion separately, as well as across all emotions and stimulus types. All p-values were Bonferroni-corrected.
Figure 3. Group Words (n = 145, 71 females). Bar charts showing performance accuracy by listeners' gender, speakers' gender, and emotion category. Error bars represent the standard error. As can be observed, the second-order interaction is explained by inspection of the average ratings, which show different gender patterns conditional on emotion category. For instance, female listeners had higher recognition accuracy for happiness encoded in the female voice and higher recognition accuracy for the neutral category encoded in the male voice. In contrast, male listeners had lower recognition accuracy for sadness when spoken by a female.
Overview of previous studies on gender differences in the recognition of vocal emotions.

| Bonebright et al. | Short stories | ↑ | ↓ |
| Scherer et al. | Pseudo-sentences | ↑ | ↓ |
| Belin et al. | Affect bursts | ↑ | ↓ |
| Demenescu et al. | Pseudo-words | ↑ | ↓ |
| Toivanen et al. | Lexical and neutral sentences | ↑ | ↓ |
| Hawk et al. | Affect bursts and three-digit numbers | | |
| Raithel and Hielscher-Fastabend | Lexical and neutral sentences | | |
| Bonebright et al. | Short stories | ↑ | ↓ | ↑ | ↓ | ↑ | ↓ | | | | | | | | |
| Fujisawa and Shinohara | Words | ↑ | ↓ | ↑ | ↓ | | | | | | | | | | |
| Lambrecht et al. | Words | ↑ | ↓ | ↑ | ↓ | | | | | | | | | | |
| Demenescu et al. | Pseudo-words | ↑ | ↓ | ↑ | ↓ | | | | | | | | | | |
| Zupan et al. | Neutral sentences and short stories | ↑ | ↓ | ↑ | ↓ | | | | | | | | | | |
| Scherer et al. | Pseudo-sentences | ↑ | ↓ |
| Belin et al. | Affect bursts | ↑ | ↓ |
| Riviello and Esposito | Audio clips | | |
| Lambrecht et al. | Words | | |
| Bonebright et al. | Short stories | ↑ | ↓ | ↑ | ↓ | ↑ | ↓ | | | | | | | | |
| | Pseudo-sentences (German) | ↓ | ↑ | ↑ | ↓ | | | | | | | | | | |
| Pell et al. | Pseudo-sentences (English) | ↑ | ↓ | ↓ | ↑ | ↓ | ↑ | ↑ | ↓ | | | | | | |
| | Pseudo-sentences (Arabic) | ↑ | ↓ | | | | | | | | | | | | |
| Collignon et al. | Affect bursts | ↑ | ↓ | ↑ | ↓ | | | | | | | | | | |
Ha, Happy; An, Angry; Di, Disgust; Fe, Fear; Sa, Sad; Su, Surprise; Ne, Neutral. Shaded cells indicate the absence of an emotion category; n.s., not significant; n.r., not reported; ↑/↓, better/worse performance in decoding vocal emotions by listeners' gender.