Rebecca A Smith, Emily S Cross.
Abstract
The ability to exchange affective cues with others plays a key role in creating and maintaining meaningful social relationships. We express our emotions through a variety of socially salient cues, including facial expressions, the voice, and body movement. While significant advances have been made in our understanding of verbal and facial communication, our understanding of the role played by human body movement in social interactions remains incomplete. To this end, here we describe the creation and validation of a new set of emotionally expressive whole-body dance movement stimuli, the Motion Capture Norming (McNorm) Library, designed to address a number of limitations associated with previous movement stimuli. This library comprises a series of point-light representations of a dancer's movements, performed to communicate neutrality, happiness, sadness, anger, and fear to observers. Based on results from two validation experiments, participants could reliably discriminate the intended emotion expressed in the clips in this stimulus set, with accuracy rates up to 60% (chance = 20%). We further explored the impact of dance experience and trait empathy on emotion recognition and found that neither significantly affected emotion discrimination. As all materials for presenting and analysing this movement library are openly available, we hope this resource will aid other researchers in further exploring affective communication expressed through human bodily movement.
Year: 2022 PMID: 35385989 PMCID: PMC8985749 DOI: 10.1007/s00426-022-01669-9
Source DB: PubMed Journal: Psychol Res ISSN: 0340-0727
Demographics information for participants in McNorm Experiment 1 (N = 50)

| Characteristic | Level | Value |
|---|---|---|
| Age | Mean (SD) | 29.06 (11.8) |
| | Range | 18–63 |
| Gender | Female | 32 |
| | Male | 15 |
| | Transgender female | 1 |
| | Transgender male | 2 |
| | Gender variant/non-conforming | NA |
| Dance level (self-reported) | Non-dancer | 13 |
| | Beginner | 16 |
| | Intermediate | 15 |
| | Advanced | 4 |
| | Professional | 2 |
Fig. 1 Placement of retroreflective markers on the body during the recording phase is denoted in black. The Plug-In Gait template in Vicon Nexus was used to convert the original 39 markers to point-light display figures (the final markers are denoted in red)
A summary of recognition rates for each emotion category (means and standard deviations), and the results of inferential tests performed on the data to determine whether they were recognised at greater than chance level
| Emotion | Normally distributed? | Average recognition rate (%) | Greater than chance? | Test | Results |
|---|---|---|---|---|---|
| Neutral | Yes | 28.13% (SD = 13.21) | Yes | One-sample t test | |
| Happy | No | 29.73% (SD = 10.1) | Yes | One-sample Wilcoxon signed rank test | Z = 713, p < 0.001 |
| Sad | Yes | 32.67% (SD = 10.28) | Yes | One-sample t test | |
| Angry | No | 26% (SD = 10.78) | Yes | One-sample Wilcoxon signed rank test | Z = 1023, p < 0.001 |
| Fearful | No | 17.29% (SD = 11.18) | No | One-sample Wilcoxon signed rank test | Z = 465, p = 0.095 |
Fig. 2 Violin plots depicting the distribution of recognition rates for each emotion category. Values presented in the centre of each of the violins represent the mean recognition rate, and the red dotted line indicates chance level of recognition. All emotions, with the exception of fear, were recognised at greater than chance level
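The chance-level comparisons reported above (a one-sample t test when per-participant recognition rates are normally distributed, a one-sample Wilcoxon signed rank test otherwise, each against the 20% chance level) can be sketched as follows. The data here are simulated for illustration only, not the study's ratings:

```python
# Sketch of the above-chance tests, assuming per-participant recognition
# rates as input. Simulated data; not the McNorm study's raw ratings.
import numpy as np
from scipy.stats import ttest_1samp, wilcoxon, shapiro

CHANCE = 20.0  # five emotion categories -> 20% chance level

rng = np.random.default_rng(0)
rates = rng.normal(loc=30, scale=10, size=50)  # hypothetical sample, N = 50

# A normality check decides which one-sample test to apply
_, p_norm = shapiro(rates)
if p_norm > 0.05:
    stat, p = ttest_1samp(rates, CHANCE, alternative="greater")
else:
    # Wilcoxon signed rank test on differences from chance
    stat, p = wilcoxon(rates - CHANCE, alternative="greater")

print(f"p = {p:.4g}; above chance: {p < 0.05}")
```

Either branch tests the same one-sided hypothesis (mean/median recognition rate above 20%); only the distributional assumption differs.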
Correct classification and misclassification rates for each emotion category in Experiment 1
| Intended \ Perceived | Neutral | Happy | Sad | Angry | Fearful |
|---|---|---|---|---|---|
| Neutral | **28.13%** | 26.34% | 19.49% | 11.56% | 14.25% |
| Happy | 21.10% | **29.73%** | 21.1% | 17.47% | 10.35% |
| Sad | 19.41% | 20.49% | **32.67%** | 14.56% | 12.53% |
| Angry | 23.45% | 27.63% | 10.79% | **26%** | 11.65% |
| Fearful | 23.21% | 21.49% | 23.21% | 14.76% | **17.29%** |
Bold values represent the correct classification rates
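A table like the one above can be derived from raw response counts by row-normalising a confusion matrix (rows = intended emotion, columns = perceived emotion). The counts below are hypothetical, chosen only to show the computation:

```python
# Sketch: converting a raw count confusion matrix into the percentage
# classification table above. Counts are made-up placeholders.
import numpy as np

emotions = ["neutral", "happy", "sad", "angry", "fearful"]
counts = np.array([
    [105,  98,  73, 43, 53],   # intended neutral (hypothetical)
    [ 78, 110,  78, 65, 38],   # intended happy
    [ 72,  76, 121, 54, 46],   # intended sad
    [ 87, 102,  40, 96, 43],   # intended angry
    [ 86,  80,  86, 55, 64],   # intended fearful
])

# Row-normalise so each intended-emotion row sums to 100%
percentages = 100.0 * counts / counts.sum(axis=1, keepdims=True)

# The diagonal holds the correct classification rate per emotion
for emo, rate in zip(emotions, np.diag(percentages)):
    print(f"{emo}: {rate:.2f}% correct")
```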
Summary of hierarchical multiple regression analysis for factors predicting recognition accuracy in McNorm Experiment 1
| Model | Predictor | B | t | SE |
|---|---|---|---|---|
| Model 1 | Intercept | 27.366 | | |
| | Physical experience | −1.162 n.s. | −0.606 | 1.916 |
| Model 2 | Intercept | 26.605 | | |
| | Physical experience | −2.357 n.s. | −1.006 | 2.343 |
| | Observational experience | 2.385 n.s. | 0.891 | 2.678 |
| Model 3 | Intercept | 26.642 | | |
| | Physical experience | −1.965 n.s. | −0.577 | 3.406 |
| | Observational experience | 2.416 n.s. | 0.891 | 2.712 |
| | Number of styles | −0.438 n.s. | −0.160 | 2.734 |
N = 50; n.s. p > 0.05, *p < 0.05, **p < 0.01, ***p < 0.001.
Model 1: percentage recognition ~ physical dance experience factor
Model 2: percentage recognition ~ physical dance experience factor + observational dance experience factor
Model 3: percentage recognition ~ physical dance experience factor + observational dance experience factor + number of dance styles a participant has experience with
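The three nested models above can be sketched as a hierarchical regression in which predictors are added stepwise and model fit (R²) is compared at each step. The data and effect sizes below are simulated placeholders, not the study's:

```python
# Sketch of a hierarchical (stepwise nested) regression using ordinary
# least squares. All data here are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 50
physical = rng.normal(size=n)        # physical dance experience factor
observational = rng.normal(size=n)   # observational dance experience factor
n_styles = rng.integers(0, 6, n)     # number of dance styles
# Hypothetical outcome: recognition accuracy (%) with a weak physical effect
accuracy = 27 + 0.5 * physical + rng.normal(scale=10, size=n)

def r_squared(predictors, y):
    """Fit OLS with an intercept via least squares and return R^2."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_1 = r_squared([physical], accuracy)                           # Model 1
r2_2 = r_squared([physical, observational], accuracy)            # Model 2
r2_3 = r_squared([physical, observational, n_styles], accuracy)  # Model 3
print(f"R2 by model: {r2_1:.3f} -> {r2_2:.3f} -> {r2_3:.3f}")
```

Because the models are nested, R² can only increase (or stay equal) at each step; the question the hierarchical analysis asks is whether each increment is statistically meaningful.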
Average intensity ratings for clips in each emotion category in Experiment 1
| | Neutral | Happy | Sad | Angry | Fearful |
|---|---|---|---|---|---|
| Mean intensity score | 54.36 (± 12.32) | 57.44 (± 11.72) | 57.54 (± 11.1) | 59.02 (± 11.33) | 55.18 (± 10.6) |
Average certainty ratings for clips in each emotion category in Experiment 1
| | Neutral | Happy | Sad | Angry | Fearful |
|---|---|---|---|---|---|
| Mean certainty score | 52.97 (± 14.31) | 55.9 (± 15.79) | 53.09 (± 13.85) | 55.89 (± 14.94) | 52.76 (± 13.74) |
Standard deviations are listed within the parentheses after each average rating
Fig. 3 Violin plots depicting the distribution of recognition rates for each emotion category for only the 20 clips from the full McNorm library identified for future study. Values presented in the centre of each of the violins represent the mean recognition rate, and the red dotted line indicates chance level of recognition. All emotion categories were recognised at greater than chance level for this subset of clips
A summary of descriptive and inferential statistics for the subset of 20 clips from the full McNorm library identified for future study
| Emotion | Normally distributed? | Average recognition rate (%) | Greater than chance? | Test | Results |
|---|---|---|---|---|---|
| Neutral | No | 57.5% (SD = | Yes | One-sample Wilcoxon signed rank test | Z = 1227, p < 0.001 |
| Happy | No | 53.5% (SD = 22.02) | Yes | One-sample Wilcoxon signed rank test | Z = 1275, p < 0.001 |
| Sad | No | 60% (SD = 25.75) | Yes | One-sample Wilcoxon signed rank test | Z = 1256, p < 0.001 |
| Angry | No | 52% (SD = 23.6) | Yes | One-sample Wilcoxon signed rank test | Z = 1248, p < 0.001 |
| Fearful | No | 38% (SD = 29.55) | Yes | One-sample Wilcoxon signed rank test | Z = 1054, p < 0.001 |
Correct classification and misclassification rates for each emotion category for the subset of 20 clips from the McNorm library identified for further study
| Intended \ Perceived | Neutral | Happy | Sad | Angry | Fearful |
|---|---|---|---|---|---|
| Neutral | **57.5%** | 16.08% | 11.56% | 4.02% | 10.55% |
| Happy | 7.07% | **53.5%** | 8.59% | 27.27% | 3.03% |
| Sad | 19.29% | 5.08% | **60%** | 3.55% | 11.17% |
| Angry | 5.53% | 31.66% | 4.02% | **52%** | 6.53% |
| Fearful | 6.5% | 14% | 23% | 18.5% | **38%** |
Bold values represent the correct classification rates
Average intensity ratings for each emotion category for the subset of 20 clips from the full McNorm movement library
| | Neutral | Happy | Sad | Angry | Fearful |
|---|---|---|---|---|---|
| Mean intensity score | 46.94 (± 18.02) | 66.08 (± 15.61) | 58.81 (± 14.59) | 72 (± 12.78) | 58.01 (± 14.29) |
Average certainty ratings for each emotion category for the subset of 20 clips from the full McNorm movement library
| | Neutral | Happy | Sad | Angry | Fearful |
|---|---|---|---|---|---|
| Mean certainty score | 54.19 (± 18.68) | 60.96 (± 16.85) | 54.22 (± 16.66) | 60.44 (± 17.78) | 53 (± 17.67) |
Demographics information for participants who took part in the second McNorm Experiment (N = 77)
| Characteristic | Level | Value |
|---|---|---|
| Age | Mean (SD) | 31.96 (13.09) |
| | Range | 20–66 |
| Gender | Female | 66 |
| | Male | 10 |
| | Transgender female | NA |
| | Transgender male | NA |
| | Gender variant/non-conforming | NA |
| | Missing | 1 |
| Dance level (self-reported) | Non-dancer | 24 |
| | Beginner | 29 |
| | Intermediate | 18 |
| | Advanced | 5 |
| | Professional | NA |
A summary of recognition rates for each emotion category (means and standard deviations), and the results of inferential tests performed on the data to determine whether they were recognised at greater than chance level for the second experiment
| Emotion | Normally distributed? | Average recognition rate (%) | Greater than chance? | Test | Results |
|---|---|---|---|---|---|
| Neutral | No | 58.49% (SD = 30.99) | Yes | One-sample Wilcoxon signed rank test | |
| Happy | No | 47.17% (SD = 28.02) | Yes | One-sample Wilcoxon signed rank test | |
| Sad | No | 58.02% (SD = 22.88) | Yes | One-sample Wilcoxon signed rank test | |
| Angry | No | 41.98% (SD = 22.88) | Yes | One-sample Wilcoxon signed rank test | |
| Fearful | No | 39.15% (SD = 29.22) | Yes | One-sample Wilcoxon signed rank test | |
Fig. 4 Violin plots depicting the distribution of recognition rates for each emotion category in the second experiment. Values presented in the centre of each of the violins represent the mean recognition rate, and the red dotted line indicates chance level of recognition. All emotion categories were recognised at greater than chance level
Correct classification and misclassification rates for each emotion category in Experiment 2
| Intended \ Perceived | Neutral | Happy | Sad | Angry | Fearful |
|---|---|---|---|---|---|
| Neutral | **58.49%** | 12.74% | 11.79% | 4.25% | 12.74% |
| Happy | 15.57% | **47.17%** | 8.96% | 24.06% | 4.25% |
| Sad | 13.68% | 7.55% | **58.02%** | 1.42% | 19.34% |
| Angry | 8.02% | 42.45% | 2.83% | **41.98%** | 4.72% |
| Fearful | 15.57% | 8.96% | 25.47% | 10.85% | **39.15%** |
Bold values represent the correct classification rates
Summary of hierarchical multiple regression analysis for factors predicting recognition accuracy in the second experiment
| Model | Predictor | B | t | SE |
|---|---|---|---|---|
| Model 1 | Intercept | 49.340 | | |
| | Physical experience | −1.154 n.s. | −0.176 | 6.550 |
| Model 2 | Intercept | 54.897 | | |
| | Physical experience | 8.434 n.s. | 1.217 | 6.928 |
| | Observational experience | −24.626** | −2.935 | 8.391 |
| Model 3 | Intercept | 54.974 | | |
| | Physical experience | 10.214 n.s. | 1.183 | 8.633 |
| | Observational experience | −23.909** | −2.746 | 8.708 |
| | Number of styles | −2.331 n.s. | −0.351 | 6.636 |
N = 77; n.s. p > 0.05, *p < 0.05, **p < 0.01, ***p < 0.001
Model 1: percentage recognition ~ physical dance experience factor
Model 2: percentage recognition ~ physical dance experience factor + observational dance experience factor
Model 3: percentage recognition ~ physical dance experience factor + observational dance experience factor + number of dance styles a participant has experience with
Average intensity ratings for clips in each emotion category in Experiment 2
| | Neutral | Happy | Sad | Angry | Fearful |
|---|---|---|---|---|---|
| Mean intensity score | 46.68 (± 16.65) | 61.48 (± 14.06) | 53.48 (± 15.16) | 63.26 (± 15.13) | 54.19 (± 14.04) |
Average certainty ratings for clips in each emotion category in Experiment 2
| | Neutral | Happy | Sad | Angry | Fearful |
|---|---|---|---|---|---|
| Mean certainty score | 54.67 (± 14.18) | 55.34 (± 17.2) | 54.69 (± 15.67) | 53.11 (± 19.17) | 50.55 (± 16.99) |