The Musical Emotion Discrimination Task: A New Measure for Assessing the Ability to Discriminate Emotions in Music
Chloe MacGregor, Daniel Müllensiefen.
Abstract
Previous research has shown that levels of musical training and emotional engagement with music are associated with an individual's ability to decode the intended emotional expression from a music performance. The present study aimed to assess traits and abilities that might influence emotion recognition, and to create a new test of emotion discrimination ability. The first experiment investigated musical features that influenced the difficulty of the stimulus items (length, type of melody, instrument, target/comparison emotion) to inform the creation of a short test of emotion discrimination. The second experiment assessed the contribution of individual differences measures of emotional and musical abilities as well as psychoacoustic abilities. Finally, the third experiment established the validity of the new test against other measures currently used to assess similar abilities. Performance on the Musical Emotion Discrimination Task (MEDT) was significantly associated with high levels of self-reported emotional engagement with music as well as with performance on a facial emotion recognition task. Results are discussed in the context of a process model for emotion discrimination in music and psychometric properties of the MEDT are provided. The MEDT is freely available for research use.
Keywords: emotion perception; emotional intelligence; music perception; music performance; musical training
Year: 2019 PMID: 31551857 PMCID: PMC6736617 DOI: 10.3389/fpsyg.2019.01955
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
FIGURE 1. A diagram to illustrate the cognitive model proposed to underlie emotion recognition in music as relevant to the testing paradigm of the MEDT. The rectangles reflect covert processes that cannot easily be directly measured or controlled, while the parallelograms represent processes that can be manipulated and studied.
FIGURE 2. A diagram displaying the contribution of individual differences (in circles) at different stages of a cognitive model proposed to underlie emotion recognition in music. The diamond shapes highlighted in purple represent cognitive mechanisms thought to underlie the operation of particular processes.
FIGURE 3. Notation of melody B (1).
FIGURE 4. Notation of melody C (2).
Stimulus properties of the melodies from Akkermans et al. (2018) employed in the current study (instrument codes: Pi = piano, Vi = violin, Vx = voice; emotion codes: A = anger, H = happiness, S = sadness, T = tenderness).
| Instrument | Emotion | Melody | Tempo (bpm) | Length (s) |
| Pi | A | B | 205 | 16 |
| Pi | A | C | 132 | 19 |
| Pi | H | B | 215 | 16 |
| Pi | H | C | 117 | 19 |
| Pi | S | B | 93 | 32 |
| Pi | S | C | 54 | 39 |
| Pi | T | B | 96 | 32 |
| Pi | T | C | 51 | 41 |
| Vi | A | B | 343 | 9 |
| Vi | A | C | 191 | 12 |
| Vi | H | B | 292 | 11 |
| Vi | H | C | 167 | 14 |
| Vi | S | B | 84 | 34 |
| Vi | S | C | 88 | 25 |
| Vi | T | B | 113 | 26 |
| Vi | T | C | 121 | 17 |
| Vx | A | B | 225 | 15 |
| Vx | A | C | 117 | 20 |
| Vx | H | B | 179 | 18 |
| Vx | H | C | 125 | 18 |
| Vx | S | B | 109 | 38 |
| Vx | S | C | 64 | 40 |
| Vx | T | B | 97 | 36 |
| Vx | T | C | 65 | 34 |
Regression model with MEDT scores as dependent variable (N = 99).
| Predictor | B | SE | β | p |
| Constant | 24.04 | 12.82 | | 0.06 |
| Emotional intelligence (EI) | 0.44 | 0.35 | 0.17 | 0.21 |
| Emotional contagion (EC) | 0.02 | 0.02 | 0.1 | 0.36 |
| Musical training (MT) | 0.013 | 0.02 | 0.07 | 0.53 |
| Emotional music skills (EMS) | 0.06 | 1.38 | 0.17 | 0.14 |
| Pitch discrimination | 0 | 0.04 | –0.03 | 0.79 |
| Duration discrimination | –0.02 | 0.01 | –0.12 | 0.23 |
| Depression scores | 0 | 0.05 | 0 | 0.97 |
Regression model with MEDT sum scores as dependent variable using backward elimination of predictor variables.
| Predictor | B | SE | β | p |
| Constant | 16.28 | 1.49 | | <0.001 |
| Emotional intelligence (EI) | 0.5 | 0.26 | 0.19 | 0.06 |
| Emotional music skills (EMS) | 0.09 | 0.04 | 0.23 | 0.03 |
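For readers who want to reproduce this style of analysis, below is a minimal sketch of backward elimination for a multiple regression in Python with pandas and statsmodels. The variable and column names are illustrative placeholders (not the study's data files), and the elimination threshold is an assumption, since the stopping criterion is not stated here.

```python
import statsmodels.formula.api as smf

def backward_eliminate(df, outcome, predictors, threshold=0.10):
    """Refit OLS repeatedly, dropping the weakest predictor until all
    remaining p-values are at or below the threshold (assumed criterion)."""
    preds = list(predictors)
    while preds:
        fit = smf.ols(f"{outcome} ~ {' + '.join(preds)}", data=df).fit()
        pvals = fit.pvalues.drop("Intercept")
        if pvals.max() <= threshold:
            return fit  # every remaining predictor meets the criterion
        preds.remove(pvals.idxmax())
    return smf.ols(f"{outcome} ~ 1", data=df).fit()  # intercept-only fallback

# Hypothetical usage with placeholder column names:
# final = backward_eliminate(df, "medt", ["ei", "ec", "mt", "ems",
#                                         "pitch", "duration", "depression"])
# print(final.summary())
```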
Descriptive statistics from experiment 2 (N = 99).
| Measure | M | SD | Range |
| MEDT score | 21.55 | 2 | 17–26 |
| Emotional intelligence (EI) | 4.8 | 0.77 | 2.8–6.3 |
| Emotional contagion (EC) | 50.24 | 9.11 | 29–70 |
| Musical training (MT) | 21.16 | 10.69 | 7–46 |
| Active engagement (AE) | 38.94 | 12.06 | 10–62 |
| Emotional music skills (EMS) | 33.27 | 5.43 | 14–42 |
| Pitch discrimination (Hz) | 335.03 | 5.84 | 330.76–365.31 |
| Duration discrimination (ms) | 279 | 13.81 | 256.85–330.03 |
Matrix displaying Pearson’s r correlations (one-tailed, Bonferroni-corrected α = 0.007) between MEDT score and individual difference measures (N = 99).
| | MEDT | EI | EC | MT | AE | EMS | Pitch | Duration |
| MEDT score | − | | | | | | | |
| EI | 0.26∗ | − | | | | | | |
| EC | 0.2 | 0.34∗ | − | | | | | |
| MT | 0.19 | 0.13 | 0.19 | − | | | | |
| AE | 0.22 | 0.18 | 0.27 | 0.58∗∗ | − | | | |
| EMS | 0.28∗ | 0.31∗ | 0.32∗ | 0.41∗∗ | 0.76∗∗ | − | | |
| Pitch | –0.14 | –0.14 | –0.09 | −0.26∗ | −0.3∗ | −0.27∗ | − | |
| Duration | –0.08 | 0.03 | 0.21 | –0.001 | 0.029 | 0.09 | 0.07 | − |
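The corrected α of 0.007 corresponds to 0.05 divided across the seven correlations involving the MEDT score. Below is a minimal sketch of such a check, assuming the scores are available as NumPy arrays and SciPy ≥ 1.9 (for the one-tailed `alternative` argument); the names are illustrative only.

```python
import numpy as np
from scipy.stats import pearsonr

def bonferroni_corr(target, others, alpha=0.05):
    """Correlate `target` with each named array in `others` (one-tailed)
    and flag results against a Bonferroni-corrected alpha."""
    corrected = alpha / len(others)  # 0.05 / 7 ≈ 0.007 for seven tests
    out = {}
    for name, x in others.items():
        r, p = pearsonr(target, x, alternative="greater")  # one-tailed test
        out[name] = {"r": round(r, 2), "p": p, "significant": p < corrected}
    return out

# Hypothetical usage: bonferroni_corr(medt, {"EI": ei, "EC": ec, "MT": mt, ...})
```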
Stimulus properties of the 18 items featured in the final version of the MEDT (instrument and emotion codes as in the table above).
| Instrument | Target emotion | Target tempo (bpm) | Comparison emotion | Comparison tempo (bpm) | Length (s) | IRT difficulty |
| Pi | A | 205 | H | 215 | 10 | 0.2 |
| Pi | A | 205 | S | 93 | 16 | 3.05 |
| Pi | A | 205 | T | 96 | 16 | 2.77 |
| Pi | H | 215 | A | 205 | 10 | –3.16 |
| Pi | H | 215 | S | 93 | 14 | 4.41 |
| Pi | H | 215 | T | 96 | 15 | 3.05 |
| Pi | T | 96 | H | 215 | 16 | 3 |
| Vi | A | 343 | H | 292 | 8 | 1.99 |
| Vi | A | 343 | T | 113 | 12 | 2.48 |
| Vi | H | 292 | T | 113 | 13 | 4.43 |
| Vi | S | 84 | A | 343 | 15 | 5.3 |
| Vi | T | 113 | H | 292 | 12 | 3.38 |
| Vx | A | 225 | T | 97 | 15 | 5.23 |
| Vx | H | 179 | S | 109 | 15 | 2.87 |
| Vx | H | 179 | S | 109 | 17 | 3.4 |
| Vx | H | 179 | T | 97 | 18 | 2.39 |
| Vx | H | 179 | T | 97 | 16 | 1.88 |
| Vx | H | 179 | A | 225 | 10 | 4.48 |
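The right-hand column lists item difficulty estimates on an item response theory scale. The sketch below shows minimal Rasch (1PL) difficulty estimation by joint maximum likelihood on simulated binary responses; it illustrates the general technique only and is not the estimation procedure reported in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_persons, n_items = 200, 18

# Simulate responses from a Rasch model: P(correct) = logistic(theta - b).
theta_true = rng.normal(0.0, 1.0, n_persons)
b_true = rng.normal(0.0, 1.5, n_items)
X = rng.binomial(1, 1.0 / (1.0 + np.exp(-(theta_true[:, None] - b_true[None, :]))))

def neg_loglik(params):
    # First n_persons entries are abilities, the rest are item difficulties.
    theta, b = params[:n_persons], params[n_persons:]
    eta = theta[:, None] - b[None, :]
    return -(X * eta - np.log1p(np.exp(eta))).sum()

res = minimize(neg_loglik, np.zeros(n_persons + n_items), method="L-BFGS-B")
# The model is identified only up to a shift; anchor mean ability at zero.
difficulties = res.x[n_persons:] - res.x[:n_persons].mean()
```

In practice, persons with all-correct or all-incorrect response patterns are removed before joint maximum likelihood estimation, since their ability estimates diverge.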
Descriptive statistics from experiment 3.
| Measure | M | SD | Range |
| Accuracy score | 83.4 | 0.12 | 44–100 |
| IRT score | 0.02 | 0.54 | −1.46–1.57 |
| Alexithymia | 50.06 | 10.53 | 23–77 |
| Facial recognition | 65.66 | 10.91 | 20–87 |
| Vocal recognition | 60.51 | 9.77 | 37–80 |
| Musical training | 18.59 | 9 | 7–44 |
| Emotional music skills | 31.45 | 5.33 | 13–42 |
Matrix displaying Pearson’s r correlations (one-tailed, Bonferroni-corrected α = 0.008) between the two MEDT scores and the five measures of emotion processing ability and musical expertise; pairwise Ns in parentheses.
| | Accuracy | IRT | Alexithymia | Facial | Vocal | MT | EMS |
| Accuracy score | − | | | | | | |
| IRT score | 0.8∗∗ (150) | − | | | | | |
| Alexithymia | −0.07 (107) | −0.02 (107) | − | | | | |
| Facial recognition | 0.44∗∗ (53) | 0.32 (53) | −0.22 (51) | − | | | |
| Vocal recognition | 0.21 (53) | 0.13 (53) | −0.3 (51) | 0.43∗ (53) | − | | |
| Musical training (MT) | 0.11 (140) | 0.15 (140) | 0.05 (103) | −0.07 (52) | −0.17 (52) | − | |
| Emotional music skills (EMS) | 0.33∗∗ (140) | 0.33∗∗ (140) | −0.22 (103) | −0.05 (52) | 0.29 (52) | 0.36∗∗ (144) | − |
Regression model with MEDT accuracy scores as dependent variable (N = 150).
| Predictor | B | SE | β | t | p |
| Constant | 0.17 | 0.12 | 1.42 | 1.41 | 0.16 |
| Alexithymia | 0 | 0 | 0.12 | 1.17 | 0.24 |
| Facial recognition | 0.01 | 0 | 0.59 | 5.34 | <0.001 |
| Vocal recognition | 0 | 0 | −0.07 | −0.43 | 0.67 |
| Musical training (MT) | 0 | 0 | 0.01 | 0.147 | 0.88 |
| Emotional music skills (EMS) | 0.01 | 0 | 0.36 | 3.3 | 0.001 |
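As a reminder of how the B and β columns relate: the standardized coefficients can be recovered by z-scoring the outcome and predictors before fitting. A small self-contained sketch with simulated placeholder data (the column names are hypothetical, not the study's):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(150, 2)), columns=["facial", "ems"])
df["medt"] = 0.5 * df["facial"] + 0.3 * df["ems"] + rng.normal(size=150)

z = (df - df.mean()) / df.std(ddof=1)  # z-score outcome and predictors
fit = sm.OLS(z["medt"], sm.add_constant(z[["facial", "ems"]])).fit()
print(fit.params)  # slopes are the standardized betas; the intercept is ~0
```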
FIGURE 5. An illustrative model of musical emotion decoding informed by the results of the current study.