Julia C Hailstone, Rohani Omar, Susie M D Henley, Chris Frost, Michael G Kenward, Jason D Warren.
Abstract
Salient sensory experiences often have a strong emotional tone, but the neuropsychological relations between perceptual characteristics of sensory objects and the affective information they convey remain poorly defined. Here we addressed the relationship between sound identity and emotional information using music. In two experiments, we investigated whether perception of emotions is influenced by altering the musical instrument on which the music is played, independently of other musical features. In the first experiment, 40 novel melodies each representing one of four emotions (happiness, sadness, fear, or anger) were each recorded on four different instruments (an electronic synthesizer, a piano, a violin, and a trumpet), controlling for melody, tempo, and loudness between instruments. Healthy participants (23 young adults aged 18-30 years, 24 older adults aged 58-75 years) were asked to select which emotion they thought each musical stimulus represented in a four-alternative forced-choice task. Using a generalized linear mixed model we found a significant interaction between instrument and emotion judgement with a similar pattern in young and older adults (p < .0001 for each age group). The effect was not attributable to musical expertise. In the second experiment using the same melodies and experimental design, the interaction between timbre and perceived emotion was replicated (p < .05) in another group of young adults for novel synthetic timbres designed to incorporate timbral cues to particular emotions. Our findings show that timbre (instrument identity) independently affects the perception of emotions in music after controlling for other acoustic, cognitive, and performance factors.
Year: 2009 PMID: 19391047 PMCID: PMC2683716 DOI: 10.1080/17470210902765957
Source DB: PubMed Journal: Q J Exp Psychol (Hove) ISSN: 1747-0218 Impact factor: 2.143
Structural characteristics of melodies
| Characteristic | | Happiness | Sadness | Anger | Fear |
| Tonality | Mode | Major | Minor | Minor | Minor |
| | No. chromatic notes | None | None | Few | Many |
| Tempo (beats/minute) | Mean (SD) | 189 (31) | 86 (11) | 223 (32) | 151 (53) |
| | Min:max | 156:257 | 68:100 | 182:274 | 64:245 |
| Metre (% of stimuli) | 4/4 | 50 | 20 | 60 | 60 |
| | 3/4 | 30 | 70 | 20 | 20 |
| | 2/4 | 10 | — | 10 | — |
| | Compound | 10 | 10 | 10 | 20 |
| Dynamics | Range | Small range | Small range | Small range | Wide variation |
| | At beginning of phrase | Loud | Soft | Loud | Variable |
| | At end of phrase | Loud | Soft | Loud | Variable |
| | No. accented notes | None | None | Few | Many |
Figure 1. Notations for representative melodies exemplifying each target emotion.
Figure 2. Mean scores (/10) for each intended emotion for each “real” instrument in young (left) and older (right) adult participants. H, happiness; S, sadness; A, anger; F, fear.
Odds of a “correct” response relative to “happy” melodies played on piano for young and older adults in Experiment 1
| Group | Instrument | Happiness OR | 95% CI | Sadness OR | 95% CI | Anger OR | 95% CI | Fear OR | 95% CI |
| Young | Piano | 1 | — | 1.92 | 0.89, 4.12 | 0.15 | 0.08, 0.25 | 0.16 | 0.10, 0.28 |
| | Synthesizer | 1.01 | 0.52, 1.95 | 0.31 | 0.18, 0.54 | 0.17 | 0.10, 0.29 | 0.14 | 0.08, 0.24 |
| | Violin | 0.48 | 0.27, 0.86 | 2.14 | 0.97, 4.72 | 0.16 | 0.09, 0.27 | 0.13 | 0.08, 0.22 |
| | Trumpet | 1.04 | 0.54, 2.01 | 2.21 | 1.00, 4.87 | 0.14 | 0.08, 0.23 | 0.10 | 0.06, 0.16 |
| Older | Piano | 1 | — | 1.63 | 0.96, 2.74 | 0.15 | 0.10, 0.23 | 0.16 | 0.10, 0.24 |
| | Synthesizer | 0.93 | 0.58, 1.50 | 0.57 | 0.36, 0.88 | 0.16 | 0.10, 0.24 | 0.21 | 0.13, 0.32 |
| | Violin | 0.62 | 0.40, 0.97 | 1.83 | 1.07, 3.13 | 0.18 | 0.12, 0.28 | 0.20 | 0.13, 0.31 |
| | Trumpet | 1.09 | 0.67, 1.77 | 2.12 | 1.22, 3.71 | 0.10 | 0.06, 0.15 | 0.17 | 0.11, 0.25 |
Note: “Correct” = intended response; OR = odds ratio of an intended response; CI = 95% confidence interval.
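Odds ratios of the kind tabulated above can be derived from raw response counts; a minimal sketch follows (the helper name and all counts are hypothetical illustrations, not values from the study, and the study itself used a generalized linear mixed model rather than this simple two-by-two contrast):

```python
import math

def odds_ratio_ci(correct_a, total_a, correct_b, total_b, z=1.96):
    """Odds ratio of an intended ('correct') response in condition A
    relative to reference condition B, with a Wald 95% confidence
    interval computed on the log-odds scale."""
    a, b = correct_a, total_a - correct_a   # condition A: hits, misses
    c, d = correct_b, total_b - correct_b   # reference B: hits, misses
    or_ = (a / b) / (c / d)
    # Standard error of log(OR): sqrt of summed reciprocal cell counts
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# e.g. 90/230 intended responses on one instrument vs. 140/230 on the
# reference instrument (hypothetical counts):
or_, (lo, hi) = odds_ratio_ci(90, 230, 140, 230)
```

An OR below 1 with an upper confidence limit below 1, as for several anger and fear cells above, indicates reliably fewer intended responses than for happy melodies on the reference instrument.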
Characteristics of novel timbres for Experiment 2
| Characteristic | Timbre 1 | Timbre 2 | Timbre 3 | Timbre 4 |
| Spectral content | Strong middle and low frequencies. | Strong high frequencies. | Strong low frequencies. | Few harmonics, “notched” spectral envelope. |
| Temporal envelope | Slow attack and decay. | Fast attack, slow decay. | Slow attack, fast decay. | Fast attack and decay. |
| Vibrato rate and depth | None. | Fast and mid amplitude. | Slow and low amplitude. | Fast and high amplitude. |
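The three cue families in the table (spectral content, temporal envelope, vibrato) can be sketched in a simple additive-synthesis routine. This is a hypothetical illustration of how such timbres might be parameterized, not the study's actual synthesis code; the function name and default values are my own assumptions:

```python
import numpy as np

def synth_timbre(f0=440.0, dur=1.0, sr=22050,
                 harmonic_weights=(1.0, 0.5, 0.25),  # spectral content
                 attack=0.05, decay=0.3,             # temporal envelope (s)
                 vib_rate=6.0, vib_depth=0.01):      # vibrato: Hz, fraction of f0
    """Render a tone controlled by the three cue families:
    harmonic weights shape the spectrum, attack/decay shape the
    temporal envelope, and vib_rate/vib_depth set the vibrato."""
    n = int(dur * sr)
    t = np.arange(n) / sr
    # Vibrato as sinusoidal frequency modulation, integrated to phase
    inst_freq = f0 * (1.0 + vib_depth * np.sin(2 * np.pi * vib_rate * t))
    phase = 2 * np.pi * np.cumsum(inst_freq) / sr
    # Additive synthesis: weighted harmonics set the spectral envelope
    tone = sum(w * np.sin((k + 1) * phase)
               for k, w in enumerate(harmonic_weights))
    # Linear attack ramp followed by exponential decay
    env = np.minimum(t / attack, 1.0) * np.exp(-np.maximum(t - attack, 0.0) / decay)
    out = tone * env
    return out / np.max(np.abs(out))
```

Under this parameterization, a “fast attack, slow decay” timbre would use a small `attack` and large `decay`, while “few harmonics” corresponds to a short `harmonic_weights` tuple.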
Figure 3. Mean scores (/10) for each intended emotion for synthesized timbres 1–4, in young adult participants. H, happiness; S, sadness; A, anger; F, fear.
Odds of a “correct” response relative to “happy” melodies on Timbre 1 for young adults in Experiment 2
| Timbre | Happiness OR | 95% CI | Sadness OR | 95% CI | Anger OR | 95% CI | Fear OR | 95% CI |
| 1 | 1 | — | 1.70 | 0.86, 3.36 | 0.14 | 0.08, 0.23 | 0.14 | 0.08, 0.24 |
| 2 | 0.96 | 0.52, 1.75 | 0.91 | 0.50, 1.67 | 0.17 | 0.10, 0.28 | 0.13 | 0.08, 0.22 |
| 3 | 0.83 | 0.46, 1.51 | 1.58 | 0.80, 3.10 | 0.10 | 0.06, 0.17 | 0.18 | 0.11, 0.31 |
| 4 | 0.71 | 0.40, 1.26 | 1.10 | 0.59, 2.05 | 0.18 | 0.11, 0.30 | 0.14 | 0.08, 0.24 |
Note: “Correct” = intended response; OR = odds ratio of an intended response; CI = 95% confidence interval.