Decoding emotions in expressive music performances: A multi-lab replication and extension study.

Jessica Akkermans1, Renee Schapiro1, Daniel Müllensiefen1, Kelly Jakubowski2, Daniel Shanahan3, David Baker3, Veronika Busch4, Kai Lothwesen4, Paul Elvers5, Timo Fischinger5, Kathrin Schlemmer6, Klaus Frieler7.   

Abstract

With over 560 citations reported on Google Scholar by April 2018, a publication by Juslin and Gabrielsson (1996) presented evidence supporting performers' abilities to communicate, with high accuracy, their intended emotional expressions in music to listeners. Though related studies have been published on this topic, there has yet to be a direct replication of this paper. A replication is warranted given the paper's influence in the field and the implications of its results. The present experiment joins the recent replication effort by producing a five-lab replication using the original methodology. Expressive performances of seven emotions (e.g. happy, sad, angry) by professional musicians were recorded using the same three melodies from the original study. Participants (N = 319) were presented with the recordings and rated, on a 0-10 scale, how well each emotion term matched the emotional quality of the performance. The same instruments from the original study (i.e. violin, voice, and flute) were used, with the addition of piano. To increase the accessibility of the experiment and allow for a more ecologically valid environment, the recordings were presented using an internet-based survey platform. As an extension to the original study, this experiment investigated how musicality, emotional intelligence, and emotional contagion might explain individual differences in the decoding process. Results showed overall high decoding accuracy (57%) when emotion ratings were aggregated across the sample of participants, similar to the method of analysis in the original study. However, when decoding accuracy was scored for each participant individually, the average accuracy was much lower (31%). Unlike in the original study, the voice was found to be the most expressive instrument. Generalised linear mixed-effects regression modelling revealed that musical training and emotional engagement with music positively influence emotion decoding accuracy.

Keywords:  Emotion decoding; emotion study; expressive performance; musical training; replication

Year:  2018        PMID: 30409082     DOI: 10.1080/02699931.2018.1541312

Source DB:  PubMed          Journal:  Cogn Emot        ISSN: 0269-9931


  5 in total

1.  Lack of Emotional Experience, Resistance to Innovation, and Dissatisfied Musicians Influence on Music Unattractive Education.

Authors:  Dongjun Zhang; Shamim Akhter; Tribhuwan Kumar; Nhat Tan Nguyen
Journal:  Front Psychol       Date:  2022-06-10

2.  Design of the Piano Score Recommendation Image Analysis System Based on the Big Data and Convolutional Neural Network.

Authors:  Yuanyuan Zhang
Journal:  Comput Intell Neurosci       Date:  2021-11-26

3.  Multisensory integration of musical emotion perception in singing.

Authors:  Elke B Lange; Jens Fünderich; Hartmut Grimm
Journal:  Psychol Res       Date:  2022-01-10

4.  (Review) Auditory affective processing, musicality, and the development of misophonic reactions.

Authors:  Solena D Mednicoff; Sivan Barashy; Destiny Gonzales; Stephen D Benning; Joel S Snyder; Erin E Hannon
Journal:  Front Neurosci       Date:  2022-09-23       Impact factor: 5.152

5.  Emotion and expertise: how listeners with formal music training use cues to perceive emotion.

Authors:  Aimee Battcock; Michael Schutz
Journal:  Psychol Res       Date:  2021-01-29
