
The time course of emotion recognition in speech and music.

Henrik Nordström, Petri Laukka.

Abstract

The auditory gating paradigm was adopted to study how much acoustic information is needed to recognize emotions from speech prosody and music performances. In Study 1, brief utterances conveying ten emotions were segmented into temporally fine-grained gates and presented to listeners, whereas Study 2 used musically expressed emotions. Emotion recognition accuracy increased with increasing gate duration and generally stabilized after a certain duration, with different trajectories for different emotions. Above-chance accuracy was observed for ≤100 ms stimuli for anger, happiness, neutral, and sadness, and for ≤250 ms stimuli for most other emotions, for both speech and music. This suggests that emotion recognition is a fast process that allows discrimination of several emotions based on low-level physical characteristics. The emotion identification points, which reflect the amount of information required for stable recognition, were shortest for anger and happiness for both speech and music, but recognition took longer to stabilize for music than for speech. This, in turn, suggests that acoustic cues that develop over time also play a role in emotion inferences (especially for music). Finally, acoustic cue patterns were positively correlated between speech and music, suggesting a shared acoustic code for expressing emotions.
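As a rough illustration of the gating procedure, the Python sketch below cumulatively truncates a signal at increasing onset-aligned gate durations. The sample rate, gate steps, the make_gates helper, and the synthetic placeholder signal are illustrative assumptions, not values or code from the study.

    # Minimal sketch of the auditory gating paradigm: a stimulus is
    # cumulatively truncated at increasing onset-aligned gate durations,
    # and each gate is presented to listeners for an emotion judgment.
    import numpy as np

    SAMPLE_RATE = 44_100  # Hz; assumed rate, not reported in this record

    def make_gates(signal, gate_ms):
        """Return cumulative gates, one per duration in milliseconds."""
        gates = {}
        for ms in gate_ms:
            n_samples = int(SAMPLE_RATE * ms / 1000)
            gates[ms] = signal[:n_samples]  # keep the first `ms` ms of audio
        return gates

    # Placeholder 1-s signal standing in for a recorded utterance or tone.
    stimulus = np.random.default_rng(0).standard_normal(SAMPLE_RATE)
    for ms, gate in make_gates(stimulus, [50, 100, 250, 500, 1000]).items():
        print(f"{ms:>4} ms gate -> {gate.size} samples")

In the paradigm itself, recognition accuracy is then examined as a function of gate duration, and the emotion identification point corresponds to the shortest gate after which responses remain stable.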

Year:  2019        PMID: 31153307     DOI: 10.1121/1.5108601

Source DB:  PubMed          Journal:  J Acoust Soc Am        ISSN: 0001-4966            Impact factor:   1.840


Related articles: 5 in total

1.  Algorithm Composition and Emotion Recognition Based on Machine Learning.

Authors:  Jiao He
Journal:  Comput Intell Neurosci       Date:  2022-06-06

2.  Recognition of Emotion According to the Physical Elements of the Video.

Authors:  Jing Zhang; Xingyu Wen; Mincheol Whang
Journal:  Sensors (Basel)       Date:  2020-01-24       Impact factor: 3.576

3.  A Music Emotion Classification Model Based on the Improved Convolutional Neural Network.

Authors:  Xiaosong Jia
Journal:  Comput Intell Neurosci       Date:  2022-02-14

4.  Practice and Exploration of Music Solfeggio Teaching Based on Data Mining Technology.

Authors:  Wenfeng Zhang
Journal:  J Environ Public Health       Date:  2022-08-16

5.  An Exploratory Study on the Acoustic Musical Properties to Decrease Self-Perceived Anxiety.

Authors:  Emilia Parada-Cabaleiro; Anton Batliner; Markus Schedl
Journal:  Int J Environ Res Public Health       Date:  2022-01-16       Impact factor: 3.390

