
A transfer learning framework for predicting the emotional content of generalized sound events.

Stavros Ntalampiras.

Abstract

Predicting the emotions evoked by generalized sound events is a relatively recent research domain that still needs attention. This work presents a framework aimed at revealing potential similarities in the perception of emotions evoked by sound events and by songs. To this end, the following are proposed: (a) the use of temporal modulation features, (b) a transfer learning module based on an echo state network, and (c) a k-medoids clustering algorithm predicting the valence and arousal measurements associated with generalized sound events. The effectiveness of the proposed solution is demonstrated through a thoroughly designed experimental phase employing both sound and music data. The results demonstrate the importance of transfer learning in this field and encourage further research on approaches that address the problem synergistically.
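The paper itself does not publish code. As a minimal sketch of component (c), a k-medoids clustering over (valence, arousal) coordinates could look like the following; the function name, the farthest-point initialization, and the sample points are illustrative assumptions, not the authors' implementation:

```python
import math

def kmedoids(points, k, iters=20):
    """Partition 2-D (valence, arousal) points into k clusters whose
    centers (medoids) are actual data points, not averaged centroids."""
    # Deterministic farthest-point initialization (an assumption here):
    # start from the first point, then repeatedly add the point farthest
    # from all medoids chosen so far.
    medoids = [points[0]]
    while len(medoids) < k:
        medoids.append(max(points,
                           key=lambda p: min(math.dist(p, m) for m in medoids)))
    for _ in range(iters):
        # Assignment step: attach each point to its nearest medoid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, medoids[j]))
            clusters[i].append(p)
        # Update step: the new medoid of each cluster is the member
        # minimizing the total distance to all other members.
        new_medoids = [min(c, key=lambda m: sum(math.dist(m, q) for q in c))
                       for c in clusters]
        if new_medoids == medoids:  # converged
            break
        medoids = new_medoids
    return medoids, clusters

# Hypothetical valence/arousal pairs: two well-separated affective regions.
points = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),
          (0.8, 0.9), (0.9, 0.8), (0.85, 0.85)]
medoids, clusters = kmedoids(points, 2)
```

With these toy points the algorithm converges to one medoid per affective region; a new sound event would then be assigned the valence/arousal values of its nearest medoid.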


Year:  2017        PMID: 28372068     DOI: 10.1121/1.4977749

Source DB:  PubMed          Journal:  J Acoust Soc Am        ISSN: 0001-4966            Impact factor:   1.840


Related articles: 2 in total

1.  Transfer Learning for Improved Audio-Based Human Activity Recognition.

Authors:  Stavros Ntalampiras; Ilyas Potamitis
Journal:  Biosensors (Basel)       Date:  2018-06-25

2.  Automatic Classification of Cat Vocalizations Emitted in Different Contexts.

Authors:  Stavros A Ntalampiras; Luca Andrea Ludovico; Giorgio Presti; Emanuela Prato-Previde; Monica Battini; Simona Cannas; Clara Palestrini; Silvana Mattiello
Journal:  Animals (Basel)       Date:  2019-08-09       Impact factor: 2.752

