
Context Based Emotion Recognition Using EMOTIC Dataset.

Ronak Kosti, Jose M Alvarez, Adria Recasens, Agata Lapedriza.   

Abstract

In our everyday lives and social interactions we often try to perceive the emotional states of people. There has been a lot of research on providing machines with a similar capacity for recognizing emotions. From a computer vision perspective, most previous efforts have focused on analyzing facial expressions and, in some cases, also body pose. Some of these methods work remarkably well in specific settings. However, their performance is limited in natural, unconstrained environments. Psychological studies show that the scene context, in addition to facial expression and body pose, provides important information for our perception of people's emotions. However, the processing of context for automatic emotion recognition has not been explored in depth, partly due to the lack of proper data. In this paper we present EMOTIC, a dataset of images of people in a diverse set of natural situations, annotated with their apparent emotion. The EMOTIC dataset combines two different types of emotion representation: (1) a set of 26 discrete categories, and (2) the continuous dimensions Valence, Arousal, and Dominance. We also present a detailed statistical and algorithmic analysis of the dataset, along with an analysis of annotators' agreement. Using the EMOTIC dataset we train different CNN models for emotion recognition, combining information from the bounding box containing the person with contextual information extracted from the scene. Our results show how scene context provides important information for automatically recognizing emotional states, and they motivate further research in this direction.
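The abstract describes a two-branch architecture: one CNN over the person's bounding box, one over the whole scene, with the two feature vectors fused to predict both the 26 discrete categories and the three continuous dimensions. A minimal sketch of that fusion idea follows; the feature extractors are stubs, the feature dimensions and the linear heads are illustrative assumptions, and none of this reproduces the authors' actual networks or trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed feature sizes for illustration; the paper's CNN backbones differ.
BODY_DIM, CONTEXT_DIM = 512, 512
N_CATEGORIES, N_DIMENSIONS = 26, 3  # 26 discrete categories; Valence/Arousal/Dominance

def body_features(person_crop):
    """Stand-in for a CNN over the person bounding box (hypothetical 512-d output)."""
    return rng.standard_normal(BODY_DIM)

def context_features(full_image):
    """Stand-in for a CNN over the full scene (hypothetical 512-d output)."""
    return rng.standard_normal(CONTEXT_DIM)

# Fusion head: concatenate both branches, then two linear output layers.
W_cat = rng.standard_normal((N_CATEGORIES, BODY_DIM + CONTEXT_DIM)) * 0.01
W_vad = rng.standard_normal((N_DIMENSIONS, BODY_DIM + CONTEXT_DIM)) * 0.01

def predict(person_crop, full_image):
    fused = np.concatenate([body_features(person_crop), context_features(full_image)])
    # Independent sigmoids, since several discrete categories can apply at once.
    category_scores = 1.0 / (1.0 + np.exp(-(W_cat @ fused)))
    vad = W_vad @ fused  # unbounded regression of Valence, Arousal, Dominance
    return category_scores, vad

scores, vad = predict(None, None)
print(scores.shape, vad.shape)
```

The point of the sketch is only the data flow: person and context features are computed separately and combined before the two prediction heads, so the category and continuous outputs share the same fused representation.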


Year:  2019        PMID: 31095475     DOI: 10.1109/TPAMI.2019.2916866

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


  1 in total

1.  Elder emotion classification through multimodal fusion of intermediate layers and cross-modal transfer learning.

Authors:  P Sreevidya; S Veni; O V Ramana Murthy
Journal:  Signal Image Video Process       Date:  2022-01-18       Impact factor: 1.583

