
SEWA DB: A Rich Database for Audio-Visual Emotion and Sentiment Research in the Wild.

Jean Kossaifi, Robert Walecki, Yannis Panagakis, Jie Shen, Maximilian Schmitt, Fabien Ringeval, Jing Han, Vedhas Pandit, Antoine Toisoul, Björn Schuller, Kam Star, Elnar Hajiyev, Maja Pantic.

Abstract

Natural human-computer interaction and audio-visual human behaviour sensing systems that achieve robust performance in the wild are needed more than ever, as digital devices are increasingly becoming an indispensable part of our lives. Accurately annotated real-world data are the crux of devising such systems. However, existing databases usually consider controlled settings, low demographic variability, and a single task. In this paper, we introduce the SEWA database of more than 2,000 minutes of audio-visual data of 398 people from six cultures, 50 percent female, uniformly spanning the age range of 18 to 65 years. Subjects were recorded in two different contexts: while watching adverts and while discussing the adverts in a video chat. The database includes rich annotations of the recordings in terms of facial landmarks, facial action units (FAUs), various vocalisations, mirroring, and continuously valued valence, arousal, liking, agreement, and prototypic examples of (dis)liking. This database aims to be an extremely valuable resource for researchers in affective computing and automatic human sensing and is expected to push forward research on human behaviour analysis, including cultural studies. Along with the database, we provide extensive baseline experiments for automatic FAU detection and automatic valence, arousal, and (dis)liking intensity estimation.


Year:  2021        PMID: 31581074     DOI: 10.1109/TPAMI.2019.2944808

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


Related articles: 4 in total

1.  Modeling emotion in complex stories: the Stanford Emotional Narratives Dataset.

Authors:  Desmond C Ong; Zhengxuan Wu; Tan Zhi-Xuan; Marianne Reddan; Isabella Kahhale; Alison Mattek; Jamil Zaki
Journal:  IEEE Trans Affect Comput       Date:  2019-11-26       Impact factor: 13.990

2.  Synchronization in Interpersonal Speech.

Authors:  Shahin Amiriparian; Jing Han; Maximilian Schmitt; Alice Baird; Adria Mallol-Ragolta; Manuel Milling; Maurice Gerczuk; Björn Schuller
Journal:  Front Robot AI       Date:  2019-11-08

3.  [Review] Macro- and Micro-Expressions Facial Datasets: A Survey.

Authors:  Hajer Guerdelli; Claudio Ferrari; Walid Barhoumi; Haythem Ghazouani; Stefano Berretti
Journal:  Sensors (Basel)       Date:  2022-02-16       Impact factor: 3.576

4.  Cross-Language Speech Emotion Recognition Using Bag-of-Word Representations, Domain Adaptation, and Data Augmentation.

Authors:  Shruti Kshirsagar; Tiago H Falk
Journal:  Sensors (Basel)       Date:  2022-08-26       Impact factor: 3.847
