
Affective State Level Recognition in Naturalistic Facial and Vocal Expressions.

Hongying Meng, Nadia Bianchi-Berthouze.   

Abstract

Naturalistic affective expressions change at a rate much slower than the typical rate at which video or audio is recorded. This increases the probability that consecutive recorded instants of expressions represent the same affective content. In this paper, we exploit such a relationship to improve the recognition performance of continuous naturalistic affective expressions. Using datasets of naturalistic affective expressions (AVEC 2011 audio and video dataset, PAINFUL video dataset) continuously labeled over time and over different dimensions, we analyze the transitions between levels of those dimensions (e.g., transitions in pain intensity level). We use an information theory approach to show that the transitions occur very slowly and hence suggest modeling them as first-order Markov models. The dimension levels are considered to be the hidden states in the Hidden Markov Model (HMM) framework. Their discrete transition and emission matrices are trained by using the labels provided with the training set. The recognition problem is converted into a best path-finding problem to obtain the best hidden states sequence in HMMs. This is a key difference from previous use of HMMs as classifiers. Modeling of the transitions between dimension levels is integrated in a multistage approach, where the first level performs a mapping between the affective expression features and a soft decision value (e.g., an affective dimension level), and further classification stages are modeled as HMMs that refine that mapping by taking into account the temporal relationships between the output decision labels. The experimental results for each of the unimodal datasets show overall performance to be significantly above that of a standard classification system that does not take into account temporal relationships. In particular, the results on the AVEC 2011 audio dataset outperform all other systems presented at the international competition.
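The abstract casts recognition as decoding a sequence of affect dimension levels with a first-order HMM: the levels are the hidden states, the transition and emission matrices are estimated from the training labels, and the best hidden-state path is found over the per-frame soft decisions produced by a first-stage classifier. The sketch below illustrates that decoding step only; it is not the authors' implementation, and the function names, array shapes, and toy data are assumptions made for the example.

```python
# Illustrative sketch (not the paper's code): smooth per-frame affect-level
# scores with a first-order HMM whose hidden states are the dimension levels.
import numpy as np

def estimate_transitions(label_sequences, n_levels, smoothing=1.0):
    """Estimate a first-order transition matrix from training label sequences."""
    counts = np.full((n_levels, n_levels), smoothing)
    for seq in label_sequences:
        for prev, cur in zip(seq[:-1], seq[1:]):
            counts[prev, cur] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def viterbi(log_emissions, log_trans, log_prior):
    """Most likely level sequence given per-frame log scores (best-path decoding)."""
    T, n = log_emissions.shape
    delta = np.zeros((T, n))            # best log score ending in each state
    back = np.zeros((T, n), dtype=int)  # backpointers
    delta[0] = log_prior + log_emissions[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans   # [from_state, to_state]
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emissions[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# Toy usage: 3 intensity levels, noisy per-frame posteriors standing in for
# the output of a first-stage classifier (hypothetical data).
rng = np.random.default_rng(0)
train_labels = [[0] * 20 + [1] * 30 + [2] * 10, [0] * 15 + [1] * 25]
trans = estimate_transitions(train_labels, n_levels=3)
frame_posteriors = rng.dirichlet([1, 1, 1], size=50)
levels = viterbi(np.log(frame_posteriors), np.log(trans), np.log(np.full(3, 1 / 3)))
print(levels)
```

Because naturalistic expressions change slowly, a transition matrix estimated this way is strongly diagonal, which is what allows the best-path decoding to smooth out isolated frame-level misclassifications.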


Year:  2013        PMID: 23757552     DOI: 10.1109/TCYB.2013.2253768

Source DB:  PubMed          Journal:  IEEE Trans Cybern        ISSN: 2168-2267            Impact factor:   11.448


Related articles:  5 in total

1.  The Automatic Detection of Chronic Pain-Related Expression: Requirements, Challenges and the Multimodal EmoPain Dataset.

Authors:  Min S H Aung; Sebastian Kaltwang; Bernardino Romera-Paredes; Brais Martinez; Aneesha Singh; Matteo Cella; Michel Valstar; Hongying Meng; Andrew Kemp; Mohsen Shafizadeh; Aaron C Elkins; Natalie Kanakam; Amschel de Rothschild; Nick Tyler; Paul J Watson; Amanda C de C Williams; Maja Pantic; Nadia Bianchi-Berthouze
Journal:  IEEE Trans Affect Comput       Date:  2015-07-30       Impact factor: 10.506

2.  A Comparison of Machine Learning Algorithms and Feature Sets for Automatic Vocal Emotion Recognition in Speech.

Authors:  Cem Doğdu; Thomas Kessler; Dana Schneider; Maha Shadaydeh; Stefan R Schweinberger
Journal:  Sensors (Basel)       Date:  2022-10-06       Impact factor: 3.847

3.  Machine learning in pain research. (Review)

Authors:  Jörn Lötsch; Alfred Ultsch
Journal:  Pain       Date:  2018-04       Impact factor: 6.961

4.  The Human Touch: Using a Webcam to Autonomously Monitor Compliance During Visual Field Assessments.

Authors:  Pete R Jones; Giorgia Demaria; Iris Tigchelaar; Daniel S Asfaw; David F Edgar; Peter Campbell; Tamsin Callaghan; David P Crabb
Journal:  Transl Vis Sci Technol       Date:  2020-07-20       Impact factor: 3.283

5.  Sit still and pay attention: Using the Wii Balance-Board to detect lapses in concentration in children during psychophysical testing.

Authors:  Pete R Jones
Journal:  Behav Res Methods       Date:  2019-02
