
Mutual Correlation Attentive Factors in Dyadic Fusion Networks for Speech Emotion Recognition.

Yue Gu, Xinyu Lyu, Weijia Sun, Weitian Li, Shuhong Chen, Xinyu Li, Ivan Marsic.

Abstract

Emotion recognition in dyadic communication is challenging because: 1. Extracting informative modality-specific representations requires disparate feature extractor designs due to the heterogeneous input data formats. 2. Effectively and efficiently fusing unimodal features and learning associations between dyadic utterances are critical to model generalization in real-world scenarios. 3. Disagreeing annotations prevent previous approaches from precisely predicting emotions in context. To address the above issues, we propose an efficient dyadic fusion network that relies only on an attention mechanism to select representative vectors, fuse modality-specific features, and learn sequence information. Our approach has three distinct characteristics: 1. Instead of using a recurrent neural network to extract temporal associations as in most previous research, we introduce multiple sub-view attention layers to compute the relevant dependencies among sequential utterances; this significantly improves model efficiency. 2. To improve fusion performance, we design a learnable mutual correlation factor inside each attention layer to compute associations across different modalities. 3. To overcome the label disagreement issue, we embed the labels from all annotators into a k-dimensional vector and transform the categorical problem into a regression problem; this method provides more accurate annotation information and fully uses the entire dataset. We evaluate the proposed model on two published multimodal emotion recognition datasets: IEMOCAP and MELD. Our model significantly outperforms previous state-of-the-art research by 3.8%-7.5% in accuracy, while using a more efficient model.
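Two of the ideas summarized above lend themselves to a short sketch: a learnable mutual correlation factor applied inside a cross-modal attention layer, and embedding all annotators' labels into a k-dimensional vector so that training becomes a regression problem. The Python/PyTorch code below is a minimal, illustrative sketch under assumptions not stated in this record (a scaled dot-product attention formulation, the correlation factor modeled as a learnable bilinear d x d matrix, and the label vector taken as normalized annotation counts); all class and function names are hypothetical and do not reproduce the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MutualCorrelationAttention(nn.Module):
    """Illustrative cross-modal attention with a learnable mutual correlation factor.

    Assumption: the factor is a d x d bilinear matrix inserted into scaled
    dot-product attention between two modality streams (e.g., acoustic and
    lexical utterance sequences). Not the authors' exact formulation.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        # Learnable mutual correlation factor shared within the layer.
        self.correlation = nn.Parameter(torch.eye(dim))
        self.scale = dim ** 0.5

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        # x_a, x_b: (batch, seq_len, dim) utterance features from two modalities.
        q = self.query(x_a)
        k = self.key(x_b)
        v = self.value(x_b)
        # Bilinear cross-modal scores: Q C K^T / sqrt(d).
        scores = q @ self.correlation @ k.transpose(-2, -1) / self.scale
        weights = F.softmax(scores, dim=-1)
        return weights @ v  # modality-a stream attended over modality-b


def soft_label_vector(annotations: list, num_classes: int) -> torch.Tensor:
    """Embed all annotators' labels into a k-dimensional regression target."""
    counts = torch.bincount(torch.tensor(annotations), minlength=num_classes).float()
    # e.g., labels [0, 0, 2] over 4 classes -> [0.67, 0.00, 0.33, 0.00]
    return counts / counts.sum()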

Entities:  

Keywords:  Attention Mechanism; Dyadic Communication; Multimodal Fusion Network; Mutual Correlation Attentive Factor; Speech Emotion Recognition

Year:  2019        PMID: 32201866      PMCID: PMC7085887          DOI: 10.1145/3343031.3351039

Source DB:  PubMed          Journal:  Proc ACM Int Conf Multimed


  1 in total

1.  CMU-MOSEAS: A Multimodal Language Dataset for Spanish, Portuguese, German and French.

Authors:  Amir Zadeh; Yan Sheng Cao; Simon Hessner; Paul Pu Liang; Soujanya Poria; Louis-Philippe Morency
Journal:  Proc Conf Empir Methods Nat Lang Process       Date:  2020-11
