
Words Can Shift: Dynamically Adjusting Word Representations Using Nonverbal Behaviors.

Yansen Wang, Ying Shen, Zhun Liu, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency.

Abstract

Humans convey their intentions through both verbal and nonverbal behaviors during face-to-face communication. Speaker intentions often vary dynamically depending on different nonverbal contexts, such as vocal patterns and facial expressions. As a result, when modeling human language, it is essential to consider not only the literal meaning of the words but also the nonverbal contexts in which these words appear. To better model human language, we first build expressive nonverbal representations by analyzing the fine-grained visual and acoustic patterns that occur during word segments. In addition, we seek to capture the dynamic nature of nonverbal intents by shifting word representations based on the accompanying nonverbal behaviors. To this end, we propose the Recurrent Attended Variation Embedding Network (RAVEN), which models the fine-grained structure of nonverbal subword sequences and dynamically shifts word representations based on nonverbal cues. Our proposed model achieves competitive performance on two publicly available datasets for multimodal sentiment analysis and emotion recognition. We also visualize the shifted word representations in different nonverbal contexts and summarize common patterns regarding multimodal variations of word representations.
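The core idea in the abstract, shifting a word embedding by a vector derived from accompanying nonverbal behaviors, can be sketched compactly. The snippet below is a minimal numpy illustration, not the paper's exact formulation: RAVEN models subword sequences with recurrent networks and a learned gated shift, whereas here the subword features are pooled with simple dot-product attention, and the function names, projection matrices `W_v`/`W_a`, and scaling factor `alpha` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(subword_feats, query):
    """Pool per-subword nonverbal features into one vector via softmax attention."""
    scores = subword_feats @ query          # (T,) similarity of each subword segment to the query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax attention weights over subword segments
    return weights @ subword_feats          # attended nonverbal embedding

def shifted_word_embedding(word_emb, visual, acoustic, W_v, W_a, alpha=0.1):
    """Shift a word embedding by an attended nonverbal vector (simplified sketch)."""
    h_v = attention_pool(visual, W_v.T @ word_emb)    # visual embedding, conditioned on the word
    h_a = attention_pool(acoustic, W_a.T @ word_emb)  # acoustic embedding, conditioned on the word
    shift = W_v @ h_v + W_a @ h_a                     # project both modalities into word space
    return word_emb + alpha * shift                   # dynamically shifted word representation

# Toy example: one word, 5 aligned subword segments per nonverbal modality.
d_w, d_v, d_a, T = 8, 4, 3, 5
word = rng.normal(size=d_w)
vis = rng.normal(size=(T, d_v))
aco = rng.normal(size=(T, d_a))
W_v = 0.1 * rng.normal(size=(d_w, d_v))
W_a = 0.1 * rng.normal(size=(d_w, d_a))
shifted = shifted_word_embedding(word, vis, aco, W_v, W_a)
```

The same word therefore receives a different representation whenever its visual or acoustic context changes, which is the dynamic behavior the paper's visualizations examine.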

Year: 2019    PMID: 32219010    PMCID: PMC7098710

Source DB: PubMed    Journal: Proc Conf AAAI Artif Intell    ISSN: 2159-5399


References: 3 in total

1.  Multimodal Affective Analysis Using Hierarchical Attention Strategy with Word-Level Alignment.

Authors:  Kangning Yang; Shiyu Fu; Yue Gu; Shuhong Chen; Xinyu Li; Ivan Marsic
Journal:  Proc Conf Assoc Comput Linguist Meet       Date:  2018-07

2.  Long short-term memory.

Authors:  S Hochreiter; J Schmidhuber
Journal:  Neural Comput       Date:  1997-11-15       Impact factor: 2.026

3.  Multi-attention Recurrent Network for Human Communication Comprehension.

Authors:  Amir Zadeh; Paul Pu Liang; Soujanya Poria; Prateek Vij; Erik Cambria; Louis-Philippe Morency
Journal:  Proc Conf AAAI Artif Intell       Date:  2018-02
Cited by: 8 in total

1.  Integrating Multimodal Information in Large Pretrained Transformers.

Authors:  Wasifur Rahman; Md Kamrul Hasan; Sangwu Lee; Amir Zadeh; Chengfeng Mao; Louis-Philippe Morency; Ehsan Hoque
Journal:  Proc Conf Assoc Comput Linguist Meet       Date:  2020-07

2.  Multimodal Routing: Improving Local and Global Interpretability of Multimodal Language Analysis.

Authors:  Yao-Hung Hubert Tsai; Martin Q Ma; Muqiao Yang; Ruslan Salakhutdinov; Louis-Philippe Morency
Journal:  Proc Conf Empir Methods Nat Lang Process       Date:  2020-11

3.  Multimodal Sentiment Analysis Based on Cross-Modal Attention and Gated Cyclic Hierarchical Fusion Networks.

Authors:  Zhibang Quan; Tao Sun; Mengli Su; Jishu Wei
Journal:  Comput Intell Neurosci       Date:  2022-08-09

4.  Multimodal Deep Learning Models for Detecting Dementia From Speech and Transcripts.

Authors:  Loukas Ilias; Dimitris Askounis
Journal:  Front Aging Neurosci       Date:  2022-03-17       Impact factor: 5.750

5.  LGCCT: A Light Gated and Crossed Complementation Transformer for Multimodal Speech Emotion Recognition.

Authors:  Feng Liu; Si-Yuan Shen; Zi-Wang Fu; Han-Yang Wang; Ai-Min Zhou; Jia-Yin Qi
Journal:  Entropy (Basel)       Date:  2022-07-21       Impact factor: 2.738

6.  Sentiment Analysis and Emotion Recognition from Speech Using Universal Speech Representations.

Authors:  Bagus Tris Atmaja; Akira Sasou
Journal:  Sensors (Basel)       Date:  2022-08-24       Impact factor: 3.847

7.  AFR-BERT: Attention-based mechanism feature relevance fusion multimodal sentiment analysis model.

Authors:  Ji Mingyu; Zhou Jiawei; Wei Ning
Journal:  PLoS One       Date:  2022-09-09       Impact factor: 3.752

8.  Dynamic Invariant-Specific Representation Fusion Network for Multimodal Sentiment Analysis.

Authors:  Jing He; Haonan Yanga; Changfan Zhang; Hongrun Chen; Yifu Xua
Journal:  Comput Intell Neurosci       Date:  2022-01-24
