
Integrating Multimodal Information in Large Pretrained Transformers.

Wasifur Rahman, Md Kamrul Hasan, Sangwu Lee, Amir Zadeh, Chengfeng Mao, Louis-Philippe Morency, Ehsan Hoque.

Abstract

Recent Transformer-based contextual word representations, including BERT and XLNet, have shown state-of-the-art performance in multiple disciplines within NLP. Fine-tuning the trained contextual models on task-specific datasets has been the key to achieving superior performance downstream. While fine-tuning these pre-trained models is straightforward for lexical applications (applications with only language modality), it is not trivial for multimodal language (a growing area in NLP focused on modeling face-to-face communication). Pre-trained models do not have the necessary components to accept the two extra modalities of vision and acoustics. In this paper, we propose an attachment to BERT and XLNet called Multimodal Adaptation Gate (MAG). MAG allows BERT and XLNet to accept multimodal nonverbal data during fine-tuning. It does so by generating a shift to the internal representation of BERT and XLNet; a shift that is conditioned on the visual and acoustic modalities. In our experiments, we study the commonly used CMU-MOSI and CMU-MOSEI datasets for multimodal sentiment analysis. Fine-tuning MAG-BERT and MAG-XLNet significantly boosts the sentiment analysis performance over previous baselines as well as language-only fine-tuning of BERT and XLNet. On the CMU-MOSI dataset, MAG-XLNet achieves human-level multimodal sentiment analysis performance for the first time in the NLP community.
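As the abstract describes, MAG generates a displacement of each word's internal transformer representation, conditioned on the accompanying visual and acoustic features, and applies that shift during fine-tuning. The PyTorch sketch below illustrates one way this gating-and-shift mechanism can be realized; the `MAG` module name, the feature dimensions, and the scaling factor `beta` are illustrative assumptions based on the abstract's description, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MAG(nn.Module):
    """Multimodal Adaptation Gate (illustrative sketch).

    Shifts a transformer's word representation z using visual and
    acoustic features, as described in the abstract. Feature sizes
    below are assumptions, not the paper's exact configuration.
    """
    def __init__(self, text_dim=768, visual_dim=47, acoustic_dim=74,
                 beta=1.0, dropout=0.1):
        super().__init__()
        self.W_gv = nn.Linear(text_dim + visual_dim, text_dim)    # visual gate
        self.W_ga = nn.Linear(text_dim + acoustic_dim, text_dim)  # acoustic gate
        self.W_v = nn.Linear(visual_dim, text_dim)    # visual displacement
        self.W_a = nn.Linear(acoustic_dim, text_dim)  # acoustic displacement
        self.beta = beta
        self.norm = nn.LayerNorm(text_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, z, visual, acoustic):
        # Gates conditioned jointly on the word vector and each
        # nonverbal modality decide how much of that modality to admit.
        g_v = F.relu(self.W_gv(torch.cat([z, visual], dim=-1)))
        g_a = F.relu(self.W_ga(torch.cat([z, acoustic], dim=-1)))
        # Nonverbal displacement vector built from both modalities.
        h = g_v * self.W_v(visual) + g_a * self.W_a(acoustic)
        # Cap the shift's magnitude relative to the lexical vector so
        # the nonverbal signal cannot dominate the pretrained embedding.
        eps = 1e-6
        alpha = torch.clamp(
            z.norm(dim=-1, keepdim=True)
            / (h.norm(dim=-1, keepdim=True) + eps) * self.beta,
            max=1.0,
        )
        return self.dropout(self.norm(z + alpha * h))
```

Bounding the displacement relative to the norm of the lexical vector is what lets such an attachment be inserted into BERT or XLNet and fine-tuned without destabilizing the pretrained language representations.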


Year:  2020        PMID: 33782629      PMCID: PMC8005298          DOI: 10.18653/v1/2020.acl-main.214

Source DB:  PubMed          Journal:  Proc Conf Assoc Comput Linguist Meet        ISSN: 0736-587X


References (4 in total)

1.  Multimodal Transformer for Unaligned Multimodal Language Sequences.

Authors:  Yao-Hung Hubert Tsai; Shaojie Bai; Paul Pu Liang; J Zico Kolter; Louis-Philippe Morency; Ruslan Salakhutdinov
Journal:  Proc Conf Assoc Comput Linguist Meet       Date:  2019-07

2.  Conversational Memory Network for Emotion Recognition in Dyadic Dialogue Videos.

Authors:  Devamanyu Hazarika; Soujanya Poria; Amir Zadeh; Erik Cambria; Louis-Philippe Morency; Roger Zimmermann
Journal:  Proc Conf       Date:  2018-06

3.  Multi-attention Recurrent Network for Human Communication Comprehension.

Authors:  Amir Zadeh; Paul Pu Liang; Soujanya Poria; Prateek Vij; Erik Cambria; Louis-Philippe Morency
Journal:  Proc Conf AAAI Artif Intell       Date:  2018-02

4.  Words Can Shift: Dynamically Adjusting Word Representations Using Nonverbal Behaviors.

Authors:  Yansen Wang; Ying Shen; Zhun Liu; Paul Pu Liang; Amir Zadeh; Louis-Philippe Morency
Journal:  Proc Conf AAAI Artif Intell       Date:  2019-07
Cited by (4 in total)

1.  Multimodal Sentiment Analysis Based on Cross-Modal Attention and Gated Cyclic Hierarchical Fusion Networks.

Authors:  Zhibang Quan; Tao Sun; Mengli Su; Jishu Wei
Journal:  Comput Intell Neurosci       Date:  2022-08-09

2.  Multimodal Deep Learning Models for Detecting Dementia From Speech and Transcripts.

Authors:  Loukas Ilias; Dimitris Askounis
Journal:  Front Aging Neurosci       Date:  2022-03-17       Impact factor: 5.750

3.  AFR-BERT: Attention-based mechanism feature relevance fusion multimodal sentiment analysis model.

Authors:  Ji Mingyu; Zhou Jiawei; Wei Ning
Journal:  PLoS One       Date:  2022-09-09       Impact factor: 3.752

4.  Emotion Recognition on Edge Devices: Training and Deployment.

Authors:  Vlad Pandelea; Edoardo Ragusa; Tommaso Apicella; Paolo Gastaldo; Erik Cambria
Journal:  Sensors (Basel)       Date:  2021-06-30       Impact factor: 3.576

