Kangning Yang, Shiyu Fu, Yue Gu, Shuhong Chen, Xinyu Li, Ivan Marsic.
Abstract
Multimodal affective computing, learning to recognize and interpret human affect and subjective information from multiple data sources, remains challenging because: (i) it is hard to extract informative features that represent human affect from heterogeneous inputs; and (ii) current fusion strategies only combine modalities at abstract levels, ignoring time-dependent interactions between modalities. To address these issues, we introduce a hierarchical multimodal architecture with attention and word-level fusion that classifies utterance-level sentiment and emotion from text and audio data. Our model outperforms state-of-the-art approaches on published datasets, and we demonstrate that its synchronized attention over modalities offers visual interpretability.
Year: 2018 PMID: 30505068 PMCID: PMC6261375
Source DB: PubMed Journal: Proc Conf Assoc Comput Linguist Meet ISSN: 0736-587X
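The abstract describes word-level fusion of text and audio with attention but gives no implementation details. The sketch below is an illustrative assumption, not the authors' published model: the module names, feature dimensions, and scalar attention scoring are hypothetical, and it only shows how pre-aligned per-word text and audio features could be concatenated, attended over, and pooled into an utterance-level prediction.

```python
# Minimal sketch (assumption, not the paper's code) of word-level fusion with attention.
import torch
import torch.nn as nn

class WordLevelFusion(nn.Module):
    def __init__(self, text_dim=300, audio_dim=40, hidden_dim=128, num_classes=4):
        super().__init__()
        # Project the concatenated per-word text+audio features into a shared space.
        self.fuse = nn.Linear(text_dim + audio_dim, hidden_dim)
        # One attention score per word (illustrative scoring choice).
        self.attn = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, text_feats, audio_feats):
        # text_feats: (batch, words, text_dim); audio_feats: (batch, words, audio_dim),
        # assumed pre-aligned so each word has one audio feature vector.
        fused = torch.tanh(self.fuse(torch.cat([text_feats, audio_feats], dim=-1)))
        weights = torch.softmax(self.attn(fused), dim=1)   # attention over words
        utterance = (weights * fused).sum(dim=1)           # attention-weighted pooling
        return self.classifier(utterance), weights

# Usage with random features: a batch of 2 utterances, 10 words each.
model = WordLevelFusion()
logits, attn = model(torch.randn(2, 10, 300), torch.randn(2, 10, 40))
print(logits.shape, attn.shape)  # torch.Size([2, 4]) torch.Size([2, 10, 1])
```

The returned attention weights are what would support the visual interpretability mentioned in the abstract: plotting them per word shows which parts of the utterance the model relied on.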