
Cross-Modal Scene Networks.

Yusuf Aytar, Lluis Castrejon, Carl Vondrick, Hamed Pirsiavash, Antonio Torralba.   

Abstract

People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize scenes well, they also learn an intermediate representation not aligned across modalities, which is undesirable for cross-modal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality.
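The core idea — separate per-modality encoders feeding a shared, modality-agnostic representation, with a regularizer that pulls the two feature distributions together — can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the encoders, dimensions, and the simple mean-matching alignment penalty are illustrative assumptions standing in for the paper's regularization methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two modalities (e.g. photos and sketches) mapped by
# separate linear+ReLU encoders into one shared feature space, topped by a
# single classifier that both modalities must pass through.
D_A, D_B, D_SHARED, N_CLASSES = 32, 48, 16, 5
W_a = rng.normal(scale=0.1, size=(D_A, D_SHARED))        # encoder, modality A
W_b = rng.normal(scale=0.1, size=(D_B, D_SHARED))        # encoder, modality B
W_c = rng.normal(scale=0.1, size=(D_SHARED, N_CLASSES))  # shared classifier

def encode(x, W):
    return np.maximum(x @ W, 0.0)  # ReLU features in the shared space

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def loss(x_a, y_a, x_b, y_b, lam=1.0):
    f_a, f_b = encode(x_a, W_a), encode(x_b, W_b)
    # Scene-classification loss for both modalities through the shared head.
    p_a, p_b = softmax(f_a @ W_c), softmax(f_b @ W_c)
    ce = -np.mean(np.log(p_a[np.arange(len(y_a)), y_a] + 1e-9)) \
         - np.mean(np.log(p_b[np.arange(len(y_b)), y_b] + 1e-9))
    # Alignment regularizer (illustrative): match the mean activation of the
    # two modalities so shared units respond independently of the modality.
    align = np.sum((f_a.mean(axis=0) - f_b.mean(axis=0)) ** 2)
    return ce + lam * align

x_a = rng.normal(size=(8, D_A)); y_a = rng.integers(0, N_CLASSES, 8)
x_b = rng.normal(size=(8, D_B)); y_b = rng.integers(0, N_CLASSES, 8)
print(loss(x_a, y_a, x_b, y_b))
```

With features aligned this way, a nearest-neighbor search in the shared space supports the cross-modal retrieval application the abstract describes.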

Year: 2017        PMID: 28922114        DOI: 10.1109/TPAMI.2017.2753232

Source DB: PubMed        Journal: IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828        Impact factor: 6.226


  1 in total

1.  Dynamic Invariant-Specific Representation Fusion Network for Multimodal Sentiment Analysis.

Authors:  Jing He; Haonan Yang; Changfan Zhang; Hongrun Chen; Yifu Xu
Journal:  Comput Intell Neurosci       Date:  2022-01-24
