
Referring Segmentation in Images and Videos With Cross-Modal Self-Attention Network.

Linwei Ye, Mrigank Rochan, Zhi Liu, Xiaoqin Zhang, Yang Wang.   

Abstract

We consider the problem of referring segmentation in images and videos with natural language. Given an input image (or video) and a referring expression, the goal is to segment the entity referred to by the expression in the image or video. In this paper, we propose a cross-modal self-attention (CMSA) module that utilizes fine details of individual words and of the input image or video, effectively capturing long-range dependencies between linguistic and visual features. Our model can adaptively focus on informative words in the referring expression and on important regions in the visual input. We further propose a gated multi-level fusion (GMLF) module to selectively integrate self-attentive cross-modal features corresponding to different levels of visual features. This module controls the flow of information across feature levels, weighing high-level and low-level semantic information associated with different attentive words. In addition, we introduce a cross-frame self-attention (CFSA) module that effectively integrates temporal information from consecutive frames, extending our method to referring segmentation in videos. Experiments on four referring image segmentation benchmark datasets and two actor-and-action video segmentation datasets consistently demonstrate that our proposed approach outperforms existing state-of-the-art methods.
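To make the core mechanism concrete, below is a minimal PyTorch sketch of the two image-side ideas named in the abstract: cross-modal self-attention over pixel-word pairs, and a gated fusion of multi-level features. All class names, tensor shapes, the 2-d coordinate encoding, and the word-averaging step are illustrative assumptions made for this sketch; it is not the authors' released implementation.

```python
import torch
import torch.nn as nn


def spatial_coords(B, H, W, device):
    # Normalized x/y coordinate maps tiled per batch element.
    # (A 2-d encoding for simplicity; richer encodings are possible.)
    ys = torch.linspace(-1, 1, H, device=device)
    xs = torch.linspace(-1, 1, W, device=device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    coords = torch.stack([gx, gy], dim=0)              # (2, H, W)
    return coords.unsqueeze(0).expand(B, -1, -1, -1)   # (B, 2, H, W)


class CrossModalSelfAttention(nn.Module):
    """Sketch of a CMSA block: embed every (pixel, word) pair jointly,
    then run standard self-attention over all H*W*L multimodal positions."""

    def __init__(self, vis_dim, word_dim, embed_dim):
        super().__init__()
        self.fuse = nn.Linear(vis_dim + word_dim + 2, embed_dim)
        self.q = nn.Linear(embed_dim, embed_dim)
        self.k = nn.Linear(embed_dim, embed_dim)
        self.v = nn.Linear(embed_dim, embed_dim)
        self.out = nn.Linear(embed_dim, vis_dim)

    def forward(self, vis, words):
        # vis: (B, Cv, H, W) visual features; words: (B, L, Cw) word features.
        # In practice this would run on downsampled feature maps, since the
        # number of pixel-word pairs N = H*W*L grows quickly.
        B, Cv, H, W = vis.shape
        L = words.shape[1]
        coords = spatial_coords(B, H, W, vis.device)          # (B, 2, H, W)
        v = torch.cat([vis, coords], dim=1)                   # (B, Cv+2, H, W)
        v = v.flatten(2).transpose(1, 2)                      # (B, HW, Cv+2)
        # Pair every spatial position with every word.
        v = v.unsqueeze(2).expand(-1, -1, L, -1)              # (B, HW, L, Cv+2)
        w = words.unsqueeze(1).expand(-1, H * W, -1, -1)      # (B, HW, L, Cw)
        x = self.fuse(torch.cat([v, w], dim=-1))              # (B, HW, L, D)
        x = x.reshape(B, H * W * L, -1)                       # (B, N, D)
        # Scaled dot-product self-attention over all multimodal positions,
        # capturing long-range pixel-word dependencies.
        q, k, val = self.q(x), self.k(x), self.v(x)
        attn = torch.softmax(q @ k.transpose(1, 2) / x.shape[-1] ** 0.5, dim=-1)
        x = attn @ val                                        # (B, N, D)
        # Collapse the word dimension (mean pooling is an assumption here)
        # and map back to a visual feature map.
        x = x.reshape(B, H * W, L, -1).mean(dim=2)            # (B, HW, D)
        return self.out(x).transpose(1, 2).reshape(B, Cv, H, W)


class GatedMultiLevelFusion(nn.Module):
    """Sketch of gated fusion: a learned sigmoid gate weighs each level's
    self-attentive features before they are summed."""

    def __init__(self, dims, out_dim):
        super().__init__()
        self.proj = nn.ModuleList([nn.Conv2d(d, out_dim, 1) for d in dims])
        self.gate = nn.ModuleList([nn.Conv2d(d, out_dim, 1) for d in dims])

    def forward(self, feats):
        # feats: list of per-level maps (B, Ci, H, W) at a common resolution.
        fused = 0
        for f, p, g in zip(feats, self.proj, self.gate):
            fused = fused + torch.sigmoid(g(f)) * torch.tanh(p(f))
        return fused
```

As a usage check, with `vis` of shape (2, 512, 20, 20) and `words` of shape (2, 10, 300), `CrossModalSelfAttention(512, 300, 256)` returns a (2, 512, 20, 20) map; applying one such block per visual level and passing the results to `GatedMultiLevelFusion` mirrors the CMSA-then-GMLF pipeline the abstract describes.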

Year:  2022        PMID: 33497325     DOI: 10.1109/TPAMI.2021.3054384

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


Related articles: 1 in total

1.  RGB-D based multi-modal deep learning for spacecraft and debris recognition.

Authors:  Nouar AlDahoul; Hezerul Abdul Karim; Mhd Adel Momo
Journal:  Sci Rep       Date:  2022-03-10       Impact factor: 4.379
