
End-to-End Video Saliency Detection via a Deep Contextual Spatiotemporal Network.

Lina Wei, Shanshan Zhao, Omar Farouk Bourahla, Xi Li, Fei Wu, Yueting Zhuang, Junwei Han, Mingliang Xu.   

Abstract

As an interesting and important problem in computer vision, learning-based video saliency detection aims to discover the visually interesting regions in a video sequence. Capturing intra-frame and inter-frame information from several aspects (such as spatial context, motion information, temporal consistency across frames, and multiscale representation) is important for this task. A key issue is how to jointly model all of these factors within a unified, data-driven scheme in an end-to-end fashion. In this article, we propose an end-to-end spatiotemporal deep video saliency detection approach that captures spatial context and motion characteristics. Furthermore, it encodes the temporal consistency across consecutive frames with a convolutional long short-term memory (Conv-LSTM) model. In addition, the multiscale saliency properties of each frame are adaptively integrated for final saliency prediction in a collaborative feature-pyramid manner. Finally, the proposed approach unifies all of the aforementioned parts into a single end-to-end joint deep learning scheme. Experimental results demonstrate the effectiveness of our approach in comparison with state-of-the-art approaches.
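The Conv-LSTM component mentioned in the abstract can be illustrated with a minimal sketch. This is the generic ConvLSTM cell formulation (gates computed by convolutions over the concatenated input and hidden state), not the authors' exact implementation; the class name, kernel size, and channel counts below are all illustrative assumptions.

```python
import numpy as np

def conv2d_same(x, w, b):
    """Naive 2-D convolution with 'same' padding.
    x: (C_in, H, W), w: (C_out, C_in, k, k), b: (C_out,)."""
    c_out, _, k, _ = w.shape
    pad = k // 2
    h, wd = x.shape[1], x.shape[2]
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((c_out, h, wd))
    for co in range(c_out):
        for i in range(h):
            for j in range(wd):
                out[co, i, j] = np.sum(xp[:, i:i + k, j:j + k] * w[co]) + b[co]
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """Minimal ConvLSTM cell: the four gates (i, f, o, g) are produced by one
    convolution over the channel-wise concatenation [x; h], preserving the
    spatial layout of the feature maps across time steps."""
    def __init__(self, in_ch, hid_ch, k=3, seed=0):
        rng = np.random.default_rng(seed)
        # Single weight tensor emitting all four gates at once.
        self.w = rng.normal(0.0, 0.1, size=(4 * hid_ch, in_ch + hid_ch, k, k))
        self.b = np.zeros(4 * hid_ch)

    def step(self, x, h, c):
        z = conv2d_same(np.concatenate([x, h], axis=0), self.w, self.b)
        i, f, o, g = np.split(z, 4, axis=0)
        c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # memory update
        h_new = sigmoid(o) * np.tanh(c_new)                # hidden state
        return h_new, c_new

# Toy usage: propagate per-frame feature maps through time; a final 1-channel
# convolution over h would yield a saliency map per frame.
cell = ConvLSTMCell(in_ch=3, hid_ch=2)
h = np.zeros((2, 4, 4))
c = np.zeros((2, 4, 4))
for _ in range(3):                         # three dummy frames
    frame = np.ones((3, 4, 4))
    h, c = cell.step(frame, h, c)
```

Because the gates are convolutions rather than fully connected layers, the cell keeps the spatial structure of the frame features, which is what allows it to enforce temporal consistency on saliency maps rather than on flattened vectors.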

Year:  2021        PMID: 33017291     DOI: 10.1109/TNNLS.2020.2986823

Source DB:  PubMed          Journal:  IEEE Trans Neural Netw Learn Syst        ISSN: 2162-237X            Impact factor:   10.451


  1 in total

Review 1.  Revolution or Evolution? Technical Requirements and Considerations towards 6G Mobile Communications.

Authors:  Saddam Alraih; Ibraheem Shayea; Mehran Behjati; Rosdiadee Nordin; Nor Fadzilah Abdullah; Asma' Abu-Samah; Dalia Nandi
Journal:  Sensors (Basel)       Date:  2022-01-20       Impact factor: 3.576


Beijing Coyote Bioscience Co., Ltd. © 2022-2023.