
Every Pixel Counts ++: Joint Learning of Geometry and Motion with 3D Holistic Understanding.

Chenxu Luo, Zhenheng Yang, Peng Wang, Yang Wang, Wei Xu, Ram Nevatia, Alan Yuille.

Abstract

Learning to estimate 3D geometry from a single frame and optical flow from consecutive frames by watching unlabeled videos via deep convolutional networks has made significant progress recently. Current state-of-the-art (SoTA) methods treat the two tasks independently, and one important assumption of existing depth estimation methods is that the scene contains no moving objects. In this paper, we propose to address the two tasks as a whole, i.e., to jointly understand per-pixel 3D geometry and motion. This eliminates the need for the static-scene assumption and enforces the inherent geometrical consistency during the learning process, yielding significantly improved results for both tasks. We call our method "Every Pixel Counts++" or "EPC++". Various loss terms are formulated to jointly supervise learning across geometrical cues, and an effective adaptive training strategy is proposed to achieve better performance. Comprehensive experiments were conducted on datasets with different scenes, including driving scenarios (KITTI 2012 and KITTI 2015 datasets), mixed outdoor/indoor scenes (Make3D), and synthetic animation (the MPI Sintel dataset). Performance on the five tasks of depth estimation, optical flow estimation, odometry, moving-object segmentation, and scene flow estimation shows that our approach outperforms other SoTA methods, demonstrating the effectiveness of each module of the proposed method.
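The geometric consistency the abstract refers to can be illustrated with a simple idea: given per-pixel depth and the camera's ego-motion, one can compute the "rigid" optical flow that static scene points must exhibit; pixels whose observed flow disagrees with this rigid flow are candidate moving objects. The NumPy sketch below is a minimal illustration of that consistency check only, not the authors' EPC++ implementation; the function names and the pixel-distance threshold are assumptions for illustration.

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    """Flow induced by camera motion alone, assuming a static scene.

    depth: (H, W) per-pixel depth in frame 1
    K:     (3, 3) camera intrinsics
    R, t:  rotation (3, 3) and translation (3,) from frame 1 to frame 2
    Returns (H, W, 2) flow in pixels.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W, dtype=np.float64),
                       np.arange(H, dtype=np.float64))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)            # homogeneous pixels
    # Back-project to 3D camera coordinates in frame 1.
    cam = (np.linalg.inv(K) @ pix[..., None])[..., 0] * depth[..., None]
    # Apply the camera motion, then re-project into frame 2.
    cam2 = (R @ cam[..., None])[..., 0] + t
    proj = (K @ cam2[..., None])[..., 0]
    uv2 = proj[..., :2] / proj[..., 2:3]                         # perspective divide
    return uv2 - np.stack([u, v], axis=-1)                       # flow = new - old

def moving_mask(full_flow, rigid, thresh=1.0):
    """Flag pixels whose observed flow deviates from the rigid flow."""
    return np.linalg.norm(full_flow - rigid, axis=-1) > thresh
```

For a purely translating camera with unit depth and identity intrinsics, the rigid flow is constant across the image; any pixel whose estimated full flow departs from it by more than the threshold would be segmented as moving.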

Year:  2019        PMID: 31352333     DOI: 10.1109/TPAMI.2019.2930258

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


Related articles: 5 in total

1.  Unsupervised Learning of Depth and Camera Pose with Feature Map Warping.

Authors:  Ente Guo; Zhifeng Chen; Yanlin Zhou; Dapeng Oliver Wu
Journal:  Sensors (Basel)       Date:  2021-01-30       Impact factor: 3.576

2.  Online supervised attention-based recurrent depth estimation from monocular video.

Authors:  Dmitrii Maslov; Ilya Makarov
Journal:  PeerJ Comput Sci       Date:  2020-11-23

3.  Monocular Depth Estimation with Self-Supervised Learning for Vineyard Unmanned Agricultural Vehicle.

Authors:  Xue-Zhi Cui; Quan Feng; Shu-Zhi Wang; Jian-Hua Zhang
Journal:  Sensors (Basel)       Date:  2022-01-18       Impact factor: 3.576

4.  Self-supervised recurrent depth estimation with attention mechanisms.

Authors:  Ilya Makarov; Maria Bakhanova; Sergey Nikolenko; Olga Gerasimova
Journal:  PeerJ Comput Sci       Date:  2022-01-31

5.  RAUM-VO: Rotational Adjusted Unsupervised Monocular Visual Odometry.

Authors:  Claudio Cimarelli; Hriday Bavle; Jose Luis Sanchez-Lopez; Holger Voos
Journal:  Sensors (Basel)       Date:  2022-03-30       Impact factor: 3.576

