Dense 3D Object Reconstruction from a Single Depth View.

Bo Yang, Stefano Rosa, Andrew Markham, Niki Trigoni, Hongkai Wen.   

Abstract

In this paper, we propose a novel approach, 3D-RecGAN++, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks. Unlike existing work which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN++ only takes the voxel grid representation of a depth view of the object as input, and is able to generate the complete 3D occupancy grid with a high resolution of 256³ by recovering the occluded/missing regions. The key idea is to combine the generative capabilities of 3D encoder-decoder and the conditional adversarial networks framework, to infer accurate and fine-grained 3D structures of objects in high-dimensional voxel space. Extensive experiments on large synthetic datasets and real-world Kinect datasets show that the proposed 3D-RecGAN++ significantly outperforms the state of the art in single view 3D object reconstruction, and is able to reconstruct unseen types of objects.
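As context for the input representation the abstract describes, the sketch below shows how a single depth view can be converted into a binary voxel occupancy grid. This is an illustration only, not the authors' code: the pinhole intrinsics, the normalization into a unit cube, the `depth_to_occupancy` helper name, and the reduced 64³ default resolution (the paper outputs 256³) are all assumptions for demonstration.

```python
import numpy as np

def depth_to_occupancy(depth, fx, fy, cx, cy, grid_size=64):
    """Back-project a depth image into a binary voxel occupancy grid.

    Illustrative sketch: assumes a simple pinhole camera model; the
    paper's actual preprocessing and 256^3 resolution may differ.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                       # keep pixels with a depth reading
    z = depth[valid]
    x = (us[valid] - cx) * z / fx           # pinhole back-projection
    y = (vs[valid] - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    if pts.size == 0:
        return np.zeros((grid_size,) * 3, dtype=np.uint8)
    # Normalize the point cloud into a unit cube, then quantize to voxels.
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    scale = (maxs - mins).max() + 1e-9
    idx = ((pts - mins) / scale * (grid_size - 1)).astype(int)
    grid = np.zeros((grid_size,) * 3, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Example: a flat 8x8 depth map at 2 m produces a sparse occupied slab.
grid = depth_to_occupancy(np.full((8, 8), 2.0), fx=10, fy=10, cx=4, cy=4,
                          grid_size=16)
```

A grid like this (the visible surface plus empty/occluded space) is the kind of partial volumetric input that 3D-RecGAN++ completes into a full occupancy grid.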

Year:  2018        PMID: 30183619     DOI: 10.1109/TPAMI.2018.2868195

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


  1 in total

Review 1.  Recent Advancements in Learning Algorithms for Point Clouds: An Updated Overview.

Authors:  Elena Camuffo; Daniele Mari; Simone Milani
Journal:  Sensors (Basel)       Date:  2022-02-10       Impact factor: 3.576

