
Virtual mirror rendering with stationary RGB-D cameras and stored 3-D background.

Ju Shen, Po-Chang Su, Sen-Ching Samson Cheung, Jian Zhao.

Abstract

Mirrors are indispensable objects in our lives. The capability of simulating a mirror on a computer display, augmented with virtual scenes and objects, opens the door to many interesting and useful applications, from fashion design to medical interventions. Realistic simulation of a mirror is challenging, as it requires accurate viewpoint tracking and rendering, wide-angle viewing of the environment, and real-time performance to provide immediate visual feedback. In this paper, we propose a virtual mirror rendering system using a network of commodity structured-light RGB-D cameras. The depth information provided by the RGB-D cameras is used to track the viewpoint and render the scene from different perspectives. Missing and erroneous depth measurements are common problems with structured-light cameras. A novel depth denoising and completion algorithm is proposed in which the noise removal and interpolation procedures are guided by the foreground/background label at each pixel. This label is estimated using a probabilistic graphical model that considers color, depth, background modeling, depth noise modeling, and spatial constraints. The wide viewing angle of the mirror system is realized by combining the dynamic scene, captured by the static camera network, with a 3-D background model created off-line using a color-depth sequence captured by a movable RGB-D camera. To ensure a real-time response, a scalable client-and-server architecture is used in which the 3-D point cloud processing, viewpoint estimation, and mirror image rendering are all done on the client side; the mirror image and the viewpoint estimate are then sent to the server for final mirror view synthesis and viewpoint refinement. Experimental results demonstrate the accuracy and effectiveness of each component and of the entire system.
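A virtual mirror renders the scene from the viewer's position reflected across the display plane, which is why accurate viewpoint tracking matters. The sketch below illustrates only that standard reflection geometry (the plane normal `n`, offset `d`, and viewer coordinates are illustrative assumptions, not values or code from the paper):

```python
import numpy as np

def reflect_point(p, n, d):
    """Reflect point p across the plane n·x + d = 0.

    n is normalized internally; the reflected point serves as the
    virtual camera center for rendering the mirror image.
    """
    n = n / np.linalg.norm(n)
    return p - 2.0 * (np.dot(n, p) + d) * n

# Assume the display lies in the plane z = 0 and the tracked viewer
# (e.g., from the RGB-D viewpoint estimate) stands 1.5 m in front of it.
viewer = np.array([0.2, 0.1, 1.5])
virtual_cam = reflect_point(viewer, np.array([0.0, 0.0, 1.0]), 0.0)
# virtual_cam is the mirrored viewpoint at (0.2, 0.1, -1.5)
```

Rendering the fused point cloud (dynamic foreground plus stored 3-D background) from `virtual_cam` yields the mirror view for the current frame.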

Year:  2013        PMID: 23782808     DOI: 10.1109/TIP.2013.2268941

Source DB:  PubMed          Journal:  IEEE Trans Image Process        ISSN: 1057-7149            Impact factor:   10.856


  3 in total

1.  A comparative study of registration methods for RGB-D video of static scenes.

Authors:  Vicente Morell-Gimenez; Marcelo Saval-Calvo; Jorge Azorin-Lopez; Jose Garcia-Rodriguez; Miguel Cazorla; Sergio Orts-Escolano; Andres Fuster-Guillo
Journal:  Sensors (Basel)       Date:  2014-05-15       Impact factor: 3.576

2.  A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks.

Authors:  Po-Chang Su; Ju Shen; Wanxin Xu; Sen-Ching S Cheung; Ying Luo
Journal:  Sensors (Basel)       Date:  2018-01-15       Impact factor: 3.576

3.  Relative Pose Based Redundancy Removal: Collaborative RGB-D Data Transmission in Mobile Visual Sensor Networks.

Authors:  Xiaoqin Wang; Y Ahmet Şekercioğlu; Tom Drummond; Vincent Frémont; Enrico Natalizio; Isabelle Fantoni
Journal:  Sensors (Basel)       Date:  2018-07-26       Impact factor: 3.576

