Davide Marelli, Simone Bianco, Gianluigi Ciocca.
Abstract
This article presents a dataset of 4000 synthetic images portraying five 3D models from different viewpoints under varying lighting conditions. Depth of field and motion blur are also simulated to generate realistic images. For each object, 8 scenes with different combinations of lighting, depth of field, and motion blur are created, and images are taken from 100 points of view. The data also include the camera intrinsic and extrinsic calibration parameters for each image, as well as the ground-truth geometry of the 3D models. The images were rendered using Blender. The aim of this dataset is to allow evaluation and comparison of different solutions for 3D reconstruction of objects from a set of images taken under different realistic acquisition setups.
Keywords: 3D reconstruction; Blender; Realistically rendered images; Structure from Motion (SfM)
Year: 2019 PMID: 31993461 PMCID: PMC6971370 DOI: 10.1016/j.dib.2019.105041
Source DB: PubMed Journal: Data Brief ISSN: 2352-3409
Fig. 1 3D models used for synthetic data generation: (a) Statue [4]. (b) Empire Vase [5]. (c) Hydrant [6]. (d) Bicycle [7]. (e) Jeep [8].
List of available image sets for each object in the dataset.
| Set name / data subfolders | Lighting setup | Depth of field | Motion blur |
|---|---|---|---|
| fs | Sun, fixed position | No | No |
| fs-dof | Sun, fixed position | Yes, on all images | No |
| fs-mb | Sun, fixed position | No | Yes, on random images |
| fs-dof-mb | Sun, fixed position | Yes, on all images | Yes, on random images |
| ms | Sun, random position | No | No |
| ms-dof | Sun, random position | Yes, on all images | No |
| ms-mb | Sun, random position | No | Yes, on random images |
| ms-dof-mb | Sun, random position | Yes, on all images | Yes, on random images |
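The eight set names above follow a regular scheme: a lighting prefix (`fs` or `ms`) plus optional `-dof` and `-mb` suffixes. A minimal sketch of enumerating the sets and checking the total image count (the object folder names are assumptions for illustration, not the dataset's actual layout):

```python
from itertools import product

# Hypothetical object folder names; the actual dataset layout may differ.
OBJECTS = ["statue", "empire_vase", "hydrant", "bicycle", "jeep"]

def set_names():
    """Enumerate the 8 set names: lighting prefix ('fs' or 'ms') plus
    optional '-dof' and '-mb' suffixes, matching the table above."""
    names = []
    for light, dof, mb in product(["fs", "ms"], [False, True], [False, True]):
        names.append(light + ("-dof" if dof else "") + ("-mb" if mb else ""))
    return names

# 5 objects x 8 sets x 100 viewpoints = 4000 images
total_images = len(OBJECTS) * len(set_names()) * 100
```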
Samples of data in cameras.csv files.
| Image number | Position X | Position Y | Position Z | Rotation W | Rotation X | Rotation Y | Rotation Z | Look-at X | Look-at Y | Look-at Z | Depth of field | Motion blur | Sun azimuth | Sun inclination |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0001 | 7.007345 | −6.647154 | 3.905988 | 0.765872 | 0.507658 | 0.218024 | 0.328920 | −0.667915 | 0.634178 | −0.389497 | True | True | −2.212487 | 1.049574 |
| 0002 | 6.736142 | −7.548378 | 4.048048 | 0.777758 | 0.517985 | 0.197374 | 0.296358 | −0.614036 | 0.688747 | −0.385470 | True | False | −1.935121 | 0.965231 |
| 0003 | 6.036173 | −7.806846 | 3.986951 | 0.788677 | 0.523167 | 0.178511 | 0.269107 | −0.563151 | 0.729142 | −0.388861 | True | False | 0.000000 | 1.744416 |
| 0004 | 5.710015 | −8.167952 | 3.760057 | 0.788783 | 0.536442 | 0.168756 | 0.248139 | −0.532448 | 0.762523 | −0.367503 | True | False | −0.531763 | 1.270089 |
| 0005 | 5.202869 | −8.192430 | 4.075651 | 0.803880 | 0.525362 | 0.152563 | 0.233443 | −0.490569 | 0.773427 | −0.401438 | True | True | −0.476696 | 1.311158 |
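The position and rotation columns in cameras.csv describe each camera pose. As a hedged sketch, assuming the quaternion is stored as (w, x, y, z) and encodes the camera-to-world rotation, a world-to-camera extrinsic matrix can be assembled from one row like this (function names are illustrative):

```python
import numpy as np

def quat_to_rotmat(w, x, y, z):
    """Convert a quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    n = np.sqrt(w*w + x*x + y*y + z*z)
    w, x, y, z = w/n, x/n, y/n, z/n  # normalise to a unit quaternion
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def world_to_camera(position, quaternion):
    """Build a 3x4 world-to-camera extrinsic matrix [R^T | -R^T C]."""
    R = quat_to_rotmat(*quaternion)        # camera-to-world rotation (assumption)
    C = np.asarray(position, dtype=float)  # camera centre in world coordinates
    Rt = R.T
    return np.hstack([Rt, (-Rt @ C)[:, None]])

# Row 0001 from the table above
E = world_to_camera((7.007345, -6.647154, 3.905988),
                    (0.765872, 0.507658, 0.218024, 0.328920))
```

The rotation and translation conventions (camera-to-world vs. world-to-camera, quaternion component order) should be verified against the dataset documentation before use.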
Fig. 2 Example of data generation steps for the Jeep model: (a) 3D model. (b) Scene setup with main object, floor surface and lights. (c) Camera animation around the object. (d) Images rendering. (e) 3D model geometry and camera pose ground truth export.
Fig. 3 Sample of rendered images of the Jeep model: (a) image from the 'fs' set. (b) image from the 'fs-dof-mb' set. (c, d) images from the 'ms' set.
Specifications Table
| Subject | Computer Vision and Pattern Recognition |
| Specific subject area | 3D reconstruction from images |
| Type of data | Image |
| How data were acquired | Images of virtual scenes portraying the 3D models were rendered using Blender. The camera pose parameters were exported as plain-text files. |
| Data format | Raw |
| Parameters for data collection | 3D scenes with different subjects, varying lighting conditions, depth of field and motion blur; acquired by a moving camera from various poses. |
| Description of data collection | 4000 synthetic images of five different 3D scenes were rendered using Blender. Each scene was rendered with different camera poses, lighting conditions, depth of field, and motion blur. Camera calibration parameters, both intrinsic and extrinsic, were collected for each rendered image. |
| Data source location | Institution: University of Milano – Bicocca |
| Data accessibility | Data are available on Mendeley Data at |
| Related research article | S. Bianco, G. Ciocca, D. Marelli. Evaluating the Performance of Structure from Motion Pipelines. J. Imaging 2018, 4, 98. |
The data can be used to evaluate and compare 3D reconstructions of single objects from multiple images obtained using various techniques. The different lighting and acquisition conditions make the dataset suitable for testing the robustness of reconstruction pipelines under different image acquisition setups. The data are of interest to researchers who want to test and compare 3D reconstruction methods: they can be used to assess the performance of state-of-the-art methods as well as to evaluate and compare new techniques. The dataset allows measuring the impact of variations in illumination conditions, depth of field, and motion blur on reconstruction pipelines, and can be used to determine how a 3D reconstruction method behaves on images of objects that differ in size, geometry, and texture detail. The data include camera intrinsic and extrinsic calibration parameters that allow precise camera positioning, reconstruction estimation, and evaluation. This is highly relevant both for techniques that assume unknown camera poses (e.g. Structure from Motion) and for reconstruction pipelines that require known camera poses, such as Multi-View Stereo (MVS). The synthetic data generation process provides, along with the images, precise camera poses and geometry ground truth; this level of accuracy allows precise evaluation of the reconstructed 3D geometry.
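One way to use the ground-truth poses for evaluation is to compare estimated camera centres and orientations against the cameras.csv values. The sketch below shows two common error measures; the function names are illustrative and not part of any dataset tooling, and the quaternion order (w, x, y, z) is assumed to match the table above:

```python
import numpy as np

def position_error(c_est, c_gt):
    """Euclidean distance between estimated and ground-truth camera centres."""
    return float(np.linalg.norm(np.asarray(c_est, float) - np.asarray(c_gt, float)))

def rotation_error_deg(q_est, q_gt):
    """Angle (degrees) of the relative rotation between two quaternions (w, x, y, z)."""
    q_est = np.asarray(q_est, float) / np.linalg.norm(q_est)
    q_gt = np.asarray(q_gt, float) / np.linalg.norm(q_gt)
    dot = min(1.0, abs(float(np.dot(q_est, q_gt))))  # abs(): q and -q encode the same rotation
    return float(np.degrees(2.0 * np.arccos(dot)))
```

Note that reconstructions produced by techniques with unknown camera poses (e.g. SfM) are defined only up to a similarity transform, so the estimated poses must be aligned to the ground-truth coordinate frame before computing these errors.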