Junzhi Guan, Francis Deboeverie, Maarten Slembrouck, Dirk van Haerenborgh, Dimitri van Cauwelaert, Peter Veelaert, Wilfried Philips.
Abstract
In this paper, we propose a novel extrinsic calibration method for camera networks that uses a sphere as the calibration object. First, we propose an easy and accurate method to estimate the 3D position of the sphere center w.r.t. the local camera coordinate system. Then, we use orthogonal Procrustes analysis to pairwise estimate the initial relative extrinsic parameters of the cameras from these 3D position estimates. Finally, an optimization routine jointly refines the extrinsic parameters of all cameras. Compared to existing sphere-based 3D position estimators, which need to trace and analyse the outline of the sphere's projection in the image, the proposed method requires only very simple image processing: estimating the area and the center of mass of the sphere's projection. Our results demonstrate that we obtain a more accurate estimate of the extrinsic parameters than other sphere-based methods. While existing state-of-the-art calibration methods use point-like features and epipolar geometry, the proposed method uses the sphere-based 3D position estimates, resulting in simpler computations and a more flexible and accurate calibration method. Experimental results show that the proposed approach is accurate, robust, flexible and easy to use.
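The pairwise step described in the abstract can be sketched with classical orthogonal Procrustes (Kabsch) alignment: given the sphere-center positions expressed in two cameras' local coordinate systems, an SVD of the cross-covariance yields the relative rotation and translation. This is a generic sketch under our own naming, not the paper's exact routine.

```python
import numpy as np

def procrustes_extrinsics(P, Q):
    """Rigid transform (R, t) with Q ≈ R @ P + t, via orthogonal Procrustes.

    P, Q: (3, N) sphere-center positions in the coordinate systems of
    camera 1 and camera 2, observed at the same time instants.
    """
    mp = P.mean(axis=1, keepdims=True)
    mq = Q.mean(axis=1, keepdims=True)
    H = (Q - mq) @ (P - mp).T                        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # guard against reflections
    R = U @ D @ Vt
    t = mq - R @ mp
    return R, t
```

The determinant guard ensures a proper rotation (det R = +1) even when noise would otherwise make the SVD solution a reflection.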
Keywords: Camera network; extrinsic calibration; orthogonal procrustes; sphere-based calibration
Year: 2015 PMID: 26247950 PMCID: PMC4570356 DOI: 10.3390/s150818985
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Projection of a sphere.
Figure 2. Images used for calibration. (a) Original captured image; (b) binary image after simple thresholding.
Figure 3. Mean distance error comparison.
Figure 4. Scale error comparison for different numbers of training samples.
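The "very simple image processing" the abstract describes (area and center of mass of the sphere's projection, as in the thresholded image of Figure 2) can be sketched as follows. Under a pinhole model with focal length f (in pixels) and a sphere of known radius r, a distant sphere projects approximately to a circle of area A ≈ π(f·r/Z)², so the depth is Z ≈ f·r·√(π/A), and the blob centroid fixes the viewing ray. The function name and the small-sphere (circular-projection) approximation are ours, not the paper's exact formulation.

```python
import numpy as np

def sphere_center_from_mask(mask, f, cx, cy, radius):
    """Estimate the sphere center in camera coordinates from a binary mask.

    mask: 2D boolean array (sphere pixels True); f: focal length in pixels;
    (cx, cy): principal point; radius: physical sphere radius.
    Small-sphere approximation: the projection is treated as a circle,
    ignoring the slight ellipticity of a true sphere projection.
    """
    ys, xs = np.nonzero(mask)
    area = xs.size                              # projected area in pixels
    u, v = xs.mean(), ys.mean()                 # center of mass of the blob
    z = f * radius * np.sqrt(np.pi / area)      # depth from projected area
    # Back-project the centroid along its viewing ray to depth z.
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.array([x, y, z])
```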
Accuracy comparison between our method and the method of Agrawal et al. The first column represents the number of training samples. The numbers are listed as “a/s” where a is the average over 100 experiments and s is the standard deviation.
| # Samples | Ours | Agrawal | Ours | Agrawal | Ours | Agrawal |
|---|---|---|---|---|---|---|
| 06 | 6.2/3.6 | 12.3/3.0 | 5.1/2.3 | 8.5/2.2 | 4.2/3.4 | 9.8/2.5 |
| 09 | 5.3/1.1 | 11.6/2.0 | 4.6/0.6 | 8.3/1.6 | 3.4/1.0 | 9.2/1.5 |
| 12 | 5.2/0.8 | 10.9/1.5 | 4.6/0.4 | 8.1/1.3 | 3.2/0.6 | 8.6/1.0 |
| 15 | 5.0/0.5 | 11.0/1.3 | 4.5/0.3 | 8.3/1.2 | 3.0/0.4 | 8.5/0.9 |
| 18 | 5.0/0.6 | 10.7/1.0 | 4.5/0.3 | 8.1/0.9 | 3.0/0.4 | 8.4/0.6 |
| 21 | 4.9/0.4 | 10.7/1.0 | 4.5/0.2 | 8.1/0.9 | 2.9/0.3 | 8.3/0.7 |
| 24 | 4.9/0.3 | 10.6/0.8 | 4.5/0.2 | 8.1/0.8 | 2.8/0.2 | 8.2/0.5 |
| 27 | 4.8/0.3 | 10.3/0.6 | 4.4/0.2 | 7.9/0.6 | 2.7/0.2 | 8.1/0.4 |
| 30 | 4.8/0.2 | 10.4/0.5 | 4.5/0.1 | 8.0/0.5 | 2.7/0.1 | 8.1/0.3 |
| 33 | 4.8/0.2 | 10.4/0.5 | 4.4/0.1 | 8.0/0.5 | 2.7/0.1 | 8.1/0.3 |
| 36 | 4.8/0.1 | 10.4/0.4 | 4.4/0.1 | 8.0/0.4 | 2.7/0.1 | 8.1/0.3 |
| 39 | 4.8/0.1 | 10.3/0.3 | 4.4/0.1 | 8.0/0.3 | 2.6/0.1 | 8.0/0.2 |
| 42 | 4.7/0.1 | 10.3/0.2 | 4.4/0.1 | 8.0/0.2 | 2.6/0.1 | 8.0/0.1 |
Projection error (pixel). The first column represents the number of training samples, and the first row gives the methods used for comparison. The refinement method uses the result of orthogonal Procrustes analysis as the initial guess and then applies the proposed refinement method to improve the accuracy. The numbers are listed as “a/s”, where a is the average over 100 experiments and s is the standard deviation.
| # Samples | Rank-4 | Orthogonal Procrustes | Refinement | Svoboda |
|---|---|---|---|---|
| 03 | N/A | 6.4/1.8 | 5.3/1.4 | N/A |
| 04 | 7.9/5.4 | 5.8/1.4 | 3.5/0.7 | N/A |
| 05 | 6.5/2.3 | 5.5/1.4 | 3.2/0.3 | N/A |
| 06 | 6.4/2.3 | 5.5/1.5 | 3.2/0.2 | N/A |
| 07 | 6.3/1.8 | 5.6/1.4 | 3.1/0.1 | N/A |
| 08 | 6.0/1.6 | 5.3/1.3 | 3.1/0.1 | 4.1/2.0 |
| 09 | 6.2/1.8 | 5.4/1.3 | 3.1/0.1 | 3.6/0.6 |
| 12 | 6.4/1.7 | 5.2/0.7 | 3.1/0.0 | 3.3/0.3 |
| 15 | 6.2/1.3 | 5.1/0.6 | 3.1/0.0 | 3.3/0.2 |
| 18 | 6.2/1.0 | 5.0/0.5 | 3.1/0.0 | 3.3/0.1 |
| 21 | 6.0/0.8 | 4.9/0.3 | 3.1/0.0 | 3.2/0.1 |
| 24 | 6.0/0.7 | 4.9/0.3 | 3.1/0.0 | 3.2/0.1 |
| 27 | 6.0/0.5 | 4.8/0.2 | 3.1/0.0 | 3.2/0.1 |
| 30 | 6.0/0.5 | 4.8/0.2 | 3.1/0.0 | 3.2/0.1 |
Triangulation error (cm). The first column represents the number of training samples, and the first row gives the methods used for comparison. The refinement method uses the result of orthogonal Procrustes analysis as the initial guess and then applies the proposed refinement method to improve the accuracy. The numbers are listed as “a/s”, where a is the average over 100 experiments and s is the standard deviation.
| # Samples | Rank-4 | Orthogonal Procrustes | Refinement | Svoboda |
|---|---|---|---|---|
| 03 | N/A | 4.8/0.8 | 4.5/0.7 | N/A |
| 04 | 5.5/1.6 | 4.7/0.7 | 3.5/0.4 | N/A |
| 05 | 5.1/0.9 | 4.5/0.6 | 3.4/0.2 | N/A |
| 06 | 5.2/1.1 | 4.6/0.7 | 3.4/0.1 | N/A |
| 07 | 5.2/1.0 | 4.7/0.7 | 3.3/0.1 | N/A |
| 08 | 5.0/0.8 | 4.5/0.6 | 3.3/0.0 | 4.2/1.6 |
| 09 | 5.2/0.9 | 4.6/0.7 | 3.3/0.1 | 3.8/0.6 |
| 12 | 5.3/0.9 | 4.5/0.4 | 3.3/0.0 | 3.6/0.3 |
| 15 | 5.3/0.7 | 4.5/0.4 | 3.3/0.0 | 3.6/0.2 |
| 18 | 5.3/0.6 | 4.5/0.3 | 3.3/0.0 | 3.5/0.1 |
| 21 | 5.2/0.5 | 4.5/0.2 | 3.3/0.0 | 3.5/0.1 |
| 24 | 5.2/0.4 | 4.5/0.2 | 3.3/0.0 | 3.5/0.1 |
| 27 | 5.3/0.4 | 4.5/0.2 | 3.3/0.0 | 3.5/0.1 |
| 30 | 5.3/0.3 | 4.5/0.2 | 3.3/0.0 | 3.4/0.1 |
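The triangulation errors tabulated above measure the 3D distance between triangulated points and reference positions. A standard linear (DLT) two-view triangulation can be sketched as follows; this is a generic textbook formulation, not necessarily the implementation used in the paper.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices; uv1, uv2: normalized or pixel
    image coordinates (2,) of the same 3D point in each view.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: u * (P[2] @ X) = P[0] @ X, etc.
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                   # null vector = homogeneous solution
    return X[:3] / X[3]          # dehomogenize
```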
Reprojection error (pixel). The first column represents the number of training samples, and the first row gives the methods used for comparison. The refinement method uses the result of orthogonal Procrustes analysis as the initial guess and then applies the proposed refinement method to improve the accuracy. The numbers are listed as “a/s”, where a is the average over 100 experiments and s is the standard deviation.
| # Samples | Rank-4 | Orthogonal Procrustes | Refinement | Svoboda |
|---|---|---|---|---|
| 03 | N/A | 4.5/1.7 | 3.3/1.6 | N/A |
| 04 | 6.1/5.5 | 3.9/1.3 | 1.2/0.8 | N/A |
| 05 | 4.8/2.5 | 3.6/1.3 | 0.7/0.4 | N/A |
| 06 | 4.6/2.3 | 3.6/1.4 | 0.7/0.3 | N/A |
| 07 | 4.3/1.8 | 3.6/1.3 | 0.6/0.2 | N/A |
| 08 | 4.0/1.6 | 3.3/1.3 | 0.5/0.2 | 0.8/1.1 |
| 09 | 4.3/1.7 | 3.4/1.1 | 0.5/0.2 | 0.5/0.2 |
| 12 | 4.4/1.5 | 3.2/0.6 | 0.5/0.1 | 0.4/0.1 |
| 15 | 4.1/1.1 | 3.0/0.5 | 0.5/0.1 | 0.4/0.1 |
| 18 | 4.1/0.8 | 3.0/0.4 | 0.5/0.1 | 0.4/0.1 |
| 21 | 3.9/0.6 | 2.8/0.2 | 0.5/0.1 | 0.4/0.1 |
| 24 | 3.9/0.5 | 2.8/0.2 | 0.4/0.1 | 0.3/0.0 |
| 27 | 3.9/0.4 | 2.8/0.2 | 0.4/0.1 | 0.3/0.0 |
| 30 | 3.8/0.3 | 2.7/0.1 | 0.4/0.0 | 0.3/0.0 |
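The reprojection errors tabulated above follow the standard definition: project the reference 3D points through the estimated calibration and measure the mean pixel distance to the detected image points. A minimal sketch of that metric (our own naming, pinhole model without lens distortion):

```python
import numpy as np

def reprojection_error(K, R, t, X, uv):
    """Mean pixel distance between projected 3D points and detections.

    K: 3x3 intrinsic matrix; R, t: extrinsics mapping world to camera;
    X: (3, N) world points; uv: (2, N) detected image points.
    """
    Xc = R @ X + t.reshape(3, 1)    # world -> camera coordinates
    p = K @ Xc                       # project through intrinsics
    proj = p[:2] / p[2]              # perspective division
    return float(np.linalg.norm(proj - uv, axis=0).mean())
```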
Calibration accuracy of the network with only two cameras. The rows give the mean and the standard deviation over all 40 test samples, respectively.

|   |   |   |   |
|---|---|---|---|
| mean | 3.2 | 3.5 | 0.4 |
| std | 2.4 | 1.6 | 0.4 |