
A Novel Camera Calibration Method Based on Polar Coordinate.

Shaoyan Gai, Feipeng Da, Xu Fang.

Abstract

A novel calibration method based on polar coordinates is proposed. The world coordinates are expressed in polar form and converted to rectangular coordinates during the calibration process. First, the calibration points are obtained in polar coordinates. By the transformation between polar and rectangular coordinates, the points are converted to rectangular form. Then, the points are matched with the corresponding image coordinates. Finally, the parameters are obtained by objective-function optimization. With the proposed method, the relationships between objects and cameras are expressed easily in polar coordinates. It is suitable for multi-camera calibration: cameras can be calibrated with fewer points, and the calibration images can be positioned according to the locations of the cameras. The experimental results demonstrate that the proposed method is an efficient calibration method by which cameras are calibrated conveniently with high accuracy.


Year:  2016        PMID: 27798651      PMCID: PMC5087901          DOI: 10.1371/journal.pone.0165487

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

With the development of photo-electronics, image processing, information sensing, signal processing, and electronics, digital cameras are becoming increasingly relevant in science and technology [1-14]. Camera calibration is an essential step in computer vision, image processing, and optical measurement [1-14]; it makes it possible to obtain metric information about an object from its projections on the image plane. The accuracy of a vision system is very sensitive to the camera parameters [15-18]: a tiny error in estimating them may adversely affect the performance of the whole system. Camera calibration has been widely studied and falls into several categories [18-19]. One category is the coplanar approaches. These methods rely on calibration points that lie on a planar template at a single depth. Such approaches are either computationally complex or fail to provide solutions for one or more camera parameters, e.g., the image center, the scale factor, or the lens distortion coefficients [19]. World-reference-based calibration methods are classical approaches, which require a set of calibration points with two-dimensional image coordinates and corresponding three-dimensional world coordinates [20-22]. Zhang [23] proposed a flexible calibration method that only requires a few images of a two-dimensional calibration plate taken from different orientations. Based on this method, a set of optimal conditions has been proposed to improve calibration accuracy [24]. The disadvantage of these methods is that a complex, high-precision calibration template is needed to achieve precise 3D measurements. One-dimensional calibration has also been studied [25], in which a 1D calibration object is placed at several positions and in different orientations. Many researchers focus on camera modeling and analysis, which is very important for stereo vision and display [26-31].
Yang and Song model two kinds of camera arrays, a parallel one and a converged one, and analyze the difference between them in horizontal and vertical parallax [26]. The conclusion is that converged arrays are more suitable for short distances; the work can serve as guidance for camera-array applications. It is also pointed out that future work on camera arrays should focus on camera calibration and visual stereo-video evaluation. In reference [27], two types of stereo cameras, parallel and toed-in, are studied, and objective shooting-quality evaluation criteria over short distance are proposed. Furthermore, three shooting conditions (macro, short-distance, and long-distance shooting) are discussed in reference [28]; the shooting quality of stereo cameras can be evaluated effectively by the proposed approach. In reference [29], a full-reference metric for quality assessment of stereoscopic images is proposed, based on the binocular difference channel and the binocular summation channel. Reference [31] uses point correspondences between the model plane and the image to calculate the homography and the distortion coefficients; the calibration process is non-iterative with no risk of local minima, and it is a one-shot algorithm that can be solved by a linear least-squares technique. Generally, the 3D world coordinates and the 2D image-plane coordinates are related through calibration objects, which are usually described in Cartesian coordinates. But some patterns, such as the rose curve and the Archimedes spiral, are complex and cannot be conveniently represented in a rectangular coordinate system, so methods based on rectangular coordinates generally cannot use them. Yet the characteristics of these curves are distinctive, which makes them well suited to feature-point extraction and matching. Thus, in this paper, polar coordinates are introduced to indicate the locations of the calibration points.
The coordinates of the points are obtained in polar coordinates and then converted into the rectangular coordinate system, so the points can be expressed easily and the calibration can be applied without increasing system complexity. Polar-coordinate images are suitable for multiple-camera calibration. The calibration object for multiple cameras has always been a problem: a 3D calibration object may be partly hidden in one of the camera images, and two-dimensional calibration plates must be designed large and complex to cover the fields of view of cameras in multiple locations. Two-dimensional flexible spliced calibration boards have been studied [32, 33], and one-dimensional calibration objects have been studied [34, 35, 36] to solve these problems; however, these solutions impose movement restrictions. A theodolite can be used to obtain precise relative positions [37], but additional equipment and complex operations are required. In this paper, a complete calibration solution based on polar coordinates is presented. The method takes a novel view of calibration, namely the polar-coordinate one. The relationships in a polar-coordinate image are relatively simple, so it can be easily used in a multiple-camera calibration plate. Furthermore, the layout of the polar-coordinate calibration plate can be designed according to the camera positions. Thus the manufacturing difficulty of the calibration objects is reduced and the complexity of the calibration is simplified, which makes the method useful in practice. Unlike traditional methods based on the rectangular coordinate system, the calibration board is designed in polar coordinates, and the difficulties of multiple-camera calibration are overcome. In the second section, the camera model and the calibration method are presented in detail. In the third section, the simulation experiment and the real experiment are presented. In the last section, the method of this paper is summarized.

Methods

Pinhole projection model

A camera is modeled by the usual pinhole model, in which the relationship between a 3D point (Xw, Yw, Zw) and its image projection (u, v) is given by

    ρ [u, v, 1]^T = M1 [R | T] [Xw, Yw, Zw, 1]^T = H [Xw, Yw, Zw, 1]^T,    (1)

where (Xw, Yw, Zw) are the 3D coordinates of the point in the world coordinate system, and (u, v) is the corresponding point produced in the camera image plane. ρ is an arbitrary scale factor. fx, fy, u0, v0, θ are the camera intrinsics, collected in the camera intrinsic matrix M1: fx and fy are the scale factors along the image horizontal and vertical axes respectively, (u0, v0) are the coordinates of the principal point in the image plane of the camera, and θ is the parameter describing the skewness of the two image axes. R and T are the rotation and translation matrices, which comprise the camera extrinsic matrix M2 = [R | T] and define the spatial relationship between the camera and the world coordinate system; R is a general 3×3 orthogonal rotation matrix, and T is a 3×1 translation vector. H is the 3×4 perspective projection matrix, the product of M1 and M2.
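The pinhole projection above can be sketched in a few lines of NumPy; all parameter values below are illustrative, not the paper's:

```python
import numpy as np

def project(point_w, M1, R, T):
    """Pinhole projection: rho*[u, v, 1]^T = M1 @ [R | T] @ [Xw, Yw, Zw, 1]^T."""
    M2 = np.hstack([R, T.reshape(3, 1)])   # extrinsic matrix [R | T]
    H = M1 @ M2                            # 3x4 perspective projection matrix
    p = H @ np.append(point_w, 1.0)        # homogeneous image point (rho*u, rho*v, rho)
    return p[:2] / p[2]                    # divide out the scale factor rho

# Illustrative intrinsics (zero skew) and a camera looking down the world z-axis.
M1 = np.array([[600.0,   0.0, 1000.0],
               [  0.0, 600.0, 1000.0],
               [  0.0,   0.0,    1.0]])
R = np.eye(3)
T = np.array([0.0, 0.0, 10.0])
uv = project(np.array([1.0, 2.0, 0.0]), M1, R, T)  # -> [1060., 1120.]
```

Note that the world origin projects exactly to the principal point (u0, v0) for this pose, a quick sanity check on any implementation.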

Calibration in polar coordinate

In view of expressing graphics in polar coordinates, there are two options: (1) the world coordinates and the computer image coordinates are both expressed in polar coordinates; (2) the world coordinates are expressed in polar coordinates and converted to rectangular world coordinates during the calibration process, while the computer image coordinates remain rectangular. In the first case, the system would have to be modified to satisfy Eq (1), with the left side of Eq (1) expressed in polar coordinates. However, the image hardware is a representation of pixel rows and columns, which is arranged according to the rectangular coordinate system. Thus we use the second option. Some calibration image patterns, e.g. the Archimedes curve and the rose curve in Fig 1 and Fig 2, are well suited to polar coordinates. The characteristic points are obtained in polar coordinates; the rectangular coordinates of the points are then obtained by coordinate transformation, and the corresponding image coordinates are matched with the points. This simple transformation does not increase the complexity of the system.
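The polar-to-rectangular conversion used for the characteristic points is elementary; a minimal sketch (the spiral coefficient and the sampling step are illustrative choices, not the paper's values):

```python
import numpy as np

def polar_to_cartesian(r, theta):
    """Convert polar feature coordinates (r, theta) to rectangular (x, y)."""
    return r * np.cos(theta), r * np.sin(theta)

# Feature points on an Archimedes spiral r = a*theta, sampled every 45 degrees
# (spiral coefficient a and the sampling range are illustrative).
a = 1.0
thetas = np.deg2rad(np.arange(45, 361, 45))
xy = np.column_stack(polar_to_cartesian(a * thetas, thetas))
```

The resulting (x, y) pairs can then be matched with the detected image coordinates exactly as with a conventional rectangular pattern.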
Fig 1

Archimedes curve.

Fig 2

Rose Curve.

Computation of H

As seen in Eq (1), H is a 3×4 matrix. The calibration image based on polar coordinates lies in a plane; without loss of generality, let the plane be Zw = 0. Then Eq (1) becomes

    ρ [u, v, 1]^T = M1 [r1  r2  T] [Xw, Yw, 1]^T = H [Xw, Yw, 1]^T,    (2)

where r1 and r2 are the first two columns of R. As can be seen from Eq (2), the third column of the original projection matrix is omitted since Zw = 0. Thus H becomes a 3×3 matrix, which contains 9 unknowns. Since there is a scale factor ρ, only eight parameters are independent. With four (or more) groups of corresponding points (u, v)–(Xw, Yw), the matrix H can be calculated.
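One standard way to compute H from four or more matched points is the direct linear transform; the paper does not specify its solver, so this least-squares construction is an assumption:

```python
import numpy as np

def estimate_homography(world_xy, image_uv):
    """Estimate H (3x3, up to scale) from >= 4 point pairs (Xw, Yw) <-> (u, v).

    Each pair contributes two linear equations in the 9 entries of H; the
    solution is the right singular vector of the smallest singular value.
    """
    A = []
    for (X, Y), (u, v) in zip(world_xy, image_uv):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]        # fix the free scale factor rho
```

With exact correspondences the recovered H matches the true homography up to numerical precision; with noisy points it is the least-squares solution.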

Computation of camera parameters

By Eq (1) and Eq (2), H can be expressed in column form as

    H = [h1  h2  h3] = λ M1 [r1  r2  T],    (3)

where r1 and r2 are the first and second columns of the orthogonal rotation matrix R, T is the translation vector, and λ is a scale factor. By the nature of the orthogonal matrix R, we have

    r1^t r2 = 0,    r1^t r1 = r2^t r2 = 1,    (4)

where t denotes matrix transposition. Substituting Eq (3) into Eq (4) gives

    h1^t M1^{-t} M1^{-1} h2 = 0,
    h1^t M1^{-t} M1^{-1} h1 = h2^t M1^{-t} M1^{-1} h2.    (5)

According to Eq (5), two equations can be obtained from each image. There are five unknowns in the intrinsic matrix M1, so with three images the intrinsic matrix can be determined from Eq (5); with more than three images, a least-squares optimal solution can be worked out. Then the extrinsic matrix can be obtained from the result of M1 and Eq (3), as

    r1 = λ M1^{-1} h1,   r2 = λ M1^{-1} h2,   r3 = r1 × r2,   T = λ M1^{-1} h3,    (6)

where λ = 1/‖M1^{-1} h1‖ = 1/‖M1^{-1} h2‖. This result is a primary solution that ignores lens distortion. Radial lens distortion is modeled as

    ud = u + (u − u0)(k1 r^2 + k2 r^4),
    vd = v + (v − v0)(k1 r^2 + k2 r^4),    (7)

where (u, v) and (ud, vd) are the ideal and the real (distorted) pixel image coordinates respectively, r is the radial distance of the ideal point from the principal point, and k1, k2 are the coefficients of radial distortion. The optimal solution with radial distortion is then calculated by minimizing the objective function

    Σ_{i=1..n} Σ_{j=1..m} ‖ p_ij − p̂(M1, k1, k2, R_i, T_i, P_j) ‖^2,    (8)

where p_ij is the actual pixel image coordinates of the j-th point in the i-th image, p̂ is the projection of the j-th point P_j computed with the current parameter estimates, n denotes the total number of images, and m denotes the number of points in each image.
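The extrinsic recovery described above (λ-scaled columns of M1^{-1}H, with the third rotation column completed by a cross product) can be sketched as follows; sign disambiguation of λ is omitted in this minimal version:

```python
import numpy as np

def extrinsics_from_homography(M1, H):
    """Recover R and T from H = lambda * M1 @ [r1 r2 T].

    lambda = 1/||M1^{-1} h1||, and r3 = r1 x r2 completes the rotation matrix.
    """
    M1_inv = np.linalg.inv(M1)
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    lam = 1.0 / np.linalg.norm(M1_inv @ h1)
    r1 = lam * (M1_inv @ h1)
    r2 = lam * (M1_inv @ h2)
    r3 = np.cross(r1, r2)
    T = lam * (M1_inv @ h3)
    return np.column_stack([r1, r2, r3]), T
```

In practice the recovered matrix is only approximately orthogonal under noise, which is why a subsequent nonlinear refinement over all parameters is performed.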

Multiple-camera calibration

Due to the simple transformation relationships in polar coordinates, the method is convenient for multiple-camera calibration. The calibration board can be designed as the graphics shown in Fig 3. From the relationship between each board image and its center (r, θ), it is easy to determine the absolute positions of the points in polar coordinates.
Fig 3

Multi-camera calibration board based on cameras’ positions.

The great advantage of polar coordinates in multiple-camera calibration is that no common viewing area is required for all the cameras. For example, for camera 1, the origin of the world coordinates is set to polar-coordinate center 1; for camera 2, to polar-coordinate center 2; and for camera 3, to polar-coordinate center 3. For cameras 2 and 3, the world origin is placed near the center of the respective calibration board. In this case, the calibration boards can be arranged according to the locations of the cameras; in other words, the board locations are not fixed in advance but can be determined when, or after, the images are snapped. This also avoids the accumulated error caused by chaining transfer matrices between camera pairs. As shown in Fig 4, the center of polar coordinates 1 is (0, 0) in world coordinates, the center of polar coordinates 2 is (24, 17), and the center of polar coordinates 3 is (48, -36). Thus all cameras share a single world coordinate system. Unlike traditional multiple-camera calibration methods, which require a common viewing area between pairs of cameras [35, 36], the proposed approach is free of this requirement. Data loss caused by an imperfect convergence configuration of the cameras need not be considered, because the calibration distance is shorter than the retrieval distance of the object; thus there is little or no data-missing problem. The flexibility of calibration is enhanced to a large extent.
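As described above, placing each board's polar center at a known world position reduces the multi-camera bookkeeping to a constant offset per board; a minimal sketch using the center coordinates given for Fig 4:

```python
import numpy as np

# World positions of the three polar-coordinate centers (values from Fig 4).
centers = {1: (0.0, 0.0), 2: (24.0, 17.0), 3: (48.0, -36.0)}

def to_common_world(center_id, r, theta):
    """Map a point given in one board's local polar frame into the shared
    world coordinate system by adding that board's center offset."""
    cx, cy = centers[center_id]
    return cx + r * np.cos(theta), cy + r * np.sin(theta)
```

Because every board's points land in the same world frame, no pairwise transfer matrices, and hence no accumulated transfer error, are needed.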
Fig 4

Multi-camera calibration board with known positions of centers.

Experiment Results

Experiment 1 (Simulation)

In the following simulation experiments, the calibration object is an Archimedes curve, as shown in Fig 1. The parameters are r = θ (θ = 0 : 6π), with feature points at θ = 45° × k (k an integer) and θ ≥ 225°; a set of 12 such points serves as feature points. The intrinsic matrix M1 is (cf. the zero-noise row of Table 1):

    M1 = [ 600    0   1000 ]
         [   0  600   1000 ]
         [   0    0      1 ]

The extrinsic parameters of the three images are:

1st image: α1 = 16.9°, β1 = -11.6°, γ1 = -27.3°, T1 = [3, 3, 10]
2nd image: α2 = 26.8°, β2 = -8.6°, γ2 = -38.5°, T2 = [2, 2, 10]
3rd image: α3 = 28.3°, β3 = -19.9°, γ3 = 40.1°, T3 = [1, 2, 10]

According to the above parameters, 3 calibration images are generated with a resolution of 2048 by 2048. Gaussian noise with zero mean is added to the projected images, with the standard deviation varying from 0 to 1 pixel at an interval of 0.1 pixels. The calculation is performed 200 times under each noise level. The averages of the parameters under different noise levels are shown in Table 1 and Table 2, and the standard deviations are shown in Figs 5, 6 and 7. The horizontal ordinates of Figs 5–7 denote the noise level in pixels; the vertical ordinates denote the standard deviation of the parameters. The standard deviation of u0, v0 is shown in Fig 5, that of fx, fy in Fig 6, and that of α, β, γ in Fig 7.
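The noise protocol of the simulation (zero-mean Gaussian pixel noise at each level, 200 repetitions) can be sketched as follows; the ideal projections, the RNG seed, and the point count are placeholders, not the paper's data:

```python
import numpy as np

def noisy_trials(uv_ideal, sigma, trials=200, seed=0):
    """Return `trials` copies of the ideal projections with zero-mean
    Gaussian pixel noise of standard deviation `sigma` added."""
    rng = np.random.default_rng(seed)
    return [uv_ideal + rng.normal(0.0, sigma, uv_ideal.shape) for _ in range(trials)]

# One level of the sweep sigma = 0.0, 0.1, ..., 1.0 pixels:
uv_ideal = np.zeros((12, 2))            # placeholder for the 12 feature points
samples = noisy_trials(uv_ideal, sigma=0.3)
```

Calibrating on each of the 200 perturbed point sets and taking the mean and standard deviation of the recovered parameters reproduces the protocol behind Tables 1 and 2.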
Table 1

Average of intrinsic parameters under different noise levels.

NL      fx         fy         u0          v0
0       600.0000   600.0000   1000.0000   1000.0000
0.1     599.9834   599.9775    999.9802   1000.0040
0.2     599.8631   599.8718    999.9760    999.9273
0.3     600.0144   600.0189   1000.0778   1000.0089
0.4     599.8394   599.8087    999.8691    999.9723
0.5     599.7578   599.7790   1000.0098    999.8752
0.6     599.6746   599.6829    999.9355    999.8019
0.7     600.3498   600.1680    999.8610   1000.4103
0.8     599.8403   599.9039   1000.0818    999.8760
0.9     600.3305   600.3718   1000.1911   1000.0553
1.0     599.4579   599.5375   1000.0651    999.7686
Table 2

Average of extrinsic parameters under different noise levels.

NL      α         β          γ          T1
0       16.9034   -11.5991   -27.2548   [3.0000 3.0000 10.000]
0.1     16.9031   -11.5981   -27.2550   [3.0004 2.9999 9.9996]
0.2     16.8989   -11.5978   -27.2540   [3.0003 3.0012 9.9980]
0.3     16.9043   -11.6056   -27.2539   [2.9986 2.9999 10.001]
0.4     16.8966   -11.5940   -27.2558   [3.0021 3.0005 9.9974]
0.5     16.9017   -11.6034   -27.2499   [2.9996 3.0020 9.9966]
0.6     16.8888   -11.5962   -27.2526   [3.0008 3.0031 9.9941]
0.7     16.9100   -11.6037   -27.2639   [3.0022 2.9935 10.003]
0.8     16.9039   -11.5907   -27.2534   [2.9988 3.0019 9.9979]
0.9     16.9109   -11.6122   -27.2544   [2.9964 2.9991 10.007]
1.0     16.9005   -11.5984   -27.2509   [2.9984 3.0035 9.9926]
Fig 5

Standard deviations for parameters u0, v0.

Fig 6

Standard deviations for parameters fx, fy.

Fig 7

Standard deviations for parameters α,β,γ.

It can be clearly seen that the deviation of the intrinsic parameters is of the order of magnitude of 0.01 to 0.1, while the deviation of the extrinsic parameters is of the order of magnitude of 1. Since the values of the intrinsic parameters are large, the relative deviation is small. For example, fx = 600 and the maximum deviation of fx is 0.6, so the relative deviation is 0.1%. In the case of the extrinsic parameters, the maximum deviation of α is 0.225, giving a relative deviation of 1.3%. In conclusion, the impact of noise on the results is relatively small, and the algorithm is highly stable. To study the influence of the number of pictures, different numbers of pictures are tried. The results with 3 pictures and 10 pictures are shown in Figs 8–10. The horizontal ordinates of Figs 8–10 denote the noise level in pixels; the vertical ordinates denote the standard deviation of the parameters. The standard deviations of the intrinsic parameters fx and u0 and of the extrinsic parameter α are shown in Fig 8, Fig 9 and Fig 10, respectively.
Fig 8

The standard deviations of fx with 3 and 10 pictures.

Fig 9

The standard deviations of u0 with 3 and 10 pictures.

Fig 10

The standard deviations of α with 3 and 10 pictures.

As shown in Figs 8–10, the stability with 10 pictures is better than with 3 pictures. But with 10 pictures the calibration process is significantly more complex, while the improvement is not notable. In practice, 3–5 pictures are suitable for calibration applications.

Experiment 2 (Real data experiments)

The real data experiments were carried out in our laboratory. To demonstrate the efficiency of the proposed method, camera calibration is conducted with both the proposed calibration image and an accurate calibration plate. Zhang's method, the classical calibration method, is also performed. A UNIQ UP-1800 camera with fixed focal length is placed in front of the calibration boards. The image resolution is 1380×1030 pixels. The training data for Zhang's calibration is obtained from the accurate calibration board, while the training data for the proposed method is obtained from the polar-coordinates calibration board. The images of the calibration boards are shown in Fig 11, where the accurate board is at the top and the polar-coordinates board is at the bottom.
Fig 11

Real Pictures—the accurate calibration board is at the top, the polar coordinates calibration board is at the bottom.

Both algorithms obtain accurate calibration results; the calibrated intrinsic parameters from the proposed method and from the accurate calibration plate are provided in the supporting data files. From a practical point of view, the proposed method is simpler and more convenient, and the board is highly applicable. The relationship between polar-coordinate images is simple, so polar-coordinate calibration images can easily be used for multiple-camera calibration. The calibration can be performed with a pre-made calibration board, as shown in Fig 12, whose internal relationships are known before snapping the images. Alternatively, the relationships can be calculated from the snapped images, as shown in Fig 3. In this case, the calibration images can be positioned according to the locations of the cameras.
Fig 12

Multiple-camera calibration board.

In the multiple-camera calibration experiment, the optimized parameters of the left and right cameras were obtained. The diagrams of the retrieval results are shown in Figs 13 and 14.
Fig 13

Retrieval results based on the proposed method.

Fig 14

Retrieval results based on Zhang's method.

As shown in Figs 13 and 14, 3D point clouds are obtained successfully by both methods, with little difference between the results. The retrieval result of the plate is consistent with that obtained by Zhang's method, so the calibration result is close to Zhang's. The distances between adjacent 3D points are calculated, and the distance results are gathered along the horizontal and vertical directions. By Zhang's method, the average distance in the horizontal and vertical directions is 2.553 and the standard deviation is 0.01619. By the proposed method, the corresponding average distance and deviation are 2.648 and 0.016724. This shows that the proposed method can be used conveniently for multiple-camera calibration.
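The adjacent-distance statistics quoted above can be computed as in the sketch below (illustrative collinear points; the separate horizontal/vertical grouping of the real experiment is omitted):

```python
import numpy as np

def adjacent_distance_stats(points):
    """Mean and standard deviation of Euclidean distances between
    consecutive 3D points along one grid direction."""
    d = np.linalg.norm(np.diff(points, axis=0), axis=1)
    return float(d.mean()), float(d.std())

# Illustrative: five collinear points with uniform spacing 2.553.
pts = np.array([[2.553 * i, 0.0, 0.0] for i in range(5)])
mean_d, std_d = adjacent_distance_stats(pts)   # ~ (2.553, 0.0)
```

A small standard deviation of these distances on the retrieved point cloud indicates a consistent, low-distortion reconstruction.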

Conclusion

A novel study on camera calibration in polar coordinates was carried out. The proposed method has several advantages: some calibration image patterns are flexible in polar coordinates; fewer calibration points are used; to calibrate multiple cameras, the calibration images can be placed according to the locations of the cameras; and there is no accumulated error caused by transfer matrices. Zhang's method, as well as the proposed method, was chosen for experimentation on both simulated and real data, and the accuracy was evaluated. The efficiency of the proposed method is demonstrated in the experiments.

Data of intrinsic parameters. (XLSX)

Data of extrinsic parameters. (XLSX)

Data of parameters with 3 pictures. (XLSX)

Data of parameters with 10 pictures. (XLSX)