Wei Li, Mingli Dong, Naiguang Lu, Xiaoping Lou, Peng Sun.
Abstract
An extended robot–world and hand–eye calibration method is proposed in this paper to estimate the transformation relationship between the camera and the robot device. The approach is suited to mobile or medical robotics applications, where precise, expensive, or unsterile calibration objects, or enough movement space, cannot be made available at the work site. First, a mathematical model is established that formulates the robot-gripper-to-camera and robot-base-to-world rigid transformations using the Kronecker product. A sparse bundle adjustment is then introduced to jointly optimize the robot–world and hand–eye calibration together with the reconstruction results. Finally, a validation experiment on two kinds of real data sets demonstrates the effectiveness and accuracy of the proposed approach. With a Denso robot moving within a 1.3 m × 1.3 m × 1.2 m range, the relative translation error of the rigid transformation is less than 8/10,000, and the mean distance measurement error after three-dimensional reconstruction is 0.13 mm.
Keywords: Kronecker product; calibration object; hand–eye calibration; robot–world calibration; sparse bundle adjustment
Year: 2018 PMID: 30445680 PMCID: PMC6263626 DOI: 10.3390/s18113949
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
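The Kronecker-product formulation mentioned in the abstract can be illustrated with a small numerical sketch. Using the identity vec(AXB) = (Bᵀ ⊗ A) vec(X), the rotation part of the robot–world/hand–eye equation AX = ZB becomes a homogeneous linear system whose null space contains the stacked rotations vec(R_X) and vec(R_Z). The function name and the SVD-based recovery below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def solve_rotations(RA_list, RB_list):
    """Rotation part of AX = ZB via vec(AXB) = (B^T kron A) vec(X):
    each pose pair gives (I kron RA) vec(RX) - (RB^T kron I) vec(RZ) = 0."""
    rows = [np.hstack([np.kron(np.eye(3), RA), -np.kron(RB.T, np.eye(3))])
            for RA, RB in zip(RA_list, RB_list)]
    M = np.vstack(rows)                       # (9n x 18) coefficient matrix
    v = np.linalg.svd(M)[2][-1]               # null vector: smallest singular value
    RX = v[:9].reshape(3, 3, order='F')       # undo column-major vec()
    RZ = v[9:].reshape(3, 3, order='F')
    alpha = 1.0 / np.cbrt(np.linalg.det(RX))  # fix unknown scale/sign of null vector
    RX, RZ = alpha * RX, alpha * RZ
    def project_SO3(R):                       # nearest rotation in Frobenius norm
        U, _, Vt = np.linalg.svd(R)
        return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return project_SO3(RX), project_SO3(RZ)
```

With the rotations fixed, the translations follow from the linear relation R_A·t_X + t_A = R_Z·t_B + t_Z; the paper then refines all parameters jointly by sparse bundle adjustment.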
Figure 1. The robotic system for robot–world and hand–eye calibration.
Figure 2. Schematic diagram of the synthetic experiment using the PUMA560 model.
Intrinsic parameters of the virtual camera for the synthetic experiment.
| Intrinsic Parameter | Image Resolution | Focal Length | Principal Point Offsets | Affine Distortion | Radial Distortion and Decentering Distortion |
|---|---|---|---|---|---|
| Value | 4288 × 2848 pixels | 20 mm | (0.1, 0.1) mm | 0 | 0 |
Denavit–Hartenberg parameters of the PUMA560 robot for the synthetic experiment.
| Joint | q | d/(m) | a/(m) | α/(°) | Offset |
|---|---|---|---|---|---|
| 1 | q1 | 0 | 0 | 0 | σ1 |
| 2 | q2 | 0.2435 | 0 | −90 | σ2 |
| 3 | q3 | −0.0934 | 0.4318 | 0 | σ3 |
| 4 | q4 | 0.4331 | −0.0203 | 90 | σ4 |
| 5 | q5 | 0 | 0 | −90 | σ5 |
| 6 | q6 | 0 | 0 | 90 | σ6 |
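Each row of the table defines one link transform, from which the gripper pose is composed. A minimal sketch of this composition, assuming the standard (rather than modified) Denavit–Hartenberg convention; the function names are illustrative, not from the paper:

```python
import numpy as np

def dh_link(theta, d, a, alpha):
    """Standard DH link transform: Rz(theta) @ Tz(d) @ Tx(a) @ Rx(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(q, dh_rows):
    """Base-to-gripper pose from joint angles q (rad) and per-joint
    (d, a, alpha, offset) rows, as listed in the table above."""
    T = np.eye(4)
    for qi, (d, a, alpha, offset) in zip(q, dh_rows):
        T = T @ dh_link(qi + offset, d, a, alpha)
    return T
```

For example, the table's joint-2 row corresponds to (d, a, α) = (0.2435 m, 0, −90°), with the offset σ₂ added to the joint angle q₂ before the link transform is evaluated.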
Figure 3. Errors of the estimated rotation and translation against different noise levels η: (a,b) rotation and translation errors of the hand–eye transformation X; (c,d) rotation and translation errors of the robot–world transformation Z.
Figure 4. Sample images of calibration scenarios taken by the camera mounted on the robot gripper: (a) chessboard pattern scene; (b) books scene.
Figure 5. General object data set experiment: (a) Denso robot arm with Nikon camera; (b) 3D model output after bundle adjustment.
Error comparison for the general object data set, without a chessboard pattern as benchmark (Unit: mm).
| Approach | Dornaika | Shah | KPherwc | BAherwc |
|---|---|---|---|---|
| Hand–eye transformation error | 3.945 | 2.337 | 3.409 | 1.145 |
| Robot–world transformation error | 6.001 | 3.751 | 4.544 | 1.808 |
Figure 6. Photogrammetric scene data set experiment: (a) photogrammetric control field; (b) distribution of camera poses and target points.
Error comparison in rotation and translation for the photogrammetric scene data set.
| Approach | Rotation Error | Translation Error |
|---|---|---|
| Dornaika | 0.0023 | 0.0033 |
| Shah | 0.0015 | 0.0017 |
| BAherwc | 0.00047 | 0.00076 |
The average distance measurement errors of the scale bars (Unit: mm).
| Scale Bar | Nominal Value | Measurement Value | Distance Measurement Error |
|---|---|---|---|
| S1 | 1096.037 | 1095.906 | 0.131 |
| S2 | 1096.057 | 1095.923 | 0.134 |
Figure 7. Distance estimation error at each iteration of the bundle adjustment.