Tao Liu, Yin Guo, Shourui Yang, Shibin Yin, Jigui Zhu.
Abstract
Industrial robots are expected to undertake increasingly advanced tasks in modern manufacturing, such as intelligent grasping, in which a robot must recognize the position and orientation of a part before grasping it. In this paper, a monocular 6-degree-of-freedom (DOF) pose estimation method is proposed that enables robots to grasp large parts presented in arbitrary poses. A camera mounted on the robot end-flange measures several featured points on the part before the robot moves to grasp it. To estimate the part pose, a nonlinear optimization model based on the camera object-space collinearity error in different poses is established, and the initial iteration value is estimated with a differential transformation. The measuring poses of the camera are optimized based on an uncertainty analysis. The principle of the robotic intelligent grasping system is also developed, with which the robot can adjust its pose to grasp the part. In experimental tests, part poses estimated with the proposed method were compared with those produced by a laser tracker; the results show RMS angle and position errors of about 0.0228° and 0.4603 mm. Robotic intelligent grasping tests were also performed successfully.
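The paper's estimator minimizes the object-space collinearity error of monocular observations through nonlinear optimization; that full pipeline is not reproduced here. As a simplified, illustrative stand-in (not the authors' algorithm), the sketch below assumes the featured points have already been reconstructed in the camera frame and recovers the 6-DOF part pose by SVD-based orthogonal Procrustes (Kabsch) alignment:

```python
import numpy as np

def estimate_pose(P_part, P_meas):
    """Recover (R, t) such that P_meas ≈ P_part @ R.T + t, i.e. the rigid
    transform from the part frame to the camera frame, via the Kabsch method."""
    cp = P_part.mean(axis=0)
    cm = P_meas.mean(axis=0)
    H = (P_part - cp).T @ (P_meas - cm)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cm - R @ cp
    return R, t
```

With noise-free correspondences this recovers the exact pose; with noisy monocular measurements an iterative refinement, such as the collinearity-error optimization described in the abstract, is needed.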
Keywords: industrial robot; intelligent grasping; monocular; pose estimation
Year: 2017 PMID: 28216555 PMCID: PMC5336033 DOI: 10.3390/s17020334
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Schematic of the robot intelligent grasping system.
Figure 2. Calibration of the camera's i-th pose frame.
Figure 3. Schematic of the pinhole camera model.
Figure 4. Image-space error and object-space error.
Figure 5. Simulation of the uncertainty.
Figure 6. Experimental setup for the robot intelligent grasping system.
Coordinates of the featured points on the roof (mm).

| Point | X | Y | Z |
|---|---|---|---|
| 1 | 0 | 0 | 0 |
| 2 | 1218.078 | 0 | 0 |
| 3 | 1218.028 | 2058.764 | 0 |
| 4 | −1.017 | 2059.039 | −2.677 |
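As a sanity check on the tabulated featured points (units assumed to be mm), the four points should form an approximately rectangular quadrilateral on the roof; the consecutive side lengths can be computed directly:

```python
import numpy as np

# Featured-point coordinates from the table above (assumed mm)
P = np.array([
    [0.0,       0.0,       0.0],
    [1218.078,  0.0,       0.0],
    [1218.028,  2058.764,  0.0],
    [-1.017,    2059.039, -2.677],
])

# Consecutive side lengths of the quadrilateral (P1->P2, P2->P3, P3->P4, P4->P1)
sides = [float(np.linalg.norm(P[(i + 1) % 4] - P[i])) for i in range(4)]
```

The short sides come out near 1218–1219 mm and the long sides near 2059 mm, consistent with a large, roughly 1.2 m × 2.1 m part.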
Intrinsic parameters of the camera.

| f_x (px) | f_y (px) | u_0 (px) | v_0 (px) | k_1 | k_2 | p_1 | p_2 |
|---|---|---|---|---|---|---|---|
| 2421.402 | 2419.707 | 1237.997 | 971.382 | −0.02568 | 0.07839 | 0.00103 | 0.0007 |
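The intrinsic parameters above can be used in the standard pinhole model of Figure 3. The column interpretation (focal lengths f_x, f_y; principal point u_0, v_0; radial coefficients k_1, k_2; tangential coefficients p_1, p_2 of the Brown–Conrady model) is an assumption based on the magnitudes of the values, not stated in the record. A minimal projection sketch under that assumption:

```python
# Intrinsics from the table above; column meaning (fx, fy, u0, v0,
# k1, k2, p1, p2 -- Brown-Conrady distortion) is an assumption.
fx, fy, u0, v0 = 2421.402, 2419.707, 1237.997, 971.382
k1, k2, p1, p2 = -0.02568, 0.07839, 0.00103, 0.0007

def project(X, Y, Z):
    """Project a camera-frame 3D point to pixel coordinates (u, v)."""
    x, y = X / Z, Y / Z                       # normalized image coordinates
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2     # radial distortion factor
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return fx * xd + u0, fy * yd + v0
```

A quick check: a point on the optical axis, e.g. `project(0, 0, 1000)`, maps to the principal point (1237.997, 971.382), since all distortion terms vanish at x = y = 0.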
Figure 7. Subpixel extraction of the featured hole center: (a) Original image; (b) Single-pixel edge; (c) Gray gradient; (d) Subpixel edge points.
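Figure 7 illustrates gray-gradient-based subpixel refinement of the hole edge. One common way to realize this (a sketch of the general technique, not necessarily the authors' exact formula) is to fit a parabola through three neighboring gradient-magnitude samples along the edge normal and take the parabola's peak as the subpixel edge location:

```python
def subpixel_peak(g_prev, g_mid, g_next):
    """Fit a parabola through three gradient-magnitude samples spaced one
    pixel apart and return the subpixel offset of the peak from the middle
    sample (offset in [-0.5, 0.5] when g_mid is the discrete maximum)."""
    denom = g_prev - 2.0 * g_mid + g_next
    if denom == 0.0:
        return 0.0          # flat neighborhood: no refinement possible
    return 0.5 * (g_prev - g_next) / denom
```

For a symmetric gradient profile the offset is 0 (the edge lies exactly on the middle pixel); an asymmetric profile shifts the estimate toward the stronger neighbor.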
Measured results compared with the laser tracker (LT = laser tracker; Mono = proposed monocular method; Δ = Mono − LT).

| Pose | Method | α (°) | β (°) | γ (°) | Angle error (°) | x (mm) | y (mm) | z (mm) | Position error (mm) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | LT | −2.31544 | 3.407908 | −3.50354 | | −31.260 | −12.268 | 33.642 | |
| 1 | Mono | −2.30752 | 3.405357 | −3.47866 | | −31.246 | −12.335 | 34.239 | |
| 1 | Δ | 0.00792 | −0.00255 | 0.02488 | 0.026234 | 0.014 | −0.067 | 0.597 | 0.600911 |
| 2 | LT | −2.27619 | −3.79836 | −3.30576 | | 28.382 | −26.748 | 45.021 | |
| 2 | Mono | −2.28559 | −3.7911 | −3.29375 | | 28.285 | −26.815 | 45.161 | |
| 2 | Δ | −0.0094 | 0.007264 | 0.012007 | 0.016893 | −0.097 | −0.067 | 0.14 | 0.183025 |
| 3 | LT | 4.28209 | 2.254749 | −2.33946 | | −39.525 | 52.237 | −43.422 | |
| 3 | Mono | 4.296024 | 2.238525 | −2.35284 | | −39.494 | 52.295 | −44.117 | |
| 3 | Δ | 0.013934 | −0.01622 | −0.01338 | 0.025225 | 0.031 | 0.058 | −0.695 | 0.698105 |
| 4 | LT | −4.6618 | 2.774759 | −4.95201 | | 38.082 | −36.264 | −33.598 | |
| 4 | Mono | −4.66049 | 2.761622 | −4.96092 | | 38.19 | −36.41 | −33.677 | |
| 4 | Δ | 0.001316 | −0.01314 | −0.00892 | 0.015932 | 0.108 | −0.146 | −0.079 | 0.198043 |
| 5 | LT | 4.084893 | 4.630798 | −2.95192 | | −50.432 | −46.475 | 34.246 | |
| 5 | Mono | 4.108526 | 4.643033 | −2.95696 | | −50.472 | −46.832 | 34.342 | |
| 5 | Δ | 0.023633 | 0.012235 | −0.00504 | 0.027085 | −0.04 | −0.357 | 0.096 | 0.37184 |
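The RMS errors quoted in the abstract (0.0228° and 0.4603 mm) can be reproduced directly from the per-pose combined errors in the Δ rows above:

```python
import numpy as np

# Combined angle (deg) and position (mm) errors from the five Δ rows
angle_err = [0.026234, 0.016893, 0.025225, 0.015932, 0.027085]
pos_err   = [0.600911, 0.183025, 0.698105, 0.198043, 0.371840]

rms_angle = float(np.sqrt(np.mean(np.square(angle_err))))  # -> ~0.0228 deg
rms_pos   = float(np.sqrt(np.mean(np.square(pos_err))))    # -> ~0.4603 mm
```

This agreement also confirms that the fourth and last columns of each Δ row are the Euclidean norms of the three angle differences and the three position differences, respectively.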
Figure 8. Status of the gripper: (a) Initial part with initial path; (b) Moved part with initial path; (c) Moved part with corrected path.