Jinghai Han, Bo Liu, Yongle Jia, Shoufeng Jin, Maciej Sulowicz, Adam Glowacz, Grzegorz Królczyk, Zhixiong Li.
Abstract
This work proposes a Kinect V2-based visual method to eliminate the yarn bobbin robot's dependence on human operators during the grabbing operation. In this new method, a Kinect V2 camera produces three-dimensional (3D) yarn-bobbin point cloud data for the robot in a work scenario. After the noise points are removed by a proper filtering process, the M-estimator sample consensus (MSAC) algorithm is employed to find the fitting plane of the 3D cloud data; principal component analysis (PCA) is then adopted to roughly register the template point cloud with the yarn-bobbin point cloud and define the initial position of the yarn bobbin. Finally, the iterative closest point (ICP) algorithm achieves precise registration of the 3D cloud data to determine the precise pose of the yarn bobbin. To evaluate the performance of the proposed method, an experimental platform is developed to validate the grabbing operation of the yarn bobbin robot in different scenarios. The analysis results show that the average working time of the robot system is within 10 s and the grasping success rate is above 80%, which meets industrial production requirements.
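The four-stage pipeline described in the abstract (filtering, MSAC plane removal, PCA coarse registration, ICP fine registration) can be sketched compactly with a point-cloud library. Below is a minimal sketch assuming Open3D; note that Open3D's segment_plane uses plain RANSAC rather than the MSAC variant used in the paper, and every function name, threshold, and file path is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of the bobbin-localization pipeline, assuming Open3D.
# RANSAC stands in for the paper's MSAC; thresholds and paths are illustrative.
import numpy as np
import open3d as o3d

def locate_bobbin(scene_path: str, template_path: str) -> np.ndarray:
    """Estimate the 4x4 pose transform of a yarn bobbin in a scene cloud."""
    scene = o3d.io.read_point_cloud(scene_path)        # Kinect V2 scene cloud
    template = o3d.io.read_point_cloud(template_path)  # bobbin template cloud

    # Step 1: filtering -- drop depth-sensor noise points.
    scene, _ = scene.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # Step 2: plane fitting (RANSAC here, MSAC in the paper) -- fit the box
    # bottom plane and keep only the points that lie off the plane.
    _, plane_idx = scene.segment_plane(distance_threshold=0.005,
                                       ransac_n=3, num_iterations=1000)
    bobbin = scene.select_by_index(plane_idx, invert=True)

    # Step 3: coarse registration via PCA -- align the centroid and principal
    # axes of the template with those of the segmented bobbin cluster.
    # (PCA axes carry a sign/order ambiguity; the rough registration only
    # needs to land close enough for ICP to converge.)
    def pca_frame(pc):
        pts = np.asarray(pc.points)
        centroid = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - centroid)
        return centroid, vt.T                    # columns = principal axes

    c_t, ax_t = pca_frame(template)
    c_b, ax_b = pca_frame(bobbin)
    R = ax_b @ ax_t.T                            # template axes -> bobbin axes
    if np.linalg.det(R) < 0:                     # avoid an improper reflection
        ax_b[:, 2] = -ax_b[:, 2]
        R = ax_b @ ax_t.T
    init = np.eye(4)
    init[:3, :3] = R
    init[:3, 3] = c_b - R @ c_t

    # Step 4: fine registration via point-to-point ICP from the coarse pose.
    icp = o3d.pipelines.registration.registration_icp(
        template, bobbin, max_correspondence_distance=0.01, init=init,
        estimation_method=o3d.pipelines.registration
                             .TransformationEstimationPointToPoint())
    return icp.transformation                    # refined 4x4 bobbin pose
```

The returned 4x4 transform maps the template cloud onto the bobbin in the scene, so a grasp point defined once on the template can be carried into the scene frame for the robot.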
Keywords: industrial robots; machine vision; robot control; yarn bobbin identification
Year: 2022 PMID: 35744500 PMCID: PMC9227217 DOI: 10.3390/mi13060886
Source DB: PubMed Journal: Micromachines (Basel) ISSN: 2072-666X Impact factor: 3.523
Figure 1. Manual yarn feeding process.
Figure 2. The developed robot-based winding system.
Figure 3. Kinect V2 acquired images: (a) RGB image; (b) IR image; (c) depth image.
Figure 4. Mapping results: (a) point cloud of the yarn bobbin; (b) denoised point cloud.
Figure 5. Extraction of yarn bobbin images via MSAC: (a) fitting the box bottom point cloud; (b) removing the box bottom point cloud; (c) removing the box side point cloud.
Figure 6. Key point extraction: (a) extraction result for the yarn-bobbin template point cloud; (b) extraction result for the point cloud of the yarn bobbin to be grasped.
Figure 7. Point cloud alignment results: (a) initial position; (b) coarse alignment; (c) fine alignment.
Kinect V2 depth camera intrinsic calibration parameters.
| Name of Parameter | Data |
|---|---|
| Focal length | |
| Point coordinates | |
| Radial distortion | |
| Error | |
Figure 8. Schematic diagram of hand-eye calibration.
Yarn bobbin specifications.
| Type | Diameter (mm) | Height (mm) | Weight (kg) |
|---|---|---|---|
| Tower-shaped | 41 (small end), 52 (big end) | 112 | 0.11 |
| Cylindrical | 47 | 125 | 0.12 |
Figure 9. Experimental tests in different scenarios: (a) tower yarn bobbin, no-contact scenario; (b) tower yarn bobbin, stacking scenario; (c) cylindrical yarn bobbin, no-contact scenario; (d) cylindrical yarn bobbin, stacking scenario.
Figure 10. Diagram of part of the yarn-bobbin gripping process: (a) grasping operation; (b) transferring operation; (c) loading operation.
Experimental results.
| Experimental Scenario | Bobbin Type | Average Time Taken | Number of Experiments | Number of Successes | Success Rate |
|---|---|---|---|---|---|
| No contact | Tower-shaped | 8.35 s | 150 | 138 | 92.0% |
| No contact | Cylindrical | 8.07 s | 150 | 139 | 92.7% |
| Unordered stacking | Tower-shaped | 9.68 s | 150 | 127 | 84.7% |
| Unordered stacking | Cylindrical | 9.82 s | 150 | 129 | 86.0% |