Marius Sumanas, Algirdas Petronis, Vytautas Bucinskas, Andrius Dzedzickis, Darius Virzonis, Inga Morkvenaite-Vilkonciene.
Abstract
Modern industrial robotics covers a broad part of the manufacturing spectrum as well as many everyday applications, so the performance of these devices has become increasingly important. Positioning accuracy and repeatability, together with operating speed, are essential in any industrial robotics application. Robot positioning errors are complex because of the extensive combination of their sources and cannot be compensated for using conventional methods; some of them can be compensated for only using machine learning (ML) procedures. Reinforcement learning increases the robot's positioning accuracy and expands its implementation capabilities. The presented methodology offers a simple and focused approach to in situ, real-time position adjustment of an industrial robot during production setup or readjustment. Its scientific value lies in an ML procedure that requires neither large external datasets nor extensive computing facilities. This paper presents a deep Q-learning algorithm applied to improve the positioning accuracy of an articulated KUKA youBot robot during operation. A significant improvement of the positioning accuracy was achieved after approximately 260 iterations in the online mode, following an initial simulation of the ML procedure.
Keywords: deep Q-learning; machine learning; positioning errors; reinforcement learning; robot operating system (ROS); robotics
Year: 2022 PMID: 35632319 PMCID: PMC9147322 DOI: 10.3390/s22103911
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
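The deep Q-learning correction loop described in the abstract can be illustrated with a minimal sketch. The paper trains a deep Q-network online on a KUKA youBot; the toy below substitutes a lookup table for the network and a one-dimensional axis with clamped error states for the robot, so every state, action, and hyperparameter here is an illustrative assumption rather than the authors' setup.

```python
import random

# Minimal tabular sketch of the Q-learning positioning-correction idea from
# the abstract. A lookup table stands in for the paper's deep Q-network, and
# the "robot" is a toy 1-D axis; all values below are illustrative assumptions.

ERRORS = [-2, -1, 0, 1, 2]          # discretized positioning error (steps)
ACTIONS = [-1, 0, 1]                # correction added to the commanded position
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

q = {(e, a): 0.0 for e in ERRORS for a in ACTIONS}

def step(error, action):
    """Apply a correction to the toy axis and return (new_error, reward)."""
    new_error = max(-2, min(2, error + action))
    return new_error, -abs(new_error)   # smaller error => higher reward

random.seed(0)
error = 2                                # start with the largest positive error
for _ in range(260):                     # ~260 iterations, as in the abstract
    if random.random() < EPS:            # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(error, a)])
    new_error, reward = step(error, action)
    best_next = max(q[(new_error, a)] for a in ACTIONS)
    q[(error, action)] += ALPHA * (reward + GAMMA * best_next - q[(error, action)])
    error = new_error

# Greedy policy after training: corrections that push the error toward zero.
policy = {e: max(ACTIONS, key=lambda a: q[(e, a)]) for e in ERRORS}
print(policy)
```

In the paper's setting the tabular update is replaced by a gradient step on a neural network, and the reward comes from a measured positioning deviation rather than a simulated one.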
Summary of research focused on trajectory generation and its accuracy.
| Aims | Methods | Hardware | Achievements | Ref. |
|---|---|---|---|---|
| To improve the accuracy of the welding robot | Calibration based on deep reinforcement learning | Yaskawa MA1-440 with DX200 controller, laser vision sensor | Achieved a control error of less than 0.8 mm | [ |
| To develop an open-access dataset to verify robot calibration algorithms | Levenberg–Marquardt (LM) algorithm and extended Kalman filter (EKF) | ABB IRB120 robot | The maximum positioning error was decreased by 68.07% | [ |
| To reduce the absolute position error of robots | Machine vision and neural network | Hyundai Hi5 (HA006 model) 6-axis industrial robot, pneumatic gripper, laser measurement system, camera | Positional error reduced by 50.3%, to an absolute value of 0.029 mm | [ |
| To improve the speed and accuracy of spatial pose positioning of a delta robot | Basic and optimized BP neural networks | MATLAB simulation | The delta robot system can achieve 97.75% accurate positioning within a ±0.05 mm tolerance | [ |
| To develop a positioning error prediction model based on an extreme learning machine algorithm | An extreme learning machine algorithm | KUKA KR210 R2700, a laser tracker, and an accompanying spherically mounted reflector (SMR) | The accuracy of the robot was improved by 75.89% and 80.93% | [ |
| To develop a system for automatic segmentation of the spine, pedicle identification, and screw path suggestion for use with an intraoperative 3D surgical navigation system. | Automated model-based approach. Accuracy was evaluated by comparing automatic segmentation to the manually outlined reference surface on Cone-beam images. | – | Success rate achieved of pedicle screw planning accuracy equal to 95.4% | [ |
| To integrate an accuracy enhancement method for a Cable-Driven Continuum Robot (CDCR) | Kinematic model and data-driven Gaussian Process Regression | Prototype of the | Position and orientation | [ |
| To develop a method for complex robot inverse kinematics calibration | Inverse kinematic model based on multilayer perceptron | “Sina” surgical robot, infrared tracker | After calibration, positioning and orientation accuracy improved by 53% and 43%, respectively | [ |
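Several rows above rely on numerical kinematic calibration, notably the Levenberg–Marquardt algorithm. As a minimal, hedged sketch of that idea, the snippet below fits a single unknown link length of a planar two-link arm to simulated end-effector measurements with a damped Gauss–Newton (LM-style) update; the arm geometry, noise-free data, and all parameter values are assumptions for illustration, not the cited authors' setups.

```python
import math

# LM-style calibration sketch: recover an unknown first-link length of a
# planar 2-link arm from end-effector x measurements at known joint angles.
# Geometry and data are simulated; this is an illustration, not a cited setup.

L2_TRUE = 0.30                                  # known second link length (m)
L1_TRUE = 0.52                                  # "unknown" first link to recover

def fk_x(l1, q1, q2):
    """Forward kinematics: x coordinate of the planar 2-link arm tip."""
    return l1 * math.cos(q1) + L2_TRUE * math.cos(q1 + q2)

# Simulated measurement set: joint angles and observed tip x positions.
poses = [(0.1 * k, 0.05 * k) for k in range(1, 9)]
meas = [fk_x(L1_TRUE, q1, q2) for q1, q2 in poses]

l1 = 0.40                                       # deliberately wrong initial guess
lam = 1e-3                                      # LM damping factor
for _ in range(20):
    residuals = [fk_x(l1, q1, q2) - m for (q1, q2), m in zip(poses, meas)]
    jac = [math.cos(q1) for q1, _ in poses]     # d fk_x / d l1
    jtj = sum(j * j for j in jac)
    jtr = sum(j * r for j, r in zip(jac, residuals))
    step = jtr / (jtj + lam)                    # damped Gauss-Newton step
    l1 -= step
    if abs(step) < 1e-12:
        break

print(round(l1, 6))  # converges to the true link length, 0.52
```

A full calibration would estimate all Denavit-Hartenberg parameters from 3-D laser-tracker measurements, which turns the scalar update above into a matrix normal-equation solve with an adaptive damping factor.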
Summary of research on robot grasping technology enhancement using machine learning.
| Aims | Methods | Hardware | Achievements | Ref. |
|---|---|---|---|---|
| To develop an image positioning and identification system for a coal and gangue sorting robot | Least squares support vector machines | Industrial computer, V-GE502GC-T-CL | Identification accuracy of 88.3% for coal and 90.0% for gangue samples | [ |
| To build a robotic system integrating grasping, vision, and motion planning to pick items from a shelf into specific order boxes | Combination of machine learning and a conventional feature-based strategy | Two lightweight UR5 robot manipulators, 3 stereo cameras, and 2 custom-built grippers | The system picked 10 target items correctly in around 8 min | [ |
| To incorporate force/torque information into reinforcement learning | Iterative Linear-Quadratic-Gaussian algorithm | Rethink Robotics Sawyer robot | Using force/torque data increased the assembly accuracy of precision components | [ |
| To develop a method combining a quality inspection system and process control | A convolutional neural network and computer vision | Kuka KR120 robotic arm, Keyence LJ-7080 laser profilometers | A system able to detect defects and provide their quantitative characteristics | [ |
| To develop a method for complex-shaped object position estimation after grasping | Machine learning-based classification method | A robotic arm equipped with a parallel gripper | The approach handles possible uncertainties during the execution of a grasping task | [ |
| To develop a method estimating the geometric primitives of multiple circles in 3D space for robot-assisted industrial automation | Multiple circular contour extraction, Maximum Likelihood Estimation Sample Consensus (MLESAC), Rodrigues formula, Delaunay triangulation, hierarchical clustering | KUKA KR 6 robot, AccuProfile 820-60 laser, Rexroth Bosch linear motion system | The method successfully automated riveting of fastener components on an aerospace structure | [ |
| To implement a semantic, reach-to-grasp method for an industrial robot | Semantic grasp planning, model-based trajectory generation | Kinect depth sensor | Object discovery accuracy of 95.8%; grasping accuracy of 81.2% | [ |
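One row above extracts circular contours with MLESAC, a RANSAC variant that scores hypotheses by likelihood rather than raw inlier count. The sketch below shows the simpler RANSAC form of the same idea: hypothesize circles from random three-point samples and keep the one with the most inliers. The data, thresholds, and iteration counts are illustrative assumptions, not the cited authors' pipeline.

```python
import math
import random

# RANSAC-style circle extraction sketch: recover a circle's center and radius
# from 2-D points contaminated with outliers, using three-point circumcircle
# hypotheses. Data and thresholds are illustrative assumptions.

random.seed(1)
CX, CY, R = 3.0, -2.0, 5.0                      # ground-truth circle
points = [(CX + R * math.cos(0.1 * k), CY + R * math.sin(0.1 * k))
          for k in range(60)]                   # points on the circle
points += [(random.uniform(-10, 10), random.uniform(-10, 10))
           for _ in range(15)]                  # outliers

def circumcircle(a, b, c):
    """Circle through three points; returns (cx, cy, r) or None if collinear."""
    d = 2 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
    if abs(d) < 1e-12:
        return None
    ux = ((a[0]**2+a[1]**2)*(b[1]-c[1]) + (b[0]**2+b[1]**2)*(c[1]-a[1])
          + (c[0]**2+c[1]**2)*(a[1]-b[1])) / d
    uy = ((a[0]**2+a[1]**2)*(c[0]-b[0]) + (b[0]**2+b[1]**2)*(a[0]-c[0])
          + (c[0]**2+c[1]**2)*(b[0]-a[0])) / d
    return ux, uy, math.hypot(a[0] - ux, a[1] - uy)

best, best_inliers = None, 0
for _ in range(200):                            # random three-point hypotheses
    model = circumcircle(*random.sample(points, 3))
    if model is None:
        continue
    cx, cy, r = model
    inliers = sum(abs(math.hypot(x - cx, y - cy) - r) < 0.05 for x, y in points)
    if inliers > best_inliers:
        best, best_inliers = model, inliers

cx, cy, r = best
print(round(cx, 2), round(cy, 2), round(r, 2))  # prints: 3.0 -2.0 5.0
```

MLESAC would replace the hard inlier count with a mixture-likelihood score, which makes the selection less sensitive to the inlier distance threshold.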
Summary of research on sensors and instrumentation in robotics.
| Aims | Methods | Hardware | Achievements | Ref. |
|---|---|---|---|---|
| To compensate for undetermined calibration values, sensor movement latency, and displacement offsets of an IMU | Multilayer perceptrons, deep neural networks | IMU and IR sensors | 69% reduction in tracking errors | [ |
| To improve the accuracy of a force/torque sensor | Linear regression | High dynamic range F/T sensor based on a flexure mechanism | Accuracy was improved using time-series data for sensor calibration | [ |
| To calibrate an augmented reality device using 3D depth sensor data | Neural network based on the VoteNet architecture | Microsoft Hololens | Elimination of external tools used for augmented reality data calibration | [ |
| To develop a methodology to detect and localize external contact | Random Forests and multilayer perceptrons | Proprioceptive sensors (joint positions, velocities, and one-dimensional (1D) joint torques), Kinova Jaco 2 manipulator | The time constant to detect contact equals 0.005 s in cases with a high contact force gradient | [ |
| To improve the accuracy of IMUs used for position tracking | ML regression models based on long short-term memory | Xsens Avatar, 17 IMUs | The proposed method ensures a lower average error of position tracking | [ |
| To develop a fault prediction system for industrial robots | Gaussian mixture model-based unsupervised fault detection framework | Industrial robot | Prediction of gear wear faults in the robot with higher than 96% accuracy | [ |
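The last row's unsupervised fault detection idea can be sketched with a one-dimensional, two-component Gaussian mixture fitted by expectation-maximization to "healthy" features, flagging low-likelihood samples as potential faults. The data generator, component count, and threshold rule below are illustrative assumptions, not the cited system.

```python
import math
import random

# GMM-based anomaly detection sketch: fit a 2-component 1-D mixture to
# "healthy" vibration features with EM, then flag low-likelihood samples.
# Data and parameters are illustrative assumptions.

random.seed(2)
healthy = ([random.gauss(1.0, 0.1) for _ in range(200)]
           + [random.gauss(2.0, 0.1) for _ in range(200)])

w = [0.5, 0.5]          # mixture weights
mu = [0.8, 2.2]         # component means (rough initial guesses)
sd = [0.3, 0.3]         # component standard deviations

def pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

for _ in range(50):
    # E-step: responsibility of component 0 for each sample
    resp = []
    for x in healthy:
        p0 = w[0] * pdf(x, mu[0], sd[0])
        p1 = w[1] * pdf(x, mu[1], sd[1])
        resp.append(p0 / (p0 + p1))
    # M-step: re-estimate weights, means, and stds from responsibilities
    for k, rk in ((0, resp), (1, [1 - r for r in resp])):
        n = sum(rk)
        w[k] = n / len(healthy)
        mu[k] = sum(r * x for r, x in zip(rk, healthy)) / n
        sd[k] = max(1e-3, math.sqrt(sum(r * (x - mu[k]) ** 2
                                        for r, x in zip(rk, healthy)) / n))

def loglik(x):
    return math.log(w[0] * pdf(x, mu[0], sd[0])
                    + w[1] * pdf(x, mu[1], sd[1]) + 1e-300)

threshold = min(loglik(x) for x in healthy)     # most extreme healthy sample
print(loglik(1.05) > threshold, loglik(5.0) < threshold)  # prints: True True
```

A production system would fit the mixture in a higher-dimensional feature space and tune the threshold against a validation set rather than taking the healthy minimum.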
Figure 1. Control system scheme: blue, Python programs; green, C++ program; yellow, physical peripherals; grey, physical computer elements and accessories.
Figure 2. Principle of the experimental and simulation algorithms in the Main module (from Figure 1).
Figure 3. Robot “home” position (A) and target position (B). The robot moved between these two positions during each experimental cycle. 1, target; 2, robot; 3, microscope.
Figure 4. Algorithm configuration parameters and their significance.
Figure 5. Variation of the final position coordinates along the (A) y-axis and (B) z-axis: 1, with correction; 2, without correction.
Figure 6. Variation of positioning deviation (error) with the number of iterations: 1, with correction; 2, without correction.