
A Manufacturing-Oriented Intelligent Vision System Based on Deep Neural Network for Object Recognition and 6D Pose Estimation.

Guoyuan Liang1,2,3,4, Fan Chen1, Yu Liang1, Yachun Feng1,2,3,4, Can Wang1,2,3,4, Xinyu Wu1,2,3,4.   

Abstract

Nowadays, intelligent robots are widely applied in the manufacturing industry, in various workplaces and assembly lines. In most manufacturing tasks, determining the category and pose of parts is important yet challenging due to complex environments. This paper presents a new two-stage intelligent vision system based on a deep neural network with RGB-D image inputs for object recognition and 6D pose estimation. A densely connected network fusing multi-scale features is first built to segment the objects from the background. The 2D pixels and 3D points in the cropped object regions are then fed into a pose estimation network, which predicts object poses based on the fusion of color and geometry features. By introducing channel and position attention modules, the pose estimation network achieves effective feature extraction, stressing important features whilst suppressing unnecessary ones. Comparative experiments with several state-of-the-art networks on two well-known benchmark datasets, YCB-Video and LineMOD, verified the effectiveness and superior performance of the proposed method. Moreover, we built a vision-guided robotic grasping system based on the proposed method, using a Kinova Jaco2 manipulator with an RGB-D camera installed. Grasping experiments showed that the robot system can effectively perform common operations such as picking up and moving objects, demonstrating its potential for a wide range of real-time manufacturing applications.
Copyright © 2021 Liang, Chen, Liang, Feng, Wang and Wu.
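The abstract's channel and position attention modules can be illustrated with a minimal NumPy sketch. This is a simplified, self-attention-style formulation (no learned query/key projections); the paper's actual modules, layer sizes, and scaling parameters may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(feat, gamma=1.0):
    """Position (spatial) attention: every position attends to all others.
    feat: (C, N) feature map, flattened over the N = H*W positions."""
    energy = feat.T @ feat            # (N, N) pairwise position similarity
    attn = softmax(energy, axis=-1)   # each row sums to 1
    return gamma * (feat @ attn.T) + feat   # residual connection

def channel_attention(feat, gamma=1.0):
    """Channel attention: reweights channels by inter-channel similarity.
    feat: (C, N) feature map."""
    energy = feat @ feat.T            # (C, C) channel similarity
    attn = softmax(energy, axis=-1)
    return gamma * (attn @ feat) + feat     # residual connection

# Toy 8-channel feature map over a 4x4 region (N = 16 positions)
x = np.random.randn(8, 16)
y = channel_attention(position_attention(x))
print(y.shape)  # (8, 16)
```

The residual additions mean each module refines, rather than replaces, the input features, which matches the abstract's description of stressing important features while suppressing unnecessary ones.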

Keywords:  6D pose estimation; deep neural network; intelligent manufacturing; object recognition; semantic segmentation

Year:  2021        PMID: 33488378      PMCID: PMC7817625          DOI: 10.3389/fnbot.2020.616775

Source DB:  PubMed          Journal:  Front Neurorobot        ISSN: 1662-5218            Impact factor:   2.650


  2 in total

1.  DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

Authors:  Liang-Chieh Chen; George Papandreou; Iasonas Kokkinos; Kevin Murphy; Alan L Yuille
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2017-04-27       Impact factor: 6.226

2.  SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation.

Authors:  Vijay Badrinarayanan; Alex Kendall; Roberto Cipolla
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2017-01-02       Impact factor: 6.226

  2 in total

1.  Wide-angle, monocular head tracking using passive markers.

Authors:  Balazs P Vagvolgyi; Ravikrishnan P Jayakumar; Manu S Madhav; James J Knierim; Noah J Cowan
Journal:  J Neurosci Methods       Date:  2021-12-27       Impact factor: 2.390

2.  DOPE++: 6D pose estimation algorithm for weakly textured objects based on deep neural networks.

Authors:  Mei Jin; Jiaqing Li; Liguo Zhang
Journal:  PLoS One       Date:  2022-06-08       Impact factor: 3.752

