Abstract

In order to improve industrial production efficiency, a hand–eye system based on 3D vision is proposed and applied to the assembly of workpieces. First, a hand–eye calibration optimization algorithm based on data filtering is proposed in this paper. This method ensures the accuracy required for hand–eye calibration by filtering out part of the improper data. Furthermore, an improved U-net is adopted for image segmentation, and SAC-IA coarse registration combined with ICP fine registration is adopted for point cloud registration. This improves the accuracy of the object's 6D pose estimation. Through the hand–eye calibration method based on data filtering, the average error of hand–eye calibration is reduced by 0.42 mm, to 0.08 mm. Compared with other models, the improved U-net proposed in this paper has higher accuracy for depth image segmentation, with an Acc coefficient of 0.961 and a Dice coefficient of 0.876. The average translation error, average rotation error, and average processing time of the object recognition and pose estimation method proposed in this paper are 1.19 mm, 1.27°, and 7.5 s, respectively. The experimental results show that the proposed system can complete high-precision assembly tasks.
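The fine-registration stage named above is standard point-to-point ICP. As a rough illustration of that step only (not the authors' implementation; the SAC-IA coarse stage is omitted and a nearly aligned starting pose is assumed), a minimal NumPy sketch:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch: least-squares rotation R and translation t mapping src -> dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=50, tol=1e-8):
    """Point-to-point ICP: alternate nearest-neighbour matching and refitting."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for small clouds).
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        R, t = best_rigid_transform(cur, dst[nn])
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.mean(np.linalg.norm(cur - dst[nn], axis=1))
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```

In practice the SAC-IA result would be applied to the source cloud before calling `icp`, since ICP only converges from a good initial alignment; production pipelines typically use a point cloud library rather than this brute-force sketch.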

Highlights

  • The field of automatic robotic assembly has attracted much attention

  • The experiment results show that the general automatic assembly system based on 3D vision can complete high-precision assembly tasks


Introduction

The field of automatic robotic assembly has attracted much attention. In recent years, automatic robotic assembly technology has gradually been applied to fields such as automobiles, aerospace, and electronics manufacturing. In automatic robotic assembly tasks, the robot is guided by vision sensors or force/torque (F/T) sensors to complete the assembly work. Song et al. proposed robotic assembly skill learning with deep Q-learning, using visual perspectives and force sensing, together with object pose estimation coupled with admittance control, to learn a shaft-in-hole assembly policy [10]. While the combination of a force/torque sensor and a vision sensor can solve the problem of automatic robotic assembly, the force/torque sensor is expensive, which will increase the cost of the system. Wang et al. developed a high-precision assembly system combining robotic visual servoing and deep learning [11]. Section 4 verifies the advantages and feasibility of the proposed system; Section 5 summarizes the work of this paper and discusses future research issues.

Hand–Eye Calibration Optimization Based on Data Filtering
Object Segmentation and Recognition Based on Improved U-Net
Experiment and Discussion
Hand–Eye Calibration Experiment
Pose Estimation and Assembly Experiment
Conclusions
