Abstract

Industrial robots are evolving rapidly in the manufacturing industry. There are two main techniques for programming robots: online and offline programming. However, the time required to program a new trajectory is a major obstacle to deploying welding robots, which makes both approaches inefficient. This paper presents a two-stage method employing multi-sensor interaction for the path planning of a welding robot. The proposed scheme enhances weld-seam trajectory generation and creates a highly adjustable, intelligent guidance programming system for welding robots. In the global stage, an RGB-D camera combining fast 2D object recognition with 3D reconstruction is used to quickly identify a coarse trajectory; the 2D recognition and 3D reconstruction are carried out by a deep neural network model and a stereo-vision sensor module, respectively. This technique can replace offline programming or hand-guided teaching of the trajectory, particularly for welding robot applications. In the local positioning stage, a laser vision module is then applied to obtain more precise information about the local environment, guided by the coarse trajectory obtained in the global stage. The efficacy of the proposed system is analyzed through numerous tests on an experimental setup. The experimental results demonstrate that the proposed method has great potential for automating welding robots in the manufacturing sector.
