To achieve efficient, accurate thick-plate welding and to precisely extract and plan the paths of complex three-dimensional weld seams in large steel structures, this study proposes a novel vision-guided approach for robotic welding systems based on a constant-focus laser sensor. The method addresses several shortcomings of conventional vision-guided welding techniques, including limited detection range, low detection and tracking accuracy, and poor real-time performance. For preprocessed weld images, an improved grayscale extreme centroid method was developed to extract the center of the light stripe. A feature point extraction algorithm that combines a maximum-distance search strategy with least-squares fitting was then designed to identify weld seam feature points accurately and in real time. To refine the results, cylindrical filtering was applied to remove large outliers, and local Non-Uniform Rational B-Spline (NURBS) curve interpolation was used to generate smooth, accurate welding trajectories. A spatial vector-based pose adjustment strategy was implemented to guide the welding robot and ensure successful execution of the welding operations. Experimental results showed that the proposed algorithm achieved a tracking error of 0.3197 mm on workpieces 60 mm thick, demonstrating the method’s substantial potential for manufacturing, particularly automated welding.
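As a rough illustration of the stripe-center extraction step, the sketch below computes a gray-level weighted centroid per image column, restricted to pixels near each column's intensity peak. It is only a minimal approximation of the idea behind a grayscale extreme centroid method, not the paper's improved algorithm; the `stripe_centers` helper and the `rel_thresh` parameter are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): a per-column gray-level
# weighted centroid, restricted to pixels near each column's intensity peak.
# Thresholds and the helper name are illustrative assumptions.
import numpy as np

def stripe_centers(gray: np.ndarray, rel_thresh: float = 0.8):
    """Return (col, row) sub-pixel stripe-center estimates for each column.

    gray: 2-D grayscale image in which the laser stripe is brighter than the background.
    rel_thresh: keep pixels whose intensity exceeds rel_thresh * column peak.
    """
    rows = np.arange(gray.shape[0], dtype=float)
    centers = []
    for col in range(gray.shape[1]):
        column = gray[:, col].astype(float)
        peak = column.max()
        if peak <= 0:
            continue  # no stripe signal in this column
        mask = column >= rel_thresh * peak  # pixels near the gray-level extremum
        weights = column[mask]
        center_row = np.sum(rows[mask] * weights) / np.sum(weights)  # intensity-weighted centroid
        centers.append((col, center_row))
    return centers
```

In practice, the resulting sub-pixel center points would feed the subsequent steps summarized in the abstract (feature point extraction, outlier filtering, and NURBS trajectory interpolation).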