For manufacturing robots equipped with 3D vision sensors, environmental interference significantly impedes the precision of edge extraction. Existing edge feature extraction methods often enhance adaptability to interference at the expense of extraction precision. This paper introduces a novel 3D visual edge detection method that achieves greater precision while maintaining adaptability, capable of handling the various forms of interference that arise in real manufacturing scenarios. To address this challenge, data-driven and traditional visual approaches are integrated, with deep groove edge feature extraction and guidance tasks used as a case study. R-CNN and an improved OTSU algorithm with an adaptive threshold are combined to identify groove features. Subsequently, a scale-adaptive average-slope sliding window algorithm is devised to extract groove edge points, along with a corresponding continuity evaluation algorithm. Real data is used to validate the performance of the proposed method. The experimental results show that the average error in processing data with interference is 0.29 mm, with an average maximum error of 0.54 mm, exhibiting superior overall performance and precision compared with both traditional and data-driven methods.
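The core idea of a sliding-window average-slope edge detector can be illustrated with a minimal sketch: average the local slope of a depth profile over a window and flag positions where its magnitude exceeds a threshold. This is only an illustrative simplification under assumed parameters (the function name, fixed window size, and threshold are inventions here); the paper's scale-adaptive window selection and continuity evaluation are not reproduced.

```python
import numpy as np

def sliding_window_edge_points(profile, window=5, slope_thresh=0.3):
    """Illustrative sliding-window average-slope edge detector (not the
    paper's algorithm: fixed window, no scale adaptation or continuity check).

    Flags indices of a 1D depth profile where the mean slope inside a
    sliding window exceeds `slope_thresh` in magnitude.
    """
    profile = np.asarray(profile, dtype=float)
    slopes = np.gradient(profile)                          # point-wise slope
    kernel = np.ones(window) / window
    avg_slope = np.convolve(slopes, kernel, mode="same")   # windowed mean slope
    return np.flatnonzero(np.abs(avg_slope) > slope_thresh)

# Synthetic groove wall: flat surface, downward ramp, groove floor.
profile = np.concatenate([np.zeros(20),
                          -np.linspace(0.0, 5.0, 10),
                          np.full(20, -5.0)])
edge_idx = sliding_window_edge_points(profile)
```

On this synthetic profile, the flagged indices cluster on the ramp (around indices 20-29), while the flat regions on either side stay below the threshold. A real implementation would additionally adapt the window size to the local scale of the groove, as the abstract describes.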