Abstract

This article addresses one of the most demanding tasks in the manufacturing and industrial maintenance sectors: a novel and robust robotic solution for detecting a fastener and its rotation in (un)screwing tasks on surfaces parallel to the tool. To this end, the vision system is based on an industrial camera with a dynamic exposure time, a tunable liquid crystal lens (TLCL), and active near-infrared (NIR) illumination. Its camera parameters, combined with a fixed working distance (WD) and a variable or constant field of view (FOV), make it possible to work with a variety of fastener sizes under several lighting conditions. The development also uses a collaborative robot with an embedded force sensor to verify the success of the fastener localization in a real test. Robust algorithms based on segmentation neural networks (SNNs) and computer vision were developed to find the center and rotation of hexagonal fasteners in flawless, worn, scratched, and rusty conditions. The SNNs were tested on a graphics processing unit (GPU), a central processing unit (CPU), and edge devices, namely the NVIDIA Jetson Xavier NX (JXNX), the Intel Neural Compute Stick 2 (INCS2), and the M.2 Accelerator with Dual Edge TPU (DETPU), with optimization parameters such as unsigned integer (UINT) and floating-point (FP) precision, to characterize their performance. A virtual programmable logic controller (PLC) was mounted on a personal computer (PC) as the main controller to process the images and save the data. Moreover, a mathematical analysis based on International Organization for Standardization (ISO) standards and patents of the manual socket wrench was performed to determine the maximum allowable error. In addition, the work was substantiated through exhaustive evaluation tests validating the tolerance errors, the robotic forces for successfully completed tasks, and the implemented algorithms. As a result, the translation tolerance increases with fastener size, from 0.75 for M6 to 2.50 for M24, whereas the rotation tolerance decreases with size, from 5.5° for M6 to 3.5° for M24. The proposed methodology is a robust solution for handling outlier contours and spurious vertices produced by distorted masks under non-constant illumination; it reaches an average detection accuracy of 99.86% for the vertices and 100% for the center, and the SNN together with the proposed algorithms takes 73.91 ms on an Intel Core i9 CPU. This work is a valuable contribution to industrial robotics and improves current applications.
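
As a purely illustrative sketch (not the authors' implementation), the following Python/OpenCV snippet shows one common way to recover a hexagonal fastener's center and rotation from a binary segmentation mask such as an SNN output. The function name `hexagon_pose_from_mask`, the polygon-approximation tolerance of 0.02 × perimeter, and the use of OpenCV 4 contour tools are assumptions made only for this example.

```python
# Illustrative sketch only: estimate the center and rotation of a hexagonal
# fastener from a binary segmentation mask. Assumes OpenCV >= 4 and NumPy.
import cv2
import numpy as np


def hexagon_pose_from_mask(mask: np.ndarray):
    """Return (cx, cy, angle_deg) for the largest hexagon-like blob in a
    binary mask, or None if no clean six-vertex contour is found."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Keep only the largest contour; smaller blobs are treated as noise.
    cnt = max(contours, key=cv2.contourArea)

    # Center from the contour's image moments.
    m = cv2.moments(cnt)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

    # Approximate the contour by a polygon; a clean hexagon yields 6 vertices.
    peri = cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, 0.02 * peri, True).reshape(-1, 2)
    if len(approx) != 6:
        return None  # distorted masks need more robust handling, as in the paper

    # Rotation: fold each vertex angle into the hexagon's 60-degree symmetry
    # period and take a circular mean to avoid wrap-around bias.
    angles = np.arctan2(approx[:, 1] - cy, approx[:, 0] - cx)
    folded = np.mod(angles, np.pi / 3.0)
    mean_angle = np.angle(np.mean(np.exp(1j * folded * 6.0))) / 6.0
    angle_deg = np.degrees(np.mod(mean_angle, np.pi / 3.0))
    return cx, cy, angle_deg
```

Because of the hexagon's six-fold symmetry, only the rotation modulo 60° is relevant for engaging a socket, which is why the rotation tolerances reported above (3.5° to 5.5°) are small angles within that period.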
