Abstract

Modern manufacturing processes are characterized by growing product diversity and complexity alike. As a result, the demand for fast and flexible process automation is ever increasing. However, higher individuality and smaller batch sizes hamper the use of standard robotic automation systems, which are well suited for repetitive tasks but struggle in unknown environments. Modern manipulators, such as collaborative industrial robots, provide extended capabilities for flexible automation. In this paper, an adaptive ROS-based end-to-end toolchain for vision-guided robotic process automation is presented. The processing steps comprise several consecutive tasks: CAD-based object registration, pose generation for sensor-guided applications, trajectory generation for the robotic manipulator, the execution of sensor-guided robotic processes, and the testing and evaluation of the results. The main benefits of the ROS framework are readily applicable tools for digital twin functionalities and established interfaces for various manipulator systems. To prove the validity of this approach, an application example for surface reconstruction was implemented with a 3D vision system. In this example, feature extraction is the basis for viewpoint generation, which, in turn, defines robotic trajectories to perform the inspection task. Two different feature point extraction algorithms, using neural networks and Voronoi covariance measures, respectively, are implemented and evaluated to demonstrate the versatility of the proposed toolchain. The results showed that complex geometries can be automatically reconstructed and that both approaches outperformed a standard method used as a reference. Hence, extensions to other vision-controlled applications seem to be feasible.
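The consecutive processing steps described in the abstract can be illustrated with a minimal sketch. All function names and the simplified geometry below are illustrative assumptions, not the paper's actual ROS interfaces; in practice, steps such as trajectory generation would be handled by ROS tooling (e.g. MoveIt) rather than the naive ordering shown here.

```python
import math

def extract_feature_points(surface_points):
    # Placeholder feature extraction: subsample every other point.
    # The paper uses neural networks or Voronoi covariance measures here.
    return surface_points[::2]

def generate_viewpoints(feature_points, normals, standoff=0.3):
    # Offset each feature point along its (unit) surface normal to obtain
    # a sensor viewpoint at the given standoff distance (in metres).
    viewpoints = []
    for (px, py, pz), (nx, ny, nz) in zip(feature_points, normals):
        norm = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
        viewpoints.append((px + standoff * nx / norm,
                           py + standoff * ny / norm,
                           pz + standoff * nz / norm))
    return viewpoints

def generate_trajectory(viewpoints):
    # Naive nearest-neighbour ordering of viewpoints as a stand-in
    # for a real motion planner on the manipulator.
    remaining = list(viewpoints)
    path = [remaining.pop(0)]
    while remaining:
        last = path[-1]
        nxt = min(remaining,
                  key=lambda v: sum((a - b) ** 2 for a, b in zip(v, last)))
        remaining.remove(nxt)
        path.append(nxt)
    return path
```

For example, feature points on a flat patch with upward-facing normals yield viewpoints hovering directly above the surface, which the trajectory step then orders into a continuous inspection path.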
