Abstract

In industrial contact-based robotic operations, detailed geometrical knowledge of the workpiece is required to automate the generation of the working trajectory. In many cases, however, the digital model is not available or differs from the as-built part. Vision sensors, and in particular 3D vision sensors, allow the workpiece to be scanned and a digital copy to be reconstructed, which is then used to generate the robotic working trajectory. In this paper we compare two algorithms for generating the 3D model of an unknown workpiece from RGB-D images captured from different perspectives. The first technique is based on standard image reconstruction methods commonly used for indoor scenes, where errors on the order of a few centimeters are negligible; here it is adapted to contact-based robotic operations, where the geometrical error must be limited to a few millimeters. It analyzes the images to estimate the pose of the camera at each capture and, based on that pose, integrates the information contained in the images into a single volume representing the scene. The second algorithm directly uses the robot poses, stored while capturing each image, to integrate the information into the model. The two algorithms are compared in terms of accuracy, acquisition time, and processing time. The analysis considers two kinds of objects: the first with a regular shape, the second an actual industrial part.
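
To make the difference between the two strategies concrete, the following is a minimal sketch of such an RGB-D fusion loop, assuming an Open3D-style TSDF pipeline (the abstract does not name the library used). The frame list, voxel size, and hand-eye transform `T_flange_cam` are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): fuse RGB-D frames into a TSDF volume,
# with camera poses either estimated from the images themselves (first algorithm)
# or derived from the robot poses recorded at capture time (second algorithm).
import numpy as np
import open3d as o3d

def fuse_frames(frames, intrinsic, use_robot_poses, T_flange_cam=np.eye(4)):
    """frames: list of (color_path, depth_path, T_base_flange) tuples.
    T_base_flange is the stored robot pose (4x4 matrix) for each capture.
    T_flange_cam is the hand-eye calibration (hypothetical placeholder)."""
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=0.002,          # 2 mm voxels, aiming at mm-level accuracy
        sdf_trunc=0.01,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

    T_world_cam = np.eye(4)          # running camera pose (camera-to-world)
    prev_gray = None
    for color_path, depth_path, T_base_flange in frames:
        color = o3d.io.read_image(color_path)
        depth = o3d.io.read_image(depth_path)
        # RGB version for integration, intensity version for visual odometry.
        rgbd_color = o3d.geometry.RGBDImage.create_from_color_and_depth(
            color, depth, depth_trunc=1.5, convert_rgb_to_intensity=False)
        rgbd_gray = o3d.geometry.RGBDImage.create_from_color_and_depth(
            color, depth, depth_trunc=1.5, convert_rgb_to_intensity=True)

        if use_robot_poses:
            # Second algorithm: camera pose comes directly from the robot pose.
            T_world_cam = T_base_flange @ T_flange_cam
        elif prev_gray is not None:
            # First algorithm: estimate frame-to-frame motion from the images.
            ok, T_prev_curr, _ = o3d.pipelines.odometry.compute_rgbd_odometry(
                rgbd_gray, prev_gray, intrinsic, np.eye(4),
                o3d.pipelines.odometry.RGBDOdometryJacobianFromHybridTerm(),
                o3d.pipelines.odometry.OdometryOption())
            if ok:
                T_world_cam = T_world_cam @ T_prev_curr

        # integrate() expects the extrinsic (world-to-camera) transform.
        volume.integrate(rgbd_color, intrinsic, np.linalg.inv(T_world_cam))
        prev_gray = rgbd_gray

    return volume.extract_triangle_mesh()
```

Here `intrinsic` would typically be an `o3d.camera.PinholeCameraIntrinsic(width, height, fx, fy, cx, cy)` built from the sensor calibration; the known-pose branch skips pose estimation entirely, which is the source of the acquisition- and processing-time difference the paper evaluates.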
