Abstract

In the field of hand-eye coordination, most state-of-the-art systems still require the user to select the grasping points manually. We present a system which autonomously determines 3D grasping points on unknown objects from a pair of greyscale images. The object to be grasped is segmented automatically when it is placed in the scene. Grasping points are searched for on the object silhouette; their stability is evaluated by a heuristic algorithm based primarily on the skeleton of the region. The 3D grasping pose is estimated by triangulation using a simplified geometrical model of the camera system; the corresponding points in the second image are determined via dynamic programming. The whole system has been implemented and validated on the experimental hand-eye system MINERVA.
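As an illustration of the triangulation step mentioned in the abstract, the following sketch recovers a 3D point from a matched pair of image points under a simplified rectified stereo model (parallel optical axes, known baseline). This is not the paper's implementation; the focal length `f` and baseline `b` are assumed example parameters.

```python
def triangulate(xl: float, xr: float, y: float,
                f: float, b: float) -> tuple:
    """Recover (X, Y, Z) from a correspondence (xl, y) <-> (xr, y)
    in a rectified stereo pair.

    f -- focal length in pixels (assumed)
    b -- baseline between the two cameras in metres (assumed)
    """
    disparity = xl - xr          # horizontal shift between the two views
    Z = f * b / disparity        # depth is inversely proportional to disparity
    X = xl * Z / f               # back-project image x to world X
    Y = y * Z / f                # back-project image y to world Y
    return (X, Y, Z)

# Example: f = 500 px, b = 0.1 m, matched points at xl = 120, xr = 100, y = 50
X, Y, Z = triangulate(120.0, 100.0, 50.0, f=500.0, b=0.1)
```

In such a rectified setup the correspondence search reduces to a one-dimensional search along the same image row, which is what makes a dynamic-programming matcher, as used in the paper, applicable.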
