Abstract

Caging grasps handle objects without fully immobilizing them, which makes them robust to uncertainty in the object's position and orientation. Most previous work has constructed caging sets from a geometric model of the object. This work presents a learning-based method that cages an object using only its image. A multi-task learning method is developed for caging grasps, in which the caging region is learned directly from the image of the object. Furthermore, several different caging tasks are trained jointly in a single model using the pooled sample data of all tasks, which avoids collecting a large number of training samples for each task. Simulations demonstrate the validity of the method.
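To make the joint-training idea concrete, the sketch below shows one plausible way such a multi-task model could be structured: a shared convolutional encoder feeding one head per caging task, each predicting a per-pixel caging-region map from the object image. The architecture, layer sizes, and all names are illustrative assumptions, not the paper's actual model.

import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Convolutional backbone shared across all caging tasks (assumed design)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)

class MultiTaskCagingNet(nn.Module):
    """Joint model: one shared encoder, one caging-region head per task."""
    def __init__(self, num_tasks=3):
        super().__init__()
        self.encoder = SharedEncoder()
        # Each head maps the shared features to a 1-channel caging-region logit map.
        self.heads = nn.ModuleList(
            nn.Conv2d(32, 1, 1) for _ in range(num_tasks)
        )

    def forward(self, image, task_id):
        return self.heads[task_id](self.encoder(image))

# Joint training pools the samples of all tasks, so the shared encoder
# benefits from more data than any single task provides on its own.
model = MultiTaskCagingNet(num_tasks=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for task_id in range(3):
    image = torch.rand(4, 1, 64, 64)                    # placeholder object images
    target = (torch.rand(4, 1, 64, 64) > 0.5).float()   # placeholder caging masks
    opt.zero_grad()
    loss = loss_fn(model(image, task_id), target)
    loss.backward()
    opt.step()

Under this reading, the task-specific heads stay small while the bulk of the parameters live in the shared encoder, which is what lets each individual task get by with fewer training samples.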
