Abstract

Caging grasps provide a way to manipulate an object without fully immobilizing it and can cope with uncertainty in the object's pose. Most previous work has constructed caging sets from a geometric model of the object. This work presents a learning-based method for caging a novel object using only its image. A caging set is first defined via the constrained region, and a mapping from image features to the caging set is then constructed with a kernel regression function. To avoid collecting a large number of samples, a multi-task learning method is developed to build the regression function, in which several different caging tasks are trained with a joint model. To transfer caging experience to a new task rapidly, shape similarity is used for caging knowledge transfer. Thus, given only the shape context of a novel object, the learner can accurately predict its caging set through zero-shot learning. The proposed method can be applied to caging a target object in a complex real-world environment: the user only needs the shape feature of the object, not its geometric model. Several experiments demonstrate the validity of the method.
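To make the described pipeline concrete, the sketch below illustrates one plausible reading of it: kernel ridge regression from shape-context feature vectors to caging-set parameters, trained jointly over several caging tasks, with a zero-shot prediction for a novel object obtained by blending the per-task predictors with shape-similarity weights. This is a minimal sketch, not the authors' implementation; the RBF kernel, the ridge formulation, the two-parameter caging-set output, and all names here are illustrative assumptions, since the abstract does not specify these details.

```python
# Minimal sketch (assumptions throughout): multi-task kernel ridge regression
# from shape-context features to caging-set parameters, with zero-shot
# prediction via shape-similarity weighting. Not the paper's implementation.
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel between rows of A and rows of B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

class MultiTaskCagingRegressor:
    """Joint kernel regression over several caging tasks.

    All tasks share the feature space (shape-context descriptors); each
    task t has its own targets Y_t (caging-set parameters). Sharing the
    kernel matrix across tasks is what lets few samples per task suffice.
    """

    def __init__(self, gamma=0.5, lam=1e-3):
        self.gamma, self.lam = gamma, lam

    def fit(self, X, Y_per_task):
        # X: (n, d) shape-context features shared across tasks.
        # Y_per_task: list of (n, m) target matrices, one per training task.
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        A = K + self.lam * np.eye(len(X))
        # One set of dual coefficients per task, against the shared kernel.
        self.alphas = [np.linalg.solve(A, Y) for Y in Y_per_task]
        return self

    def predict_task(self, X_new, task):
        return rbf_kernel(X_new, self.X, self.gamma) @ self.alphas[task]

    def predict_zero_shot(self, X_new, similarities):
        """Zero-shot prediction for a novel object: blend the per-task
        predictors with weights given by shape similarity to each
        training object (weights assumed to sum to 1)."""
        preds = [self.predict_task(X_new, t) for t in range(len(self.alphas))]
        return sum(w * p for w, p in zip(similarities, preds))

# Toy usage: 3 training objects, 20 samples, 32-dim shape-context features,
# 2 caging-set parameters (e.g. min/max gripper opening -- an assumption).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 32))
Ys = [rng.normal(size=(20, 2)) for _ in range(3)]
model = MultiTaskCagingRegressor().fit(X, Ys)
sim = np.array([0.6, 0.3, 0.1])             # shape similarity to each task
print(model.predict_zero_shot(X[:1], sim))  # predicted caging-set parameters
```

The key design point this sketch reflects is that the expensive part (the kernel matrix over shared features) is computed once and reused by every task, while zero-shot transfer reduces to a convex combination of existing task predictors keyed on shape similarity.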
