Abstract

In this paper, we present a multimodal framework for offline learning of generative models of object deformation under robotic pushing. The model is multimodal in that it integrates force and visual information. The framework consists of several submodels that are independently calibrated from the same data. These component models can be sequenced to provide many-step prediction and classification. When presented with a test example, in which a robot finger pushes a deformable object made of an unidentified but previously learned material, the predictions of the modules for the different materials are compared so as to classify the unknown material. Our approach, which consists of offline learning and combination of multiple models, goes beyond previous techniques by enabling: 1) predictions over many steps; 2) learning of plastic and elastic deformation from real data; 3) prediction of forces experienced by the robot; 4) classification of materials from both force and visual data; and 5) prediction of object behavior after contact with the robot terminates. While previous work on deformable object behavior in robotics has offered one or two of these features, none has offered a way to achieve them all, and none has offered classification from a generative model. We do so through separately learned models which can be combined in different ways for different purposes.
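To make the scheme concrete, the sketch below illustrates the general idea of classifying an unknown material by comparing the many-step rollouts of independently learned, per-material one-step predictors against an observed push, scoring both the visual (state) and force modalities. It is a minimal illustration only, not the authors' implementation: the names (MaterialModel, predict_step, classify_material), the mean-squared-error score, and the weighted combination of modalities are all assumptions made for the example.

```python
import numpy as np


class MaterialModel:
    """Hypothetical one-step generative model for a single material.

    Given the current object state (e.g. a visual shape descriptor) and the
    applied pusher action, it predicts the next state and the contact force.
    The step function is assumed to have been learned offline from force and
    visual data for that material.
    """

    def __init__(self, step_fn):
        self.step_fn = step_fn

    def predict_step(self, state, action):
        # Returns (next_state, predicted_force).
        return self.step_fn(state, action)

    def rollout(self, init_state, actions):
        """Sequence the one-step model to obtain a many-step prediction."""
        state = init_state
        states, forces = [], []
        for a in actions:
            state, force = self.predict_step(state, a)
            states.append(state)
            forces.append(force)
        return np.array(states), np.array(forces)


def classify_material(models, init_state, actions, obs_states, obs_forces,
                      w_visual=1.0, w_force=1.0):
    """Pick the material whose model best explains the observed push.

    The score combines visual (state) and force prediction errors, so the
    decision uses both modalities. Weights and error metric are illustrative.
    """
    errors = {}
    for name, model in models.items():
        pred_states, pred_forces = model.rollout(init_state, actions)
        visual_err = np.mean((pred_states - obs_states) ** 2)
        force_err = np.mean((pred_forces - obs_forces) ** 2)
        errors[name] = w_visual * visual_err + w_force * force_err
    best = min(errors, key=errors.get)
    return best, errors
```

In this reading, the same per-material rollouts serve two purposes: they provide many-step prediction of deformation and force for a known material, and, when run for every candidate material, they provide a generative route to classification by selecting the model with the lowest prediction error.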
