Abstract

This work presents a novel real-time pipeline for modeling and grasping unknown objects with a humanoid robot. The problem is of great interest to the robotics community, since conventional approaches fail when the shape, dimensions, or pose of the objects are unknown. Our approach reconstructs a model of the object under consideration in real time and represents the robot hand with proper, mathematically tractable models, i.e., superquadric functions. The volume graspable by the hand is represented by an ellipsoid and is defined a priori, because the shape of the hand is known in advance. The superquadric representing the object, instead, is obtained in real time from partial visual information, e.g., a single stereo view of the object, and provides an approximate full 3D model. The optimization problem we formulate for computing the grasping pose is solved online using the Ipopt software package and thus requires no off-line computation or learning. Although our approach applies to a generic humanoid robot, we developed a complete software architecture for executing it on the iCub humanoid robot, together with a tutorial on how to use the framework. We believe that our work, together with the available code, is of strong utility to the iCub community for three main reasons: object modeling and grasping are relevant problems for the robotics community, our code can be easily run on any iCub, and the modular structure of our framework readily allows extensions and communication with external code.
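To make the superquadric representation mentioned above concrete, the sketch below implements the standard inside-outside function of a superquadric (from the general superquadric literature, not code taken from the paper's repository). A point is inside the surface when the function evaluates below 1, on the surface at exactly 1, and outside when above 1:

```python
import math

def superquadric_F(x, y, z, a1, a2, a3, e1, e2):
    """Inside-outside function of an origin-centered superquadric.

    a1, a2, a3 are the semi-axes, e1 and e2 the shape exponents
    (e1 = e2 = 1 yields an ellipsoid, the shape used for the hand model).
    F < 1: point inside; F = 1: on the surface; F > 1: outside.
    """
    term_xy = (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return term_xy + abs(z / a3) ** (2.0 / e1)

# Unit sphere: a1 = a2 = a3 = 1, e1 = e2 = 1
print(superquadric_F(1.0, 0.0, 0.0, 1, 1, 1, 1, 1))        # → 1.0 (on surface)
print(superquadric_F(0.2, 0.1, 0.0, 1, 1, 1, 1, 1) < 1.0)  # → True (inside)
```

Ten parameters in total (five shape parameters plus a 6D pose, with one exponent pair shared here for brevity) make superquadrics compact yet expressive enough to approximate boxes, cylinders, and ellipsoids, which is why both the object and the graspable volume of the hand can be described in the same mathematical language.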

Highlights

  • Industrial robotics shows how high performance in manipulation can be achieved if very accurate knowledge of the environment and the objects is available

  • The method and code we propose in this work consist of reconstructing an object model through the robot's stereo vision system and using this information to compute a suitable grasping pose

  • We do not go into the mathematical details (extensively reported in Vezzani et al. (2017)); instead, we focus on describing the code designed for using the approach on the iCub, since we believe it to be useful for any user interested in object modeling and grasping tasks


Summary

INTRODUCTION

Industrial robotics shows how high performance in manipulation can be achieved if very accurate knowledge of the environment and the objects is available. We present a novel framework for modeling and grasping unknown objects with the iCub humanoid robot. The iCub is equipped with two 7-DOF arms, five-fingered human-like hands whose fingertips are covered with tactile sensors, and two cameras, as described in Metta et al. (2010). It is therefore a suitable platform for investigating object perception and grasping problems: the stereo vision system and the tactile sensors can be exploited together to gather the information required for modeling and grasping unknown objects. The method and code we propose in this work consist of reconstructing an object model through the robot's stereo vision system and using this information to compute a suitable grasping pose. We want to contribute in this direction by detailing the code we designed for implementing our grasping approach, for any user interested in executing the technique on the robot.

MODELING AND GRASPING VIA SUPERQUADRIC MODELS
CODE STRUCTURE
Superquadric-Model
SuperqComputation The SuperqComputation thread includes the following steps:
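At the core of this stage is a least-squares fit of the superquadric parameters to the partial point cloud from stereo vision; the actual module solves it online with Ipopt. The stdlib-only sketch below shows the Solina–Bajcsy-style cost that such a fit minimizes (the function names are hypothetical, and only the shape parameters are shown; the real problem also optimizes the 6D pose):

```python
import math

def superq_F(p, a1, a2, a3, e1, e2):
    """Inside-outside function of an origin-centered superquadric."""
    x, y, z = p
    t = (abs(x / a1) ** (2 / e2) + abs(y / a2) ** (2 / e2)) ** (e2 / e1)
    return t + abs(z / a3) ** (2 / e1)

def fitting_cost(points, a1, a2, a3, e1, e2):
    """Least-squares fitting cost: each residual vanishes when a point lies
    on the surface (F = 1); the sqrt(a1*a2*a3) factor biases the fit toward
    the smallest superquadric consistent with the data."""
    k = math.sqrt(a1 * a2 * a3)
    return sum((k * (superq_F(p, a1, a2, a3, e1, e2) ** e1 - 1.0)) ** 2
               for p in points)

# Toy "point cloud": six points on a unit sphere
cloud = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
         (-1, 0, 0), (0, -1, 0), (0, 0, -1)]

print(fitting_cost(cloud, 1, 1, 1, 1, 1))      # → 0.0 (true parameters)
print(fitting_cost(cloud, 2, 1, 1, 1, 1) > 0)  # → True (wrong ellipsoid)
```

A nonlinear solver such as Ipopt searches this cost over the superquadric parameters; the zero cost at the true parameters above illustrates the minimum it converges to.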
Superquadric-Grasping
GraspComputation This class handles the pose candidates’ computation:
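The grasp pose computation can be understood as bringing sample points of the hand's graspable ellipsoid onto the surface of the object superquadric. The real class formulates this as a constrained nonlinear program solved by Ipopt over the full 6D hand pose; the sketch below (hypothetical names, translation-only candidates, stdlib only) only ranks a discrete set of candidate positions with the same surface-deviation cost, to illustrate what the solver optimizes:

```python
import math

def superq_F(p, a1, a2, a3, e1, e2):
    """Inside-outside function of an origin-centered superquadric."""
    x, y, z = p
    t = (abs(x / a1) ** (2 / e2) + abs(y / a2) ** (2 / e2)) ** (e2 / e1)
    return t + abs(z / a3) ** (2 / e1)

def pose_cost(hand_points, offset, obj_params):
    """Sum of squared deviations of the displaced hand-ellipsoid points
    from the object superquadric surface (F = 1)."""
    ox, oy, oz = offset
    return sum((superq_F((x + ox, y + oy, z + oz), *obj_params) - 1.0) ** 2
               for (x, y, z) in hand_points)

# Object model: unit sphere; hand ellipsoid sampled by a few palm points
obj = (1.0, 1.0, 1.0, 1.0, 1.0)
hand = [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (0.0, 0.05, 0.0)]

# Candidate hand positions; the real solver searches continuously instead
candidates = [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 0.0, 3.0)]
best = min(candidates, key=lambda c: pose_cost(hand, c, obj))
print(best)  # → (1.0, 0.0, 0.0): the pose that places the hand on the surface
```

In the full formulation, additional constraints (e.g., hand orientation and obstacle avoidance such as the table plane) restrict the feasible poses, which is why a general-purpose NLP solver like Ipopt is used rather than a discrete search.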
How to Use the Superquadric Framework
KNOWN ISSUES
CONCLUSION
