Abstract

Autonomous manipulation is a key capability for a humanoid robot. Here, we are interested in a vision-based grasping behavior that allows the robot to deal with previously unknown objects in real time and in an intelligent manner. Starting from a number of feasible candidate grasps, we focus on the problem of predicting their reliability using the knowledge acquired from previous grasping experiences. A set of visual features is defined that takes into account physical properties affecting the stability and reliability of a grasp. A humanoid robot acquires its grasping experience by performing a large number of grasping actions on different objects. An experimental protocol is established in order to classify grasps according to their reliability. A prediction/classification strategy is defined which allows the robot to predict the outcome of a grasp by analyzing only its visual features. The results indicate that the defined features do characterize grasp quality and that the classification method is adequate to predict the reliability of a grasp.
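The abstract does not name the specific visual features or the classifier used, so the sketch below is only a rough illustration of the overall pipeline it describes: feature vectors of candidate grasps, labeled by experimentally observed outcomes, are used to train a supervised classifier that then predicts reliability from visual features alone. The synthetic data and the choice of an SVM are assumptions for illustration, not the authors' method.

```python
# Minimal sketch (not the paper's implementation): predicting grasp
# reliability from visual feature vectors with a generic supervised
# classifier. The features, labels, and SVM choice are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical dataset: one row per executed grasp; columns stand in for
# visual features of the candidate grasp (e.g. contact symmetry, object
# extent); labels record whether the executed grasp was reliable.
rng = np.random.default_rng(0)
X = rng.random((500, 6))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # stand-in for recorded outcomes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = SVC(kernel="rbf", C=1.0)  # any standard classifier could be used here
clf.fit(X_train, y_train)

# Predict the reliability of unseen candidate grasps from their features alone.
y_pred = clf.predict(X_test)
print("held-out accuracy:", accuracy_score(y_test, y_pred))
```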
