Abstract

The ability to use tools can significantly increase the range of activities that an agent is capable of. Humans start using external objects from an early age to accomplish their goals, learning from interaction and observation the relationship between the objects used, their own actions, and the resulting effects, i.e., the tool affordances. Robots capable of autonomously learning affordances in a similar self-supervised way would be far more versatile and simpler to design than purpose-specific ones. This paper proposes and evaluates an approach that allows robots to learn tool affordances from interaction and to generalize them among similar tools based on their 3-D geometry. A set of actions is performed by the iCub robot with a large number of tools grasped in different poses, and the resulting effects are observed. Tool affordances are learned as a regression between tool-pose features and action-effect vector projections on respective self-organizing maps, which enables the system to avoid categorization and keep gradual representations of both elements. Moreover, we propose a set of robot-centric 3-D tool descriptors and study their suitability for interaction scenarios, also comparing their performance against features derived from deep convolutional neural networks. Results show that the presented methods allow the robot to predict the effect of its tool-use actions accurately, even for previously unseen tools and poses, and thereby to select the best action for a particular goal given a tool-pose.
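The pipeline described above (project tool-pose features and action-effect vectors onto separate self-organizing maps, then regress between the two projections) can be illustrated with a minimal sketch. This is an assumption-laden toy example using synthetic data, the MiniSom and scikit-learn libraries, arbitrary feature dimensions and map sizes, and a k-nearest-neighbors regressor; none of these choices are claimed to be the paper's actual implementation.

```python
# Toy sketch of the SOM-projection + regression idea (not the paper's code).
import numpy as np
from minisom import MiniSom                      # pip install minisom
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 trials of 50-D tool-pose descriptors and 4-D effect vectors.
tool_pose_feats = rng.normal(size=(200, 50))
effects = rng.normal(size=(200, 4))

# Train one SOM per modality (map sizes here are arbitrary).
som_tool = MiniSom(8, 8, tool_pose_feats.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som_tool.train_random(tool_pose_feats, 1000)
som_eff = MiniSom(8, 8, effects.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som_eff.train_random(effects, 1000)

# Project each sample to its best-matching unit (grid coordinates) on its map.
tool_proj = np.array([som_tool.winner(x) for x in tool_pose_feats], dtype=float)
eff_proj = np.array([som_eff.winner(y) for y in effects], dtype=float)

# Learn a regression from tool-pose map coordinates to effect map coordinates.
reg = KNeighborsRegressor(n_neighbors=5).fit(tool_proj, eff_proj)

# Predict the effect projection for a previously unseen tool-pose descriptor.
new_feat = rng.normal(size=50)
new_proj = np.array([som_tool.winner(new_feat)], dtype=float)
print("Predicted effect-map coordinates:", reg.predict(new_proj))
```

In the same spirit as the abstract, prediction happens in the low-dimensional map space rather than over discrete tool or effect categories, so the representation stays gradual; action selection could then compare the predicted effect coordinates of candidate actions against a goal.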
