Abstract

The successful execution of grasping by a robot hand requires translating visual information into control signals that give the hand the spatial orientation and preshape needed to grasp an arbitrary object. An approach that separates this task into two modules is presented. A vision module transforms an image into a volumetric shape description based on generalized cones. The data structure containing this geometric information is then passed to a grasping module, which derives a list of feasible grasping modes and a set of control signals for the robot hand. Features of both modules are discussed.
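To make the two-module separation concrete, the following is a minimal Python sketch of how a generalized-cone shape description might feed a grasp planner. All names (GeneralizedCone, GraspMode, feasible_grasp_modes) and thresholds are hypothetical illustrations, assuming circular cross-sections and a simple two-fingered hand; they are not the paper's actual representation or algorithm.

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class GeneralizedCone:
    """One volumetric primitive: a planar cross-section swept along a spine.

    Simplified here to circular cross-sections sampled at spine points.
    """
    axis: List[Vec3]      # sampled spine points, base to tip (metres)
    radii: List[float]    # cross-section radius at each spine point (metres)


@dataclass
class GraspMode:
    name: str            # e.g. "wrap" or "pinch"
    aperture: float      # finger opening (preshape) before contact (metres)
    grasp_point: Vec3    # spine point the hand should centre on


def feasible_grasp_modes(cone: GeneralizedCone,
                         max_aperture: float = 0.10,
                         min_pinch_radius: float = 0.005) -> List[GraspMode]:
    """Grasping module: map one shape primitive to feasible grasp modes.

    A wrap grasp is feasible where a cross-section fits within the hand's
    maximum aperture; a pinch grasp is feasible at a thin but graspable end.
    """
    modes: List[GraspMode] = []

    # Wrap grasp: centre the hand on the narrowest cross-section,
    # approaching perpendicular to the spine.
    i_min = min(range(len(cone.radii)), key=lambda i: cone.radii[i])
    if 2.0 * cone.radii[i_min] <= max_aperture:
        # Preshape slightly wider than the object, capped at max aperture.
        modes.append(GraspMode(
            "wrap",
            aperture=min(max_aperture, 2.4 * cone.radii[i_min]),
            grasp_point=cone.axis[i_min],
        ))

    # Pinch grasp: feasible at either end whose radius is small but graspable.
    for end in (0, -1):
        if min_pinch_radius <= cone.radii[end] <= max_aperture / 2.0:
            modes.append(GraspMode(
                "pinch",
                aperture=min(max_aperture, 3.0 * cone.radii[end]),
                grasp_point=cone.axis[end],
            ))
    return modes


if __name__ == "__main__":
    # A tapering peg: 10 cm long, radius shrinking from 3 cm to 1 cm.
    peg = GeneralizedCone(
        axis=[(0.0, 0.0, 0.1 * t) for t in (0.0, 0.25, 0.5, 0.75, 1.0)],
        radii=[0.03, 0.025, 0.02, 0.015, 0.01],
    )
    for mode in feasible_grasp_modes(peg):
        print(mode)
```

In this sketch the vision module's output is reduced to the GeneralizedCone records themselves; in the paper's architecture, that data structure is the interface between the two modules, so the grasp planner never touches raw image data.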
