Abstract

To grasp an object successfully, humans combine visual, tactile, and kinaesthetic information with prior knowledge about the object. When grasping, the forces and torques applied to the surface by each finger must ensure that the object does not move, even in the presence of perturbations. Despite massive advances in computer vision for robotics, visually guided grasping remains a major challenge, and robot grasping benefits greatly from integrating multiple sensing modalities that can be used to estimate the applied forces. This paper presents a heuristic approach to grasping based on the combination of a position controller and a contact force controller. Vision is used to identify and locate the objects to grasp, while tactile sensing is used to achieve a stable grasp, using points inside the object as position references together with a priori estimates of appropriate contact forces. We tested our grasping system using a Kinect RGBD camera and a Shadow robot hand with BioTAC tactile sensors on three fingers, which was able to grasp ten unknown objects in a tabletop scenario. The proposed controller successfully adapted to the different shapes of the objects, providing stable grasps in ∼0.5 seconds from contact.
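The core idea of the abstract, driving each finger toward a position reference inside the object and handing over to a force controller once contact is made, can be illustrated with a minimal sketch. This is not the authors' implementation: the 1-D finger model, function names, gains, and the linear-spring contact model are all illustrative assumptions.

```python
def grasp_step(pos, force, target_pos, force_ref, kp=0.5, kf=0.01):
    """Return the next commanded finger position for one control step.

    Before contact force reaches the a priori setpoint, a proportional
    position controller drives the finger toward a reference point
    *inside* the object; afterwards, a proportional force controller
    regulates the contact force around the setpoint.
    """
    if force < force_ref:
        # Position phase: move toward the reference inside the object.
        return pos + kp * (target_pos - pos)
    # Force phase: adjust position proportionally to the force error.
    return pos + kf * (force_ref - force)

def simulate_grasp(target_pos=1.0, surface=0.6, stiffness=50.0,
                   force_ref=2.0, steps=200):
    """Toy 1-D simulation: contact force = stiffness * penetration depth."""
    pos = 0.0
    for _ in range(steps):
        force = max(0.0, stiffness * (pos - surface))
        pos = grasp_step(pos, force, target_pos, force_ref)
    return pos, max(0.0, stiffness * (pos - surface))
```

In this toy run the finger overshoots slightly on first contact, then the force phase settles the contact force onto the setpoint; the fast settling mirrors the sub-second grasp stabilization reported above, though the real system runs this logic per finger on tactile sensor readings rather than a spring model.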
