Abstract

Soft hands are robotic systems that embed compliant elements in their mechanical design. This enables effective adaptation to objects and the environment, and ultimately an increase in grasping performance. When operated by a human, these hands offer clear advantages in ease of use and robustness compared with classic rigid hands. However, their potential for autonomous grasping is still largely unexplored, due to the lack of suitable control strategies. To address this issue, in this letter, we propose an approach that enables soft hands to autonomously grasp objects, starting from observations of human strategies. A classifier realized through a deep neural network takes as input visual information on the object to be grasped, and predicts which action a human would perform to achieve the goal. This prediction is then used to select one of a set of human-inspired primitives, which define the evolution of the soft hand posture as a combination of an anticipatory action and a touch-based reactive grasp. The architecture is completed by the hardware components: an RGB camera to observe the scene, a 7-DoF manipulator, and a soft hand. The latter is equipped with inertial measurement units at the fingernails for detecting contact with the object. We extensively tested the proposed architecture with 20 objects, achieving a success rate of 81.1% over 111 grasps.
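The pipeline described above can be illustrated with a minimal sketch: a classifier scores the object image against a set of human-inspired primitives, the highest-scoring primitive is selected, and the hand's closure follows a slow anticipatory pre-shape until the fingertip sensors report contact, then closes reactively. The primitive names, the scoring interface, and the closure law are all hypothetical, not the authors' implementation.

```python
# Illustrative sketch only -- primitive labels, classifier scores, and the
# closure law below are assumptions, not the paper's actual code.

# Hypothetical set of human-inspired grasp primitives, indexed by class id.
PRIMITIVES = {
    0: "top",    # approach from above with a pinch-like pre-shape
    1: "side",   # lateral approach with a power-grasp pre-shape
    2: "slide",  # slide the object toward an edge before grasping
}

def select_primitive(class_scores):
    """Pick the primitive with the highest classifier score (argmax)."""
    best = max(range(len(class_scores)), key=lambda i: class_scores[i])
    return PRIMITIVES[best]

def hand_closure(t, contact):
    """Toy closure command in [0, 1]: slow anticipatory pre-shaping over
    time t (seconds) until the fingertip IMUs report contact, then a full
    reactive close."""
    if contact:
        return 1.0
    return min(1.0, 0.2 * t)

# Example: scores produced by the network for one object image.
primitive = select_primitive([0.1, 0.7, 0.2])  # selects "side"
```

A real system would replace the score list with the output of the deep network and drive the manipulator and hand synergy from the selected primitive; the sketch only shows the decision structure.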
