Abstract

In this work, we tackle the challenging problem of grasping novel objects with a high-DoF anthropomorphic hand-arm system. Combining fingertip tactile sensing, joint torques, and proprioception, a multimodal agent is trained in simulation to learn the finger motions and to determine when to lift an object. Binary contact information and level-based joint torques simplify transferring the learned model to the real robot. To reduce the exploration space, we first generate postural synergies by collecting a dataset covering various grasp types and applying principal component analysis. Curriculum learning is further applied to adjust and randomize the initial object pose based on training performance. Simulation and real-robot experiments with dedicated initial grasping poses show that our method outperforms two baseline models in grasp success rate for both seen and unseen objects. This learning approach also serves as a foundation for complex in-hand manipulation based on a multi-sensory system.
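The abstract describes extracting postural synergies by applying principal component analysis to a dataset of grasp postures. The following is a minimal sketch of that idea under stated assumptions: the dataset file name, the number of synergies, and the use of scikit-learn's PCA are all hypothetical stand-ins, not the authors' actual pipeline.

```python
# Minimal sketch: postural synergies via PCA over a grasp-posture dataset.
# Assumptions (not from the paper): the dataset file name, 3 synergies,
# and scikit-learn's PCA as the implementation.
import numpy as np
from sklearn.decomposition import PCA

# grasp_poses: (n_grasps, n_joints) hand joint-angle vectors covering various grasp types
grasp_poses = np.load("grasp_dataset.npy")   # hypothetical file

n_synergies = 3                              # assumed number of principal components
pca = PCA(n_components=n_synergies)
pca.fit(grasp_poses)

def synergies_to_joints(alpha):
    """Map low-dimensional synergy activations back to full joint angles."""
    # pca.mean_: (n_joints,), pca.components_: (n_synergies, n_joints)
    return pca.mean_ + alpha @ pca.components_

# The RL agent can then act in the low-dimensional synergy space instead of
# commanding every joint directly, shrinking the exploration space.
```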

Highlights

  • Even though the two-fingered grasping problem has been widely studied and has reached a satisfactory success rate, multifingered grasping is still far from solved

  • We introduce a multifingered grasping agent that fuses multimodal sensor data and is trained with reinforcement learning

  • In our multifingered grasping task, a training episode terminates after the lifting attempt, and a binary reward ∈ {0, 1} is returned indicating whether the robot picked up the object successfully (a minimal sketch of this reward follows the list)
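
As a minimal sketch of this sparse, episode-level reward, assuming a hypothetical simulation handle `sim` with `lift_object()` and `object_height()` helpers (not the paper's actual interface):

```python
# Minimal sketch of the sparse, episode-level reward described above.
# `sim`, `lift_object()` and `object_height()` are hypothetical stand-ins
# for the actual simulation API.

LIFT_HEIGHT = 0.1  # metres; assumed success threshold

def terminal_reward(sim) -> float:
    """Return 1.0 if the object was picked up successfully, else 0.0.

    The episode terminates right after the lifting attempt, so this is
    the only reward the agent receives.
    """
    sim.lift_object()                              # command the hand-arm system to lift
    success = sim.object_height() > LIFT_HEIGHT
    return 1.0 if success else 0.0
```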


Summary

INTRODUCTION

Even though the two-fingered grasping problem has been widely studied and has reached a satisfactory success rate, multifingered grasping is still far from solved. Inspired by how humans grasp objects, we develop a robust robotic grasping strategy by merging multiple sensing modalities with an anthropomorphic dexterous hand. The fusion of data from tactile fingertips, torque sensors, and robot proprioception (joint positions) promises a more robust and intelligent method to teach the robot multifingered grasping. These observations, together with a reduced action space, are used to train a multimodal reinforcement learning (RL) based agent. With this agent, the dexterous robotic hand can close its fingers and grasp the object successfully. We introduce a multifingered grasping agent that fuses multimodal sensor data (fingertip tactile sensing, joint torques, and hand proprioception) and is trained with reinforcement learning. Our robot experiments show that the agent trained in simulation works well on the real robot system and outperforms the baseline methods. Through comparisons of models with different modalities and against different baselines, we verify the effectiveness of our proposed algorithm.
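To make the fusion concrete, here is a minimal sketch of how such a multimodal observation could be assembled; the field names, torque thresholds, and dimensions are assumptions for illustration, not the paper's exact interface.

```python
# Minimal sketch: fusing binary fingertip contacts, level-based joint torques,
# and joint positions into one observation vector for the RL policy.
# Field names, torque thresholds, and dimensions are assumptions for illustration.
import numpy as np

def build_observation(contacts, torques, joint_pos, torque_levels=(0.2, 0.6)):
    """Concatenate the three sensing modalities into a single flat vector.

    contacts  : per-fingertip booleans -> binary contact flags
    torques   : per-joint torques (N*m) -> bucketed into discrete levels,
                which transfer more easily from simulation to the real robot
    joint_pos : per-joint positions (rad), used as-is (proprioception)
    """
    contact_bits = np.asarray(contacts, dtype=np.float32)
    torque_bins = np.digitize(np.abs(torques), torque_levels).astype(np.float32)
    return np.concatenate([contact_bits, torque_bins,
                           np.asarray(joint_pos, dtype=np.float32)])

# Example: 5 fingertips and 20 hand joints -> observation of length 5 + 20 + 20 = 45
obs = build_observation(contacts=[1, 1, 0, 1, 0],
                        torques=np.zeros(20),
                        joint_pos=np.zeros(20))
```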

Multifingered grasping
Dimensional reduction for multifingered hands
GRASP SYNERGIES DATASET
MULTIMODAL GRASPING POLICY
Simulation environment
Observations
Actions
Reward
Curriculum Learning
EXPERIMENT
Simulation Results
Initial grasp generation for real robot experiments
Sensor mapping
Real Robot Verification
Findings
CONCLUSION AND FUTURE WORK
