Abstract
When interacting with the external environment, dexterous hands face various challenges, including interference in cluttered environments and difficulty in accurately locating target objects. We observe that the human hand copes easily with complex unstructured environments: it pushes objects to rearrange clutter, locate target objects, and create space for the fingers, while the grasping action complements pushing by precisely relocating irrelevant objects. Inspired by this, and to enable manipulators to perform accurate grasping tasks in complex environments, we propose a new data-driven grasping approach for robotic dexterous hands: a push-grasping approach. This method combines human-like grasping with model-free deep reinforcement learning to realize collaborative pushing and grasping on the robot. We first train an end-to-end conditional transformer grasping network that maps visual point-cloud input to action output. The policy is trained with the DQN algorithm and further improved by fine-tuning in the real world. Experimental results show that the learned push policy significantly improves grasping performance, raising the success rate of collaborative grasping by 8%. Notably, the fine-tuning method substantially reduces the actual training time: real-world training takes only one-fifth of the time required in simulation. Even in demanding, cluttered, and complex scenarios, it enables rapid learning. This study provides new insights for effectively addressing complex challenges involving multi-fingered dexterous hands and human-robot interaction. Our code is available in the GitHub repository.
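The abstract describes a Q-network that maps a point-cloud observation to push/grasp action values and is trained with DQN. The following is a minimal sketch of such an update step, not the authors' implementation: the PointNet-like encoder, the action-space size, and all hyperparameters are illustrative assumptions only.

```python
# Minimal sketch (assumptions, not the paper's code): a DQN-style update for a
# Q-network that maps a point-cloud observation to per-action Q-values,
# where actions are discretised push/grasp primitives.
import torch
import torch.nn as nn

NUM_ACTIONS = 2 * 16   # assumed: {push, grasp} x 16 discretised directions
GAMMA = 0.99           # discount factor (illustrative)

class QNetwork(nn.Module):
    """Toy stand-in for an end-to-end grasping network on point clouds."""
    def __init__(self):
        super().__init__()
        # shared point-wise encoder (PointNet-like), then a per-action Q head
        self.encoder = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128), nn.ReLU()
        )
        self.q_head = nn.Sequential(
            nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, NUM_ACTIONS)
        )

    def forward(self, cloud):                          # cloud: (B, N, 3)
        feat = self.encoder(cloud).max(dim=1).values   # global max-pool over points
        return self.q_head(feat)                       # (B, NUM_ACTIONS)

def dqn_update(q_net, target_net, optimizer, batch):
    """One TD update on a replay batch (cloud, action, reward, next_cloud, done)."""
    cloud, action, reward, next_cloud, done = batch
    q_pred = q_net(cloud).gather(1, action.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(next_cloud).max(dim=1).values
        q_target = reward + GAMMA * (1.0 - done) * q_next
    loss = nn.functional.smooth_l1_loss(q_pred, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The same update rule could be reused unchanged for the real-world fine-tuning stage mentioned in the abstract, simply by filling the replay buffer with transitions collected on the physical robot.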