Abstract
Visual and tactile sensing are complementary modalities for robotic grasping. In this paper, a deep grasp detection network is first proposed to detect the grasp rectangle from the visual image; then a new metric based on tactile sensing is designed to assess the stability of the grasp. Using this scheme, a THU grasp dataset, which includes visual information, corresponding tactile data, and grasp configurations, is collected to train the proposed deep network. Experimental results demonstrate that the proposed grasp detection network outperforms other mainstream approaches on a public grasp dataset. Furthermore, the grasp success rate is improved significantly in real-world scenarios. The trained model has also been successfully deployed on a new robotic platform to perform grasping in a cluttered scene.