Abstract
In this article, a novel grasp detection network, the efficient grasp detection network (EGNet), is proposed to address the challenge of grasping in stacked scenes; it completes the tasks of object detection, grasp detection, and manipulation relationship detection. For object detection, EGNet borrows ideas from EfficientDet, with some hyperparameters modified to help the robot detect and classify objects. For grasp detection, a novel grasp detection module is proposed, which takes the feature map from the bidirectional feature pyramid network (BiFPN) as input and outputs grasp positions together with their quality scores. For manipulation relationship analysis, the network takes the BiFPN feature map along with the object detection and grasp detection results, and outputs the best grasp position and the appropriate manipulation relationship. EGNet is trained and tested on the visual manipulation relationship dataset and the Cornell dataset, achieving detection accuracies of 87.1% and 98.9%, respectively. Finally, EGNet is also validated in a practical grasping experiment on a Baxter robot. The experiment is performed in cluttered and stacked scenes, achieving success rates of 93.6% and 69.6%, respectively.
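The abstract states that the grasp detection module outputs grasp positions with quality scores, and that the network ultimately selects the best grasp position. A minimal sketch of that final selection step is shown below, assuming the common rectangle-based grasp parameterization (center, rotation angle, gripper width) used in Cornell-style grasp detection; the `Grasp` class and `select_best_grasp` function are hypothetical names for illustration, not part of the published EGNet code.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Grasp:
    x: float        # grasp center, x coordinate in image space
    y: float        # grasp center, y coordinate in image space
    theta: float    # gripper rotation angle in radians
    width: float    # gripper opening width in pixels
    quality: float  # predicted grasp quality score in [0, 1]

def select_best_grasp(candidates: List[Grasp]) -> Optional[Grasp]:
    """Return the candidate grasp with the highest predicted quality score."""
    if not candidates:
        return None
    return max(candidates, key=lambda g: g.quality)

# Example: three candidate grasps predicted for one detected object
candidates = [
    Grasp(120.0, 85.0, 0.3, 40.0, quality=0.62),
    Grasp(118.0, 90.0, 0.1, 38.0, quality=0.91),
    Grasp(200.0, 50.0, 1.2, 55.0, quality=0.47),
]
best = select_best_grasp(candidates)
print(best.quality)  # → 0.91
```

In the full system, candidates like these would be produced per object from the BiFPN feature maps, and the manipulation relationship analysis would further constrain which object may be grasped first in a stacked scene.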