Abstract

This paper proposes GraspCNN, an approach to grasp detection in which a feasible robotic grasp is detected as an oriented diameter circle in an RGB image, using a single convolutional neural network. Detecting robotic grasps as oriented diameter circles simplifies the grasp representation. In addition to this novel grasp representation, a grasp pose localization algorithm is proposed to project an oriented diameter circle back to a 6D grasp pose in the point cloud. GraspCNN predicts feasible grasping circles and grasp probabilities directly from an RGB image. Experiments show that GraspCNN achieves 96.5% accuracy on the Cornell Grasping Dataset, outperforming existing one-stage grasp detectors. GraspCNN is fast and stable: it processes RGB images at 50 fps, meeting the requirements of real-time applications. To detect objects and locate feasible grasps simultaneously, GraspCNN is executed in parallel with YOLO, achieving strong performance on both object detection and grasp detection.
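To make the representation concrete, the sketch below shows how an oriented diameter circle predicted in image coordinates could be back-projected to a simple 6D grasp pose. This is a minimal illustration, not the paper's algorithm: it assumes a pinhole camera model, a depth value at the circle center, and a top-down grasp, whereas the paper's localization algorithm operates on the point cloud. All function and parameter names here are hypothetical.

```python
import math

def circle_to_grasp_pose(cx, cy, r, theta, depth, fx, fy, ppx, ppy):
    """Back-project an oriented diameter circle (cx, cy, r, theta) in pixel
    coordinates to a rough 6D grasp pose (position + roll/pitch/yaw).

    Hypothetical sketch assuming a pinhole camera with focal lengths
    (fx, fy), principal point (ppx, ppy), and a top-down grasp; the
    paper's actual grasp pose localization uses the point cloud.
    """
    # Back-project the circle center through the pinhole model.
    x = (cx - ppx) * depth / fx
    y = (cy - ppy) * depth / fy
    z = depth
    # Gripper opening width: the circle diameter scaled to metric units.
    width = 2.0 * r * depth / fx
    # Top-down grasp: roll and pitch are zero, yaw is the circle orientation.
    return {"position": (x, y, z), "rpy": (0.0, 0.0, theta), "width": width}
```

For a circle at the principal point, the recovered position lies on the optical axis at the measured depth, and the yaw equals the circle's in-plane orientation.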

Highlights

  • The goal of 2D grasp detection is to localize feasible grasps in the images of objects

  • RGB image has been widely used for 2D object detection

  • Some grasp detection methods are two-stage cascaded systems based on deep learning: objects are detected in the first stage, and each cropped object region is sent to a second-stage network to predict a feasible robotic grasp for that object


Summary

Introduction

The goal of 2D grasp detection is to localize feasible grasps in images of objects. RGB images have been widely used for 2D object detection, where convolutional neural networks such as YOLO [1]–[3], SSD [4], Mask RCNN [5] and CornerNet [6] have achieved great success. Some grasp detection methods are two-stage cascaded systems based on deep learning: objects are detected in the first stage, and each cropped object region is sent to a second-stage network to predict a feasible robotic grasp for that object. These complex pipelines are slow and hard to optimize.


