Abstract

Grasping unseen objects has become one of the most critical abilities of robots, where identifying feasible grasp locations is the key technique. Previously proposed methods generally rely on a grid search strategy, which evaluates grasp candidates at regular intervals, or directly adopt deep convolutional networks, which require large datasets and substantial computational resources. In this article, we propose an edge-based grasp detection strategy that combines low-level features with a lightweight convolutional neural network (CNN). Specifically, two grip criteria are first introduced to select feasible point pairs and determine the corresponding grasp candidates. Then, a lightweight CNN model is rapidly trained with a limited number of samples to recognize feasible grasps. Our method does not require additional sensor information and works well with only an RGB image. Comparative experiments on a public dataset verify the effectiveness of our efficient grasp search, which is superior to existing grasp search strategies in accuracy and efficiency. Meanwhile, it achieves results competitive with traditional CNN-based methods while using less training time. In addition, we also test the proposed approach in a real-world robotic grasping scenario.
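To make the described pipeline concrete, the sketch below illustrates the candidate-generation stage for a parallel-jaw gripper under assumed criteria: edge points are paired only if they fit within the gripper opening and their edge normals are roughly antipodal. The Canny/Sobel operators, the thresholds, and the two criteria encoded here are illustrative assumptions, not the exact grip criteria from the paper; each surviving pair would then be cropped into an oriented patch and scored by the lightweight CNN.

```python
# Minimal sketch of an edge-based grasp-candidate search, assuming Canny edges
# and an antipodal-style pairing rule; the paper's exact grip criteria,
# thresholds, and CNN architecture may differ.
import numpy as np
import cv2


def angle_diff(a, b):
    """Smallest absolute difference between two angles in radians."""
    return np.abs(np.arctan2(np.sin(a - b), np.cos(a - b)))


def grasp_candidates(rgb, max_width=80, angle_tol=np.deg2rad(20), max_points=300):
    """Return candidate edge-point pairs (p1, p2) for a parallel-jaw gripper.

    Assumed criterion 1: the pair fits within the gripper opening (max_width, pixels).
    Assumed criterion 2: the edge normals at both points are roughly aligned with
    the line joining them, i.e. the contact is approximately antipodal.
    """
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)

    # Image gradients serve as cheap edge-normal estimates.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    theta = np.arctan2(gy, gx)

    ys, xs = np.nonzero(edges)
    pts = np.stack([xs, ys], axis=1)
    if len(pts) > max_points:  # subsample to keep the pairwise search cheap
        pts = pts[np.random.choice(len(pts), size=max_points, replace=False)]

    candidates = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            p1, p2 = pts[i], pts[j]
            d = p2 - p1
            dist = float(np.hypot(d[0], d[1]))
            if dist == 0 or dist > max_width:
                continue  # fails assumed criterion 1
            line_ang = np.arctan2(d[1], d[0])
            ok = True
            for p in (p1, p2):
                n = theta[p[1], p[0]]
                # Accept normals parallel or anti-parallel to the joining line.
                if min(angle_diff(n, line_ang), angle_diff(n, line_ang + np.pi)) > angle_tol:
                    ok = False
                    break
            if ok:  # passes assumed criterion 2
                candidates.append((tuple(p1), tuple(p2)))
    return candidates


# Each surviving pair would then be cropped into an oriented image patch and
# scored by a lightweight CNN classifier trained on a small labeled set.
```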
