Abstract

Learning to grasp novel objects is a challenging problem for service robots, especially when the robot performs goal-oriented manipulation or interaction tasks with only single-view RGB-D sensor data available. While some visual approaches focus on grasps that satisfy only the force-closure criterion, we further link affordance-based task constraints to grasp poses on object parts, so that both the force-closure criterion and the task constraints are satisfied. In this paper, a new single-view approach is proposed for task-constrained grasp pose detection. We learn a pixel-level affordance detector based on a convolutional neural network; the detector provides a fine-grained understanding of task constraints on objects and serves as a pre-segmentation stage in the grasp pose detection framework. The accuracy and robustness of grasp pose detection are further improved by a novel method for computing local reference frames and by a position-sensitive fully convolutional neural network for grasp stability classification. Experiments on benchmark datasets show that our method outperforms state-of-the-art methods. We also validate our method in real-world, task-specific grasping scenes, where it achieves a higher success rate for task-oriented grasping.
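
To make the two-stage pipeline in the abstract concrete, the following is a minimal, hypothetical PyTorch-style sketch: a fully convolutional affordance detector producing pixel-level part labels that act as the pre-segmentation mask, followed by a small classifier that scores candidate grasp patches for stability. All network sizes, class counts, and module names are illustrative assumptions and do not reproduce the paper's exact architecture (in particular, the position-sensitive design is not modeled here).

```python
# Hypothetical sketch of the pipeline: (1) pixel-level affordance detection
# as pre-segmentation, (2) stability scoring of candidate grasp patches.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AffordanceFCN(nn.Module):
    """Pixel-level affordance detector (illustrative backbone and head)."""

    def __init__(self, num_affordances=7):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),            # RGB-D input: 4 channels
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, num_affordances, 1)              # per-pixel class scores

    def forward(self, rgbd):
        logits = self.head(self.backbone(rgbd))
        # Upsample back to the input resolution for pixel-level labels.
        return F.interpolate(logits, size=rgbd.shape[-2:],
                             mode="bilinear", align_corners=False)


class GraspStabilityNet(nn.Module):
    """Binary stable/unstable classifier for a candidate grasp patch."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)

    def forward(self, patch):
        return self.classifier(self.features(patch).flatten(1))


if __name__ == "__main__":
    rgbd = torch.rand(1, 4, 240, 320)                       # single-view RGB-D frame
    affordance_map = AffordanceFCN()(rgbd).argmax(dim=1)    # pre-segmentation mask
    # Candidate grasp patches would be sampled only from pixels whose affordance
    # label matches the task constraint; here we simply score one random patch.
    patch = torch.rand(1, 4, 64, 64)
    stability_logits = GraspStabilityNet()(patch)
    print(affordance_map.shape, stability_logits.shape)
```

In this reading of the pipeline, the affordance mask restricts where grasp candidates are generated, and the stability classifier then filters those candidates, so task constraints and force closure are handled in separate stages.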
