Abstract

In the service domain, robots are increasingly expected to complete a wider range of tasks. Before a robot can perform an operation on a target object, it must first recognize and grasp that object. In this paper, we propose a target grasping method based on a semi-automated annotation approach. We rapidly construct a dataset covering 30 different placement scenarios of 18 daily items, and by training on this dataset we realize object classification and grasping in new scenes. The method uses an anchor-free framework with a fixed RGB-D camera to identify the grasp category and pick target items from cluttered scenes. With the fixed RGB-D camera, our robot grasping classification pipeline completes candidate grasp generation in 66 ms per frame. Grasping experiments were performed to pick up targets of interest in scenarios where five to seven objects were randomly selected and placed, with five repetitions each. Experimental results show a grasping success rate of up to 72% given successful trajectory planning.
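As a rough illustration of the pipeline summarized above, the sketch below shows how a single fixed RGB-D frame might flow through an anchor-free detector to produce grasp candidates for a requested object class. All names here (capture_rgbd, AnchorFreeDetector, plan_and_execute) are hypothetical placeholders under our reading of the abstract, not the authors' actual implementation.

    # Minimal sketch of the grasp-detection loop described in the abstract.
    # capture_rgbd, AnchorFreeDetector, and plan_and_execute are assumed
    # placeholder names, not the authors' actual API.

    import numpy as np


    class AnchorFreeDetector:
        """Anchor-free detector: predicts object class and grasp pose
        directly per location, without predefined anchor boxes."""

        def predict(self, rgb: np.ndarray, depth: np.ndarray):
            # Would run the trained network; returns a list of
            # (class_label, confidence, grasp_pose) candidates.
            raise NotImplementedError  # stands in for the trained model


    def grasp_target(detector, target_label, capture_rgbd, plan_and_execute):
        """One pick attempt: detect candidates in a fixed RGB-D frame,
        keep those matching the requested class, execute the best one."""
        rgb, depth = capture_rgbd()                # frame from the fixed RGB-D camera
        candidates = detector.predict(rgb, depth)  # reported at ~66 ms per frame
        matches = [c for c in candidates if c[0] == target_label]
        if not matches:
            return False                           # target not found in the scene
        _, _, best_pose = max(matches, key=lambda c: c[1])
        return plan_and_execute(best_pose)         # succeeds only if planning does

The split into detection and execution mirrors the abstract's framing: the reported 72% success rate is conditioned on successful trajectory planning, so the final step is kept separate from candidate generation.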
