Abstract

Secure grasping of objects in complex scenes is the foundation of many robotic tasks. It is important for robots to autonomously determine the optimal grasp from visual information, which requires reasoning about the stacking relationships among objects and detecting grasp positions. This paper proposes a multi-task secure grasping detection model consisting of a grasping relationship network (GrRN) and an oriented-rectangle detection network, CSL-YOLO, which uses the Circular Smooth Label (CSL) technique. GrRN uses DETR to solve the set prediction problem in object detection, enabling end-to-end detection of grasping relationships. CSL-YOLO predicts the angle of oriented rectangles by classification and resolves the angular distance problem that classification introduces. Experiments on the Visual Manipulation Relationship Dataset (VMRD) and the Cornell grasping detection dataset demonstrate that our method outperforms existing methods and exhibits good applicability on robot platforms.
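
The Circular Smooth Label idea referenced above can be illustrated with a minimal sketch: the angle is discretized into bins, and a circular window function centered on the ground-truth bin assigns soft labels, so that periodically adjacent angles (e.g., 179° and 0°) are no longer treated as maximally distant classes. The function name, bin count, and Gaussian window radius below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def circular_smooth_label(angle_deg, num_bins=180, radius=6):
    """Encode an angle as a Circular Smooth Label (illustrative sketch).

    A Gaussian window is centered on the ground-truth angle bin and
    wrapped circularly, so bins on either side of the periodic
    boundary (e.g., 179 and 0 degrees) receive similar label values.
    num_bins and radius are assumed hyperparameters for illustration.
    """
    center = int(round(angle_deg)) % num_bins
    bins = np.arange(num_bins)
    # Circular distance from every bin to the ground-truth bin.
    dist = np.minimum(np.abs(bins - center), num_bins - np.abs(bins - center))
    # Gaussian window: soft labels inside the radius, zero outside.
    label = np.exp(-dist**2 / (2 * radius**2))
    label[dist > radius] = 0.0
    return label

# Example: 179 degrees yields high label values near both ends of the
# bin range, so a classification loss no longer penalizes a prediction
# of 0 degrees as if it were 179 degrees away.
print(circular_smooth_label(179)[:4], circular_smooth_label(179)[-4:])
```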
