Abstract

Secure grasping of objects in complex scenes is the foundation of many robotic tasks. It is important for robots to autonomously determine the optimal grasp from visual information, which requires reasoning about the stacking relationships among objects and detecting grasp positions. This paper proposes a multi-task secure grasping detection model consisting of a grasping relationship network (GrRN) and an oriented-rectangle detection network, CSL-YOLO, which uses the circular smooth label (CSL). GrRN uses DETR to cast grasping relationship detection as a set prediction problem, enabling end-to-end detection of grasping relationships. CSL-YOLO predicts the angle of oriented rectangles by classification and resolves the angular distance problem that classification introduces. Experiments on the Visual Manipulation Relationship Dataset (VMRD) and the Cornell grasping detection dataset demonstrate that our method outperforms existing methods and exhibits good applicability on robot platforms.
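As a rough illustration of the circular smooth label idea referenced above, the sketch below encodes an angle as a smoothed class vector whose window wraps around the angular boundary, so 179° and 0° are treated as neighbors rather than maximally distant classes; this is the "angle distance problem" a plain one-hot classification loss cannot express. The bin count, window radius, and Gaussian window here are illustrative assumptions, not necessarily the settings used by CSL-YOLO.

```python
import numpy as np

def circular_smooth_label(angle_deg, num_bins=180, radius=6):
    """Encode an angle as a circular smooth label (CSL).

    Instead of a one-hot vector, a Gaussian window is centered on the
    true angle bin and wraps around the ends of the angle range, so a
    prediction a few bins off the truth is penalized only slightly.
    """
    bins = np.arange(num_bins)
    center = int(round(angle_deg)) % num_bins
    # Circular distance between each bin and the true bin.
    d = np.abs(bins - center)
    d = np.minimum(d, num_bins - d)
    # Gaussian window; bins outside the window radius get label 0.
    sigma = radius / 3.0
    label = np.exp(-(d ** 2) / (2 * sigma ** 2))
    label[d > radius] = 0.0
    return label

# Example: the label for 179 deg assigns high weight to bin 0 as well,
# since 179 deg and 0 deg are adjacent on the circle.
label = circular_smooth_label(179)
print(label[179], label[0])
```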
