Abstract

Object detection based on deep learning is a popular research direction that encompasses both object recognition and positioning. This paper proposes a method that accurately obtains an object's category and its three-dimensional position. The method is divided into three parts: object recognition and coarse positioning based on deep learning, precise positioning in color images based on deep learning combined with a B-spline level set, and precise three-dimensional positioning using the depth information of an RGB-D camera. The precise positioning of the object provides accurate end pose information for autonomous grasping, which is of great significance for robotic-arm grasping. Performance is evaluated with mAP (mean average precision) and IOU (intersection over union). Experimental results show that the mAP of Yolo-v3 in this paper reaches 87.62%, the average IOU of Yolo-v3 alone reaches 66.74%, the average IOU of Yolo-v3 combined with the B-spline level set reaches 100%, and the method obtains accurate 3D locations in real scenes. In addition, experimental comparisons between the VOC dataset and our own dataset show that our dataset yields higher mAP and average IOU values.
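For reference, IOU here denotes the standard intersection-over-union overlap between a predicted and a ground-truth bounding box. The sketch below is a minimal illustration of that computation, assuming axis-aligned boxes given as (x_min, y_min, x_max, y_max) tuples; the function name and box format are illustrative assumptions, not taken from the paper's implementation.

```python
# Minimal IOU (intersection over union) sketch for axis-aligned boxes.
# Box format assumed: (x_min, y_min, x_max, y_max); names are illustrative.

def iou(box_a, box_b):
    """Return the intersection-over-union of two axis-aligned boxes."""
    # Corners of the overlapping region (if any).
    x_left = max(box_a[0], box_b[0])
    y_top = max(box_a[1], box_b[1])
    x_right = min(box_a[2], box_b[2])
    y_bottom = min(box_a[3], box_b[3])

    # No overlap: intersection area is zero.
    if x_right <= x_left or y_bottom <= y_top:
        return 0.0

    inter = (x_right - x_left) * (y_bottom - y_top)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)


if __name__ == "__main__":
    # Two partially overlapping boxes: intersection 25, union 175.
    print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.1429
```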
