Abstract
6D pose estimation is an important branch in the field of vision measurement and is widely used in robotics, autonomous driving, and augmented reality. The latest research trend in 6D pose estimation is to train a deep neural network to directly predict the 2D projections of 3D keypoints from the image, establish the 2D-3D correspondences, and finally use the perspective-n-point (PnP) algorithm to estimate the pose. The current challenge in pose estimation is that detection accuracy drops when objects are textureless, occluded, or in cluttered scenes, and most existing models are large and cannot meet real-time requirements. In this paper, we introduce a densely connected feature pyramid network (DFPN) that can efficiently integrate and utilize features. We combine the cross-stage partial network (CSPNet) with DFPN to design DFPN-6D, a new network for 6D object pose estimation. DFPN-6D can efficiently and accurately handle textureless objects, occlusion, and scene clutter, and estimates full 6D poses in a single shot. Furthermore, we propose a new confidence calculation method and loss function for object pose estimation, which fully consider spatial information. Finally, we propose a novel augmentation method for direct 6D pose estimation approaches, called 6D augmentation, which improves performance and generalization ability under occlusion. Our approach achieves new state-of-the-art accuracies of 98.06 and 87.09 in terms of the ADD(-S) metric on the Linemod and Occluded-Linemod datasets, and it also achieves the best results under the respective metrics on the MULT-I, BIN-P, and T-LESS datasets, while still running end-to-end at over 65 FPS. The experimental results demonstrate that our algorithm is robust to textureless materials and occlusion while running more efficiently than other methods. We also deploy the proposed method on a real robot to grasp and manipulate objects based on the estimated poses.
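The paper's own implementation is not reproduced here, but the correspondence-plus-PnP step summarized above is standard and can be illustrated with a minimal sketch using OpenCV's solvePnP. All keypoint coordinates, the camera intrinsics, and the variable names below are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of the keypoint-based pose estimation pipeline: a network
# predicts 2D projections of known 3D model keypoints, and PnP recovers the
# full 6D pose from those 2D-3D correspondences. Values are placeholders.
import cv2
import numpy as np

# 3D keypoints defined on the object model (e.g., the eight corners of the
# object's 3D bounding box), in the object coordinate frame.
keypoints_3d = np.array([
    [-0.05, -0.05, -0.05], [-0.05, -0.05,  0.05],
    [-0.05,  0.05, -0.05], [-0.05,  0.05,  0.05],
    [ 0.05, -0.05, -0.05], [ 0.05, -0.05,  0.05],
    [ 0.05,  0.05, -0.05], [ 0.05,  0.05,  0.05],
], dtype=np.float64)

# 2D projections of those keypoints for one image; in practice these would
# come from the network's output rather than being hard-coded.
keypoints_2d = np.array([
    [320.0, 240.0], [330.0, 250.0], [310.0, 260.0], [325.0, 270.0],
    [340.0, 235.0], [350.0, 245.0], [335.0, 255.0], [345.0, 265.0],
], dtype=np.float64)

# Pinhole camera intrinsic matrix (fx, fy, cx, cy are placeholders).
camera_matrix = np.array([
    [572.4,   0.0, 325.3],
    [  0.0, 573.6, 242.0],
    [  0.0,   0.0,   1.0],
], dtype=np.float64)

# Solve the perspective-n-point problem: recover rotation and translation
# that map the 3D keypoints onto their predicted 2D projections.
success, rvec, tvec = cv2.solvePnP(
    keypoints_3d, keypoints_2d, camera_matrix, distCoeffs=None,
    flags=cv2.SOLVEPNP_EPNP)

# Convert the axis-angle rotation vector to a 3x3 rotation matrix.
rotation_matrix, _ = cv2.Rodrigues(rvec)
print("R =", rotation_matrix)
print("t =", tvec.ravel())
```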