Abstract

Category-level 6D object pose estimation aims to predict the position and orientation of unseen object instances and is a fundamental problem in robotic applications. Previous works have mainly focused on exploiting visual cues from RGB images, while depth images have received less attention. However, depth images encode rich geometric attributes of an object's shape, which are crucial for inferring its pose. This work achieves category-level 6D object pose estimation by performing thorough geometric learning on depth images represented as point clouds. Specifically, we present CD-Pose, a novel geometric consistency and geometric discrepancy learning framework, to handle intra-category variation, inter-category similarity, and objects with complex structures. Our network consists of a Pose-Consistent Module and a Pose-Discrepant Module. First, a simple MLP-based Pose-Consistent Module extracts geometrically consistent pose features of objects from pre-computed per-category shape priors. Then, the Pose-Discrepant Module, designed as a multi-scale region-guided transformer network, explores each instance's geometrically discrepant features. Next, the NOCS (Normalized Object Coordinate Space) model of the object is reconstructed by integrating the consistent and discrepant geometric representations. Finally, 6D object poses are obtained by solving for the similarity transformation between the reconstruction and the observed point cloud. Experiments on benchmark datasets show that CD-Pose outperforms state-of-the-art competitors.
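The final step described above — recovering the pose from the correspondence between the reconstructed NOCS model and the observed point cloud — amounts to estimating a similarity transformation (scale, rotation, translation). The abstract does not name the solver; a standard choice for this step in category-level pose work is the closed-form Umeyama algorithm. The sketch below is an illustrative implementation under the assumption that point-to-point correspondences between the two clouds are already given; it is not taken from the paper itself.

```python
import numpy as np

def umeyama_similarity(src, dst):
    """Closed-form least-squares fit of dst ≈ s * R @ src + t (Umeyama, 1991).

    src: (N, 3) reconstructed NOCS coordinates
    dst: (N, 3) corresponding observed points
    Returns scale s, rotation R (3x3), translation t (3,).
    """
    n = src.shape[0]
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst

    # Cross-covariance between the centered clouds.
    cov = dst_c.T @ src_c / n
    U, D, Vt = np.linalg.svd(cov)

    # Reflection guard: force a proper rotation (det(R) = +1).
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0

    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / n
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t
```

Because the NOCS model lives in a normalized unit space while the observed cloud is in metric camera coordinates, the recovered scale gives the object's metric size, and (R, t) give its 6D pose.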
