Abstract

6D pose estimation has garnered significant research attention. RGB images and point clouds converted from RGB-D images provide complementary color and geometry information, making them the mainstream data sources for object 6D pose estimation. However, because RGB images and point clouds lie in different dimensional spaces and have different distribution characteristics, fusing these two complementary data sources remains a key technical challenge for 6D pose estimation. In contrast to prior approaches that simply concatenate separately processed RGB images and point clouds, this work introduces a Transformer-based multi-modal fusion network to address this challenge. More precisely, we build a Transformer-based pixel-wise feature extraction architecture to improve feature extraction from RGB images and point clouds. We then investigate various multi-modal feature fusion methods to process these features, enabling deeper fusion of the complementary data. Additionally, in the experimental phase, we design a 6D pose estimation network based on depth prediction to assess the impact of point cloud accuracy on the multi-modal fusion module. Finally, the proposed method is evaluated on four datasets: LineMOD, Occlusion LineMOD, MP6D, and YCB-Video. Experimental results show that the proposed method outperforms comparable methods on these datasets.
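As a rough illustration of the kind of pixel-wise multi-modal fusion the abstract refers to, the sketch below fuses per-point RGB features with per-point geometry features using bidirectional cross-attention. This is a minimal sketch, not the authors' implementation: the module name `CrossModalFusion`, the feature dimensions, and the residual cross-attention fusion strategy are all illustrative assumptions.

```python
# Minimal sketch of Transformer-based cross-attention fusion between
# pixel-wise RGB features and point cloud (geometry) features.
# All names, dimensions, and the fusion strategy are assumptions for
# illustration; the paper's actual architecture may differ.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Fuses per-point RGB features with per-point geometry features
    via bidirectional cross-attention, then projects the concatenation."""

    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        # RGB features attend to geometry features, and vice versa.
        self.rgb_to_geo = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.geo_to_rgb = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_rgb = nn.LayerNorm(dim)
        self.norm_geo = nn.LayerNorm(dim)
        self.out_proj = nn.Linear(2 * dim, dim)

    def forward(self, rgb_feat, geo_feat):
        # rgb_feat, geo_feat: (B, N, dim) features sampled at the same
        # N pixel/point locations.
        rgb_att, _ = self.rgb_to_geo(rgb_feat, geo_feat, geo_feat)
        geo_att, _ = self.geo_to_rgb(geo_feat, rgb_feat, rgb_feat)
        rgb_fused = self.norm_rgb(rgb_feat + rgb_att)  # residual + norm
        geo_fused = self.norm_geo(geo_feat + geo_att)
        # Concatenate both fused streams into one per-point embedding.
        return self.out_proj(torch.cat([rgb_fused, geo_fused], dim=-1))


if __name__ == "__main__":
    fusion = CrossModalFusion()
    rgb = torch.randn(2, 1024, 256)  # pixel-wise RGB features
    geo = torch.randn(2, 1024, 256)  # per-point geometry features
    print(fusion(rgb, geo).shape)    # torch.Size([2, 1024, 256])
```

The fused per-point embeddings would then feed a downstream pose regression or keypoint voting head; the choice of cross-attention over simple concatenation is what allows each modality to condition on the other before fusion.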

