Abstract

In recent years, feature-based point cloud registration methods have attracted increasing attention. However, most existing methods focus on extracting features with strong anti-interference ability from a single point cloud while neglecting the differences within point cloud pairs. In this paper, unlike these methods, which treat each point cloud independently, we consider the information shared between point cloud pairs when extracting features. Specifically, we propose a cross-attention-based network for modeling the correlation between a pair of point clouds, in which a 3D cross-attention mechanism is introduced and combined with 3D convolution for feature extraction. The extracted features are more robust under various conditions, such as changes in rotation and translation. Accurate point cloud registration is then achieved by matching these features. Experimental results on the 3DMatch dataset show that the proposed method achieves state-of-the-art performance on feature matching and point cloud registration tasks compared with previous feature-based methods.
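To make the idea of pair-aware feature extraction concrete, the following is a minimal, hedged sketch of cross-attention between the feature sets of two point clouds, combined with a 3D convolution stage. All module names, dimensions, and the overall layout (dense voxel grids, multi-head attention, a single residual connection) are illustrative assumptions and are not taken from the paper itself.

```python
# Illustrative sketch only: pair-aware feature extraction where each point
# cloud's features attend to the other cloud's features after a 3D conv stage.
# Shapes, layer choices, and hyperparameters are assumptions, not the paper's.
import torch
import torch.nn as nn


class CrossAttention(nn.Module):
    """Lets features of one point cloud attend to features of the other."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
        # feats_a: (B, N, C) queries; feats_b: (B, M, C) keys and values.
        attended, _ = self.attn(feats_a, feats_b, feats_b)
        return self.norm(feats_a + attended)  # residual connection


class PairFeatureExtractor(nn.Module):
    """3D convolution on voxelized clouds, then cross-attention across the pair."""

    def __init__(self, in_ch: int = 1, dim: int = 32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(dim, dim, kernel_size=3, padding=1),
        )
        self.cross_attn = CrossAttention(dim)

    def forward(self, vox_a: torch.Tensor, vox_b: torch.Tensor):
        # vox_a, vox_b: voxelized point clouds of shape (B, in_ch, D, H, W).
        fa = self.conv(vox_a).flatten(2).transpose(1, 2)  # (B, D*H*W, dim)
        fb = self.conv(vox_b).flatten(2).transpose(1, 2)
        # Each cloud's descriptors are conditioned on the other cloud.
        return self.cross_attn(fa, fb), self.cross_attn(fb, fa)


if __name__ == "__main__":
    model = PairFeatureExtractor()
    a = torch.randn(1, 1, 16, 16, 16)
    b = torch.randn(1, 1, 16, 16, 16)
    fa, fb = model(a, b)
    print(fa.shape, fb.shape)  # torch.Size([1, 4096, 32]) for each cloud
```

In such a setup, the resulting per-cloud descriptors could then be matched (e.g., by nearest-neighbor search) and fed to a robust estimator to recover the rigid transform; the exact matching and estimation pipeline used in the paper is described in its full text.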
