Point cloud registration is a fundamental task in computer vision, but it remains challenging under low-overlap conditions. Recent approaches employ transformers and overlap masks to improve perception, yet mask learning typically considers only Euclidean distances between features, ignores mismatches caused by ambiguous geometric structures, and is often computationally inefficient. To address these issues, we introduce a novel matching framework. First, we fuse adaptive graph convolution with PPF features to obtain rich feature perception. Next, we construct a PGT framework that builds on GeoTransformer and combines it with positional encoding to enhance geometric perception between the source and target clouds. In addition, we improve the visibility of overlapping regions through information exchange and the AIS module: for subsequent keypoint extraction, we preserve points with distinct geometric structures while suppressing the influence of non-overlapping regions, improving computational efficiency. Finally, the mask is refined through contrastive learning to preserve geometric and distance similarity, which enables more accurate estimation of the transformation parameters. Comprehensive experiments on synthetic and real-world datasets demonstrate superior registration performance compared to recent deep-learning methods: our approach achieves improvements of 68.21% in RRMSE and 76.31% in tRMSE on synthetic data, and 76.46% in RRMSE and 45.16% in tRMSE on real-world scenes.