Abstract

Point cloud registration plays an essential role in many areas, such as computer vision and robotics. However, traditional feature-based registration requires handcrafted descriptors for each scenario, which limits both efficiency and flexibility; ICP and its locally optimal variants are sensitive to initialization, while globally optimal methods incur high computational cost to overcome noise, outliers, and partial overlap. Learning-based registration can automatically and flexibly learn shape representations for different objects, but existing methods suffer from either low efficiency or low precision, and perform poorly on partial-to-partial point cloud registration. Thus, we present a simple spatial and channel attention based network, named SCANet, for partial-to-partial point cloud registration. A spatial self-attention aggregation (SSA) module is applied in a feature extraction sub-network to efficiently exploit the inter-point and global information of each point cloud at different levels, while a channel cross-attention regression (CCR) module is adopted in a pose estimation sub-network for information interaction between the two input global feature vectors, enhancing relevant information and suppressing redundant information. Experimental results show that our SCANet achieves state-of-the-art performance in both accuracy and efficiency compared to existing non-deep-learning and learning-based methods under partial visibility with Gaussian noise. Our source code is available at the project website https://github.com/zhouruqin/SCANet.
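The channel cross-attention idea described above, where each global feature vector re-weights the channels of the other, can be sketched as follows. This is a minimal illustration under assumed names and a simplified softmax gating, not the paper's exact CCR formulation:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def channel_cross_attention(f_x, f_y):
    """Sketch of channel cross-attention between two global feature
    vectors: attention weights for each vector are computed from the
    *other* vector, enhancing relevant channels and suppressing
    redundant ones (hypothetical simplification of the CCR module)."""
    a_x = softmax(f_y)                 # weights for f_x come from f_y
    a_y = softmax(f_x)                 # and vice versa
    # Residual-style re-weighting keeps the original signal intact
    return f_x * (1 + a_x), f_y * (1 + a_y)

# Toy usage: two 4-channel global descriptors
f_x = np.ones(4)
f_y = np.array([0.0, 1.0, 2.0, 3.0])
g_x, g_y = channel_cross_attention(f_x, f_y)
```

In this sketch, channels of `f_x` that `f_y` weights highly are amplified more, which mirrors the abstract's description of mutual information interaction between the two global features before pose regression.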
