Abstract
3D point cloud registration is a crucial topic in the reverse engineering, computer vision and robotics fields. The core of this problem is to estimate a transformation matrix that aligns a source point cloud with a target point cloud. Several learning-based methods have achieved high performance. However, they struggle with both partially overlapping point clouds and multi-scale point clouds, since they use singular value decomposition (SVD) to find the rotation matrix without fully considering scale information. Furthermore, previous networks cannot effectively handle point clouds with large initial rotation angles, which is a common practical case. To address these problems, this paper presents a learning-based point cloud registration network, named HDRNet, which consists of four stages: local feature extraction, correspondence matrix estimation, feature embedding and fusion, and parametric regression. HDRNet is robust to noise and large rotation angles, and can effectively handle partially overlapping and multi-scale point cloud registration. The proposed model is trained on the ModelNet40 dataset and compared with ICP, SICP, FGR and recent learning-based methods (PCRNet, IDAM, RGMNet and GMCNet) under several settings, including generalization to unseen objects, achieving higher success rates. To verify the effectiveness and generality of our model, we further tested it on the Stanford 3D Scanning Repository.
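For context on the scale issue the abstract raises: the standard SVD-based closed-form alignment can be extended with a scale term, as in the classical Umeyama estimator. The sketch below (a baseline illustration, not the HDRNet method) assumes known point correspondences and shows how scale, rotation and translation are recovered jointly:

```python
import numpy as np

def umeyama_align(src, tgt):
    """Estimate scale s, rotation R, translation t with tgt ≈ s * R @ src + t.

    src, tgt: (N, 3) arrays of corresponding points.
    This is the classical Umeyama (1991) closed-form solution; plain
    SVD/Kabsch alignment is the special case with s fixed to 1.
    """
    n = src.shape[0]
    mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
    xs, xt = src - mu_s, tgt - mu_t

    # Cross-covariance between centered target and source points.
    cov = xt.T @ xs / n
    U, D, Vt = np.linalg.svd(cov)

    # Reflection correction keeps R a proper rotation (det(R) = +1).
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0

    R = U @ S @ Vt
    var_s = (xs ** 2).sum() / n          # variance of the source cloud
    s = np.trace(np.diag(D) @ S) / var_s  # optimal isotropic scale
    t = mu_t - s * R @ mu_s
    return s, R, t
```

A registration pipeline that ignores the scale term (i.e., fixes `s = 1`) will systematically misalign multi-scale point cloud pairs, which is the failure mode the abstract attributes to prior learning-based methods.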