Abstract

Deep-learning-based 3D point cloud classification and segmentation has achieved remarkable success. Existing methods are usually implemented in the original space with 3D coordinates as inputs. However, we find that point networks that take only first-order coordinate information hardly learn higher-order geometric features, such as point cloud normals or poses. In this study, we propose mapping the input point clouds into a non-linear space to help networks learn and leverage high-order features. Firstly, we design the Parametric Veronese Mapping (PVM) function, which automatically learns to map point clouds into a non-linear space. As a result, the mapped point clouds are enriched with high-order elements while maintaining the basic point-set properties of the original 3D space. We can then exploit existing networks to learn high-order features from the mapped point clouds. Secondly, we contribute a two-stage transformation learning module that modifies the previous one-stage module to better leverage high-order features for aligning point clouds in the projective space. Finally, an interaction module is designed to learn more discriminative features by aggregating information from both the original and the projective space. Extensive experiments demonstrate that our method improves the ability of most existing networks to learn high-order features, thus contributing to more accurate classification and segmentation. Moreover, the resulting models show stronger robustness to affine transformations and real-world perturbations.
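
To make the idea of a non-linear lift concrete, the sketch below shows a classic second-order Veronese embedding of 3D coordinates, plus a hypothetical learnable per-monomial weighting. This is only an illustration of the general technique under assumed shapes and names (`veronese_lift`, `LearnablePolynomialLift`); the abstract does not specify the exact form of the paper's PVM, so this should not be read as the authors' implementation.

```python
import torch
import torch.nn as nn


def veronese_lift(points):
    """Second-order Veronese embedding of 3D coordinates.

    Maps each point (x, y, z) to the monomials
    (x, y, z, x^2, y^2, z^2, xy, xz, yz), enriching the input with
    second-order terms while preserving the unordered point-set structure.

    points: (B, N, 3) tensor of xyz coordinates.
    returns: (B, N, 9) tensor of first- and second-order monomials.
    """
    x, y, z = points.unbind(dim=-1)
    return torch.stack(
        [x, y, z, x * x, y * y, z * z, x * y, x * z, y * z], dim=-1
    )


class LearnablePolynomialLift(nn.Module):
    """Hypothetical parametric variant: a learnable per-monomial scale,
    letting the network weight how strongly each high-order element
    contributes. An illustrative stand-in, not the paper's PVM."""

    def __init__(self, num_monomials=9):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(num_monomials))

    def forward(self, points):
        # Broadcast the (9,) scale over the (B, N, 9) lifted coordinates.
        return veronese_lift(points) * self.scale


if __name__ == "__main__":
    pts = torch.randn(2, 1024, 3)           # batch of 2 clouds, 1024 points each
    lifted = LearnablePolynomialLift()(pts)
    print(lifted.shape)                      # torch.Size([2, 1024, 9])
```

The lifted point clouds keep the same per-point, permutation-invariant structure as the raw coordinates, so they can be fed to existing point networks in place of the original xyz inputs.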
