Abstract
3D vision perception, and point cloud classification in particular, is fundamental to safety-critical systems such as autonomous driving and robotic automation control. However, the robustness of 3D models against the incomplete, partial point clouds that arise in practical scenes is less studied, limiting real-world deployment. In this paper, we propose to improve the robustness and generalization of 3D models on partial point clouds through self-supervised latent feature learning. Unlike data augmentation methods that generate partial point clouds by geometric transforms in coordinate space (e.g., dropping local structures or removing global points), we regard partial data as a transformation in latent feature space. We explicitly learn the perspective transformation of partial point clouds and implicitly learn the occlusion transformation in the latent feature space via self-supervised learning. Different from previous methods that are validated only on synthetically generated data, we evaluate our method on widely used point cloud completion datasets (e.g., PCN and MVP), which contain both complete and partial point clouds. Extensive experiments show that our proposed method consistently improves the robustness of state-of-the-art methods on partial point cloud datasets.
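To make the idea concrete, the sketch below illustrates one possible reading of the abstract, not the authors' actual implementation: a PointNet-style encoder with a hypothetical viewpoint-prediction head standing in for the explicitly learned perspective transformation, and a latent consistency loss between complete and partial clouds standing in for the implicitly learned occlusion transformation. All module names, loss weights, and the discretized-viewpoint formulation are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's implementation) of self-supervised
# latent feature learning for partial point clouds.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PointEncoder(nn.Module):
    """PointNet-style encoder: shared per-point MLP followed by max pooling."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, feat_dim, 1),
        )

    def forward(self, pts):                   # pts: (B, N, 3)
        x = self.mlp(pts.transpose(1, 2))     # (B, feat_dim, N)
        return x.max(dim=2).values            # global latent feature (B, feat_dim)


class RobustPartialClassifier(nn.Module):
    """Classifier with an auxiliary self-supervised viewpoint-prediction head."""

    def __init__(self, num_classes=40, num_views=8, feat_dim=256):
        super().__init__()
        self.encoder = PointEncoder(feat_dim)
        self.cls_head = nn.Linear(feat_dim, num_classes)   # object category
        self.view_head = nn.Linear(feat_dim, num_views)    # discretized viewpoint (assumed)

    def forward(self, pts):
        z = self.encoder(pts)
        return self.cls_head(z), self.view_head(z), z


def training_loss(model, complete_pts, partial_pts, labels, view_ids,
                  w_view=0.5, w_latent=0.5):
    """Joint loss: classification on partial clouds, explicit viewpoint
    prediction, and latent alignment of partial to complete features."""
    logits_p, view_logits, z_partial = model(partial_pts)
    with torch.no_grad():                        # complete cloud provides the latent target
        _, _, z_complete = model(complete_pts)

    loss_cls = F.cross_entropy(logits_p, labels)
    loss_view = F.cross_entropy(view_logits, view_ids)   # explicit perspective transform
    loss_latent = F.mse_loss(z_partial, z_complete)      # implicit occlusion transform
    return loss_cls + w_view * loss_view + w_latent * loss_latent


if __name__ == "__main__":
    model = RobustPartialClassifier()
    complete = torch.randn(4, 2048, 3)           # complete point clouds
    partial = torch.randn(4, 1024, 3)            # partial (occluded) point clouds
    labels = torch.randint(0, 40, (4,))
    views = torch.randint(0, 8, (4,))            # viewpoint label of each partial cloud
    loss = training_loss(model, complete, partial, labels, views)
    loss.backward()
    print(f"total loss: {loss.item():.4f}")
```

In this reading, the classification loss keeps the partial-cloud features discriminative, the viewpoint head supplies the explicit perspective self-supervision, and the feature-space alignment to the complete cloud models occlusion as a transformation in latent space rather than in coordinate space.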