Abstract
Although many recent deep learning methods have achieved strong performance in point cloud analysis, most of them rely on costly manual labeling. Unsupervised representation learning methods have therefore attracted increasing attention due to their high label efficiency, yet how to learn useful representations from unlabeled 3D point clouds remains a challenging problem. Addressing this problem, we propose a novel unsupervised learning approach for point cloud analysis, named ULD-Net, which consists of an equivariant-crop (equiv-crop) module to achieve dense similarity learning. We propose dense similarity learning that maximizes consistency across two randomly transformed global-local views at both the instance level and the point level. To build feature correspondence between the global and local views, we propose equiv-crop to transform features from the global scope to the local scope. Unlike previous methods that require complicated designs, such as negative pairs and momentum encoders, our ULD-Net benefits from a simple Siamese network that relies solely on a stop-gradient operation to prevent the network from collapsing. We also apply a feature separability constraint to obtain more representative embeddings. Experimental results show that our ULD-Net achieves the best results among context-based unsupervised methods and performance comparable to supervised models on shape classification and segmentation tasks. On the linear support vector machine classification benchmark, our ULD-Net surpasses the best prior context-based method, spatiotemporal self-supervised representation learning (STRL), by 1.1% in overall accuracy. With fine-tuning, our ULD-Net outperforms STRL under both fully supervised and semi-supervised settings, with a 0.1% accuracy gain on the ModelNet40 classification benchmark and a 0.6% mean intersection-over-union (mIoU) gain on the ShapeNet part segmentation benchmark.
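For intuition, the anti-collapse mechanism the abstract refers to is a SimSiam-style stop-gradient applied to one branch of the Siamese network. The sketch below is a minimal, hypothetical illustration of such a symmetrized similarity objective, not the authors' released code; the function names, tensor shapes, and symmetrization details are our assumptions:

```python
import torch
import torch.nn.functional as F

def neg_cosine(p: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    # Negative cosine similarity; z is detached so gradients flow only
    # through the predictor branch (the stop-gradient operation).
    z = z.detach()
    p = F.normalize(p, dim=-1)
    z = F.normalize(z, dim=-1)
    return -(p * z).sum(dim=-1).mean()

def siamese_loss(p1, z1, p2, z2) -> torch.Tensor:
    # Symmetrized objective over two augmented views.
    # p*: predictor outputs, z*: projector outputs.
    # The same form applies to instance-level features of shape (B, C)
    # and point-level features of shape (B, N, C), provided the point
    # rows of the two views are in correspondence (in ULD-Net, building
    # that global-to-local correspondence is the role of equiv-crop).
    return 0.5 * (neg_cosine(p1, z2) + neg_cosine(p2, z1))
```

Because the target branch is detached, minimizing this loss cannot trivially shrink both branches to a constant, which is why no negative pairs or momentum encoder are needed.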