Abstract

Whereas contrastive learning eliminates the need for labeled data, existing methods may suffer from inadequate features due to the conventional single-shared-encoder structure and struggle to fully harness the rich spectrum of 3D augmentations. In this paper, we propose TriCI, a self-supervised method built on a triple-branch contrastive learning architecture. During contrastive pre-training, we generate three augmented versions of each input point cloud and pair each augmented sample with the original, yielding three distinct positive pairs. We then feed the pairs into three separate encoders, each of which extracts features from its corresponding positive pair. We design a novel cross-branch contrastive loss and use it together with an intra-branch contrastive loss to jointly train the network. The cross-branch loss aligns the output features from different perspectives during pre-training and facilitates their integration for downstream tasks, particularly in object-level scenarios, while the intra-branch loss maximizes the feature correspondence within each positive pair. Extensive experiments demonstrate the superiority of TriCI in self-supervised learning and its strong ability to improve performance on downstream object classification and part segmentation tasks. Notably, TriCI achieves 92.9% accuracy under linear SVM evaluation on ModelNet40, exceeding its closest competitor by 1.7% and even surpassing some supervised methods.
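To make the described training objective concrete, the following is a minimal PyTorch sketch of one triple-branch pre-training step, assuming NT-Xent-style contrastive terms for both the intra-branch and cross-branch losses. The encoder, the three augmentations, the loss weight "lam", and all names (nt_xent, PointEncoder, training_step) are illustrative assumptions, not the paper's actual implementation.

import math
import random

import torch
import torch.nn as nn
import torch.nn.functional as F


def nt_xent(z_a, z_b, temperature=0.07):
    # Standard NT-Xent loss: matching rows of z_a and z_b are positives,
    # all other rows in the batch serve as negatives.
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature               # (B, B) similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)


class PointEncoder(nn.Module):
    # Placeholder PointNet-style encoder: per-point MLP + max pooling.
    def __init__(self, dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, pts):                            # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values         # global feature: (B, dim)


def rotate_z(x):
    # Random rotation about the z-axis (one example 3D augmentation).
    theta = random.uniform(0.0, 2.0 * math.pi)
    c, s = math.cos(theta), math.sin(theta)
    R = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return x @ R.T


def jitter(x):
    # Small Gaussian noise on point coordinates.
    return x + 0.01 * torch.randn_like(x)


def scale(x):
    # Random uniform per-sample scaling.
    return x * torch.empty(x.size(0), 1, 1).uniform_(0.8, 1.2)


encoders = nn.ModuleList(PointEncoder() for _ in range(3))  # one encoder per branch
augmentations = [rotate_z, jitter, scale]                   # one augmentation per branch


def training_step(points, lam=0.5):
    # points: (B, N, 3). Each branch pairs the original cloud with its own
    # augmented view and computes an intra-branch contrastive loss; the
    # cross-branch term then aligns features of the same sample across branches.
    branch_feats = []
    intra = 0.0
    for enc, aug in zip(encoders, augmentations):
        z_orig = enc(points)
        z_aug = enc(aug(points))
        intra = intra + nt_xent(z_orig, z_aug)
        branch_feats.append(z_aug)
    cross = sum(nt_xent(branch_feats[i], branch_feats[j])
                for i in range(3) for j in range(i + 1, 3))
    return intra + lam * cross


# Example: loss = training_step(torch.randn(8, 1024, 3)); loss.backward()

In this sketch, each branch sees a different augmentation family, so the cross-branch term encourages the three encoders to agree on a sample's identity across distinct views, matching the abstract's description at a structural level only.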
