Abstract

The proliferation of depth cameras and LiDAR sensors in real industrial environments has fueled the pursuit of effective and efficient 3D point cloud models that enable us to perceive and interact with the physical world. However, the intrinsic complexity of 3D semantic information poses significant challenges to model design, including spatial rotation invariance and the irregular structure of point clouds, which fundamentally impact the representation and behavior of 3D point cloud systems. Existing methods have either relied heavily on labeling information in a supervised learning setting or failed to effectively capture the inherent patterns of 3D point clouds within a self-supervised learning framework, leading to poor performance on specific downstream tasks. To address these limitations, this paper introduces a self-supervised framework, the Dual-Cross Contrastive Neural Network (DCCN), for 3D point cloud representation learning. DCCN leverages cross-view, cross-network, and domain-specific knowledge distillation to enhance the extraction of hidden features from point clouds and fully exploit the capabilities of the encoder. DCCN employs a pseudo-Siamese network consisting of an online network and a target network, facilitating knowledge interaction and distillation. The method extracts internal states from augmented 3D point clouds by learning cross-view relationships and optimizes model parameters through intra-modal cross-network learning. We incorporate a momentum-updating mechanism without shared weights in the Siamese architecture to distill knowledge and enhance the role differentiation between the online and target networks. Experimental results demonstrate that our approach outperforms a range of supervised and self-supervised learning methods across four downstream tasks on three representative datasets.
Ablation studies validate the component-wise effectiveness of the cross-view, cross-network, and momentum-updating learning objectives in achieving superior point cloud representations. Overall, the findings establish our method, DCCN, as an effective solution for 3D point cloud representation learning in real-world applications.
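The abstract does not give the exact update rule for the momentum mechanism, but momentum updating in pseudo-Siamese frameworks of this kind typically means an exponential moving average (EMA) of the online network's parameters into the target network, which is not updated by gradient descent. A minimal sketch, with the momentum coefficient `m` and the parameter lists purely illustrative, might look like:

```python
def momentum_update(target_params, online_params, m=0.99):
    """EMA update: theta_target <- m * theta_target + (1 - m) * theta_online.

    The target network receives no gradients; it only tracks a slowly
    moving average of the online network, so the two networks keep
    distinct roles despite sharing an architecture.
    """
    return [m * t + (1 - m) * o for t, o in zip(target_params, online_params)]

# Illustrative scalar parameters (real networks would hold tensors).
target = [1.0, -2.0]
online = [0.0, 0.0]
target = momentum_update(target, online, m=0.9)
print(target)  # -> [0.9, -1.8]
```

With `m` close to 1, the target network evolves slowly and provides stable distillation targets for the online network's contrastive objectives.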
