Self-supervised graph representation learning (SSGRL) has emerged as a promising approach to learning graph embeddings because it does not rely on manual labels. SSGRL methods are generally divided into generative and contrastive approaches. Generative methods are sensitive to the quality of the input graph, while contrastive methods, which compare augmented views, are more resistant to noise. However, the performance of contrastive methods depends heavily on well-designed data augmentations and high-quality negative samples. Neither purely generative nor purely contrastive methods can balance robustness and performance on their own. To address these issues, we propose a self-supervised graph representation learning method that integrates generative and contrastive ideas, namely Contrastive Generative Message Passing Graph Learning (CGMP-GL). CGMP-GL incorporates the concept of contrast into both the generative model and the message-aggregation module, enhancing the discriminability of node representations by aligning positive samples and separating negative samples. On one hand, CGMP-GL integrates multi-granularity topology and feature information through cross-view multi-level contrast while reconstructing masked node features; on the other hand, it optimizes node representations through self-supervised contrastive message passing, thereby improving performance on a variety of downstream tasks. Extensive experiments across multiple datasets and downstream tasks demonstrate the effectiveness and robustness of CGMP-GL.
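As a rough illustration of the hybrid objective the abstract describes, the PyTorch sketch below combines a masked-node-feature reconstruction loss (generative term) with a cross-view InfoNCE contrast (contrastive term). This is not the authors' implementation: the encoder/decoder interfaces, the mask rate, the temperature `tau`, and the loss weight `lam` are all assumptions made for illustration.

```python
# Hypothetical sketch of a combined generative + contrastive objective in the
# spirit of CGMP-GL; module names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def mask_features(x: torch.Tensor, mask_rate: float = 0.5):
    """Zero out a random fraction of node feature rows; return masked input and mask."""
    mask = torch.rand(x.size(0), device=x.device) < mask_rate
    x_masked = x.clone()
    x_masked[mask] = 0.0
    return x_masked, mask

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5):
    """Cross-view InfoNCE: node i in view 1 is positive with node i in view 2;
    every other node in view 2 serves as a negative."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                   # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def combined_loss(encoder, decoder, x, adj, lam: float = 1.0):
    """Masked-feature reconstruction plus cross-view contrast.
    `encoder`/`decoder` stand in for any GNN modules taking (features, adjacency)."""
    x_masked, mask = mask_features(x)
    z1 = encoder(x_masked, adj)                  # view 1: masked input
    z2 = encoder(x, adj)                         # view 2: full input
    x_rec = decoder(z1, adj)
    rec_loss = F.mse_loss(x_rec[mask], x[mask])  # generative term on masked nodes
    con_loss = info_nce(z1, z2)                  # contrastive alignment term
    return rec_loss + lam * con_loss
```

Under these assumptions, aligning the masked and unmasked views pushes the encoder to produce representations that are both reconstructive and discriminative, which is the balance the abstract attributes to combining the two paradigms.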