Abstract

Graph contrastive learning (GCL) has emerged as a powerful tool for addressing the widespread label scarcity found in real-world problems and has achieved impressive success in the graph learning domain. Despite this remarkable performance, most current works focus on designing sample augmentation methods, while the negative sample selection strategy, which is both practical and significant for graph contrastive learning, has been largely ignored. In this paper, we study the impact of negative samples on learning graph-level representations and propose Reinforcement Graph Contrastive Learning (ReinGCL) for negative sample selection. Concretely, our model consists of two major components: a graph contrastive learning framework (GCLF) and a selection distribution generator (SDG) that produces selection probabilities based on reinforcement learning (RL). The key insight is that ReinGCL leverages the SDG to guide the GCLF and narrow the divergence between the augmented positive pairs, thereby further improving graph representation learning. Extensive experiments demonstrate that our approach yields significantly superior performance compared to the state of the art.

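As a rough illustration of how an SDG-style selection distribution could re-weight negatives inside a contrastive objective, the following is a minimal sketch, not the paper's actual method. It assumes an NT-Xent-style loss over two augmented views of a batch of graphs; the function name `weighted_nt_xent`, the tensor shapes, and the `neg_probs` input (standing in for the SDG's output) are all hypothetical.

```python
import torch
import torch.nn.functional as F

def weighted_nt_xent(z1, z2, neg_probs, temperature=0.5):
    # z1, z2: (B, D) graph-level embeddings of two augmented views of the
    # same batch of graphs; row i of z1 and row i of z2 form a positive pair.
    # neg_probs: (B, B) hypothetical selection probabilities for negative
    # pairs, standing in for the output of an RL-based selector (the SDG).
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim = torch.exp(z1 @ z2.t() / temperature)     # pairwise view similarities
    pos = sim.diag()                               # positive-pair terms
    off_diag = 1.0 - torch.eye(sim.size(0), device=sim.device)
    neg = (sim * neg_probs * off_diag).sum(dim=1)  # re-weighted negative terms
    return -torch.log(pos / (pos + neg)).mean()

# Toy usage with random embeddings and a uniform selection distribution;
# an SDG would instead emit non-uniform probabilities to favor informative
# negatives.
B, D = 8, 64
z1, z2 = torch.randn(B, D), torch.randn(B, D)
neg_probs = torch.full((B, B), 1.0 / (B - 1))
print(weighted_nt_xent(z1, z2, neg_probs))
```

With a uniform distribution this reduces to a standard NT-Xent loss; concentrating `neg_probs` on a subset of pairs is one plausible way a learned selector could emphasize informative negatives.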