Abstract
Contrastive learning has become a successful approach for learning powerful text and image representations in a self-supervised manner. Contrastive frameworks learn to distinguish between representations derived from augmentations of the same data point (positive pairs) and those of other (negative) examples. Recent studies aim to extend contrastive learning methods to graph data. In this work, we propose a general framework for learning node representations in a self-supervised manner, called Graph Contrastive Learning (GraphCL). It learns node embeddings by maximizing the similarity between the node representations of two randomly perturbed versions of the same graph. We use graph neural networks to produce two representations of the same node and leverage a contrastive learning loss to maximize agreement between them. We investigate standard and new negative sampling strategies, as well as an approach that requires no negative sampling. We demonstrate that our approach significantly outperforms the state of the art in unsupervised learning on a number of node classification benchmarks, in both transductive and inductive learning setups.
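To make the two-view contrastive objective concrete, the sketch below implements an NT-Xent-style loss over node embeddings from two perturbed views of the same graph. This is a minimal sketch under assumptions: the abstract does not fix the exact loss or encoder, so the loss form, the `temperature` parameter, and the name `nt_xent_loss` are illustrative choices rather than the paper's stated method.

```python
# Hedged sketch of a two-view contrastive objective (NT-Xent style, a common
# choice in contrastive frameworks; assumed here, not specified by the paper).
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss between two views z1, z2 of shape (num_nodes, dim).

    Node i in view 1 and node i in view 2 form a positive pair; all other
    nodes in either view act as negatives.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2n, dim), unit norm
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # Row i's positive sits at i + n (and vice versa for the second view).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```

In use, `z1` and `z2` would be the embeddings produced by the same GNN encoder applied to two random perturbations of the graph, e.g. `loss = nt_xent_loss(gnn(view1), gnn(view2))`, where `gnn`, `view1`, and `view2` are hypothetical placeholders for the encoder and augmented inputs.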