Abstract
Unsupervised graph representation learning methods based on contrastive learning have drawn increasing attention and achieved promising performance. Most of these methods model only instance-level feature similarity while ignoring the underlying semantic structure of the whole data. In this paper, we propose a Graph Prototypical Contrastive Learning (GPCL) framework for unsupervised graph representation learning. Besides modeling instance-level feature similarity, GPCL explores the underlying semantic structure of the whole data. Specifically, we introduce an instance-prototype contrastive objective to learn representations that are invariant to intra-class variation and discriminative across classes. Meanwhile, a prototype-prototype contrastive objective is proposed to encourage clustering consistency between instances in the same cluster and their augmentations. To optimize the model, we formulate GPCL as an online Expectation–Maximization framework: we iteratively perform an E-step, estimating the posterior probability of prototype assignments through online clustering, and an M-step, optimizing model parameters through graph prototypical contrastive learning. We evaluate GPCL on various graph benchmarks, and the experimental results verify the superiority of our method.
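The instance-prototype objective described above can be illustrated with a minimal sketch. The abstract does not give the exact loss, so the form below is an assumption: a ProtoNCE-style cross-entropy that pulls each embedding toward its assigned prototype and pushes it away from the others, with hard k-means-style assignments standing in for the E-step's online clustering. All function names and the temperature `tau` are hypothetical.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Unit-normalize embeddings so dot products are cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def instance_prototype_loss(z, prototypes, assignments, tau=0.5):
    """Assumed ProtoNCE-style loss: softmax over instance-prototype
    similarities, with the assigned prototype as the positive."""
    z = l2_normalize(z)
    c = l2_normalize(prototypes)
    logits = z @ c.T / tau                         # (N, K) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(z)), assignments].mean()

# E-step stand-in: hard cluster assignments define the prototypes as
# cluster means; the M-step would minimize this loss w.r.t. the encoder.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 4))                        # toy graph embeddings
assignments = np.array([0, 0, 1, 1, 2, 2, 0, 1])   # toy cluster labels
prototypes = np.stack([z[assignments == k].mean(axis=0) for k in range(3)])
loss = instance_prototype_loss(z, prototypes, assignments)
print(float(loss))
```

In the full framework the assignments would come from online clustering over the learned representations, and a prototype-prototype term would additionally align cluster structure between an instance and its augmentations.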