Abstract

In recent years, contrastive learning has emerged as a successful method for unsupervised graph representation learning: it generates two or more views via data augmentation and maximizes the mutual information between them. Prior approaches usually adopt naive augmentation strategies or ignore the rich global information in the graph structure, leading to suboptimal performance. This paper proposes MPGCL, a contrast-based unsupervised graph representation learning framework. Since data augmentation is key to contrastive learning, we construct a higher-order network by injecting similarity-based global information into the original graph; adaptive and random augmentation strategies are then combined to generate two views with complementary semantics, which preserve important semantic information without being overly similar. In addition, whereas previous methods treat only the same node across views as a positive sample, MPGCL identifies positive samples by capturing global information. In extensive experiments on eight real-world benchmark datasets, MPGCL outperforms both state-of-the-art unsupervised competitors and fully supervised methods on the downstream node-classification task. The code is available at: https://github.com/asfdd3/-miao/tree/src/MPGCL
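The pipeline the abstract describes — injecting similarity-based global edges, generating augmented views, and contrasting node embeddings across views — can be illustrated with a minimal sketch. This is not the authors' implementation; all function names, the toy one-layer encoder, and the use of plain random edge dropping (standing in for the paper's combined adaptive/random augmentation) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def higher_order_adjacency(adj, feats, k=1):
    """Hypothetical global-information injection: add the k most
    feature-similar currently-missing edges to the adjacency matrix."""
    sim = feats @ feats.T                 # similarity on row-normalised features
    np.fill_diagonal(sim, -np.inf)
    sim[adj > 0] = -np.inf                # only consider missing edges
    aug = adj.copy()
    for _ in range(k):
        i, j = np.unravel_index(np.argmax(sim), sim.shape)
        aug[i, j] = aug[j, i] = 1.0
        sim[i, j] = sim[j, i] = -np.inf
    return aug

def random_edge_drop(adj, p=0.2):
    """One augmented view: drop each undirected edge with probability p."""
    keep = np.triu(rng.random(adj.shape) >= p, 1)
    view = np.triu(adj, 1) * keep
    return view + view.T

def embed(adj, feats):
    """Toy one-layer mean propagation standing in for a GNN encoder."""
    h = (adj + np.eye(len(adj))) @ feats / (adj.sum(1, keepdims=True) + 1.0)
    return h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-8)

def info_nce(z1, z2, tau=0.5):
    """InfoNCE-style loss: the same node in the two views is the positive."""
    logits = z1 @ z2.T / tau
    logits -= logits.max(1, keepdims=True)            # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(1, keepdims=True))
    return -np.diag(log_prob).mean()

# Tiny 4-node path graph with random unit-norm features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = rng.standard_normal((4, 8))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)

ho = higher_order_adjacency(adj, feats)               # inject one global edge
v1, v2 = random_edge_drop(ho), random_edge_drop(ho)   # two views
loss = info_nce(embed(v1, feats), embed(v2, feats))
print(float(loss))
```

Minimizing a loss of this form pulls each node's embeddings in the two views together while pushing apart embeddings of different nodes, which is the mutual-information-maximization objective the abstract refers to.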
