Abstract

Graph representation learning, which aims to learn a discriminative representation for each node in a graph, has attracted a surge of interest recently. Most existing methods focus on supervised learning and depend heavily on label information. However, annotating graphs is expensive in the real world, especially in specialized domains (e.g., biology), because it requires annotators with domain knowledge. To address this problem, self-supervised learning offers a feasible solution for graph representation learning. In this paper, we propose a Multi-Level Graph Contrastive Learning (MLGCL) framework that learns robust representations of graph data by contrasting space views of graphs. Specifically, we introduce a novel contrastive view, the space view. The original graph is a first-order approximation structure in topological space, in which nodes are linked by feature similarity, relationships, etc., while the k-nearest neighbor (kNN) graph with community structure generated from encoded features preserves high-order proximity in feature space. The kNN graph not only provides a view complementary to the original graph from the feature-space perspective but is also well suited to GNN encoders. Furthermore, we develop a multi-level contrastive mode that simultaneously preserves the local similarity and semantic similarity of graph-structured data. Extensive experiments show that MLGCL achieves promising results compared with existing state-of-the-art graph representation learning methods on seven node classification datasets and three graph classification datasets.
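The feature-space view described above rests on building a kNN graph from encoded node features. The following is a minimal sketch of that step, assuming cosine similarity over the feature rows; the function name `knn_graph` and the choice of similarity measure are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def knn_graph(features: np.ndarray, k: int) -> np.ndarray:
    """Build a kNN adjacency matrix from node features via cosine similarity.

    Hypothetical helper illustrating the feature-space view; the paper's
    exact construction may differ.
    """
    # Normalize rows so a dot product equals cosine similarity.
    norm = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = norm @ norm.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    # For each node, keep its k most similar neighbors.
    nbrs = np.argsort(-sim, axis=1)[:, :k]
    n = features.shape[0]
    adj = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    adj[rows, nbrs.ravel()] = 1.0
    # Symmetrize so the result is an undirected graph view for a GNN encoder.
    return np.maximum(adj, adj.T)

rng = np.random.default_rng(0)
A = knn_graph(rng.normal(size=(6, 4)), k=2)
```

The resulting adjacency matrix can then be fed to the same GNN encoder as the original topology, giving the two views to contrast.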
