Abstract

Graph representation learning aims to learn low-dimensional representations of graphs and has played a vital role in real-world applications. Because it requires no additional labeled data, contrastive-learning-based graph representation learning (graph contrastive learning) has attracted considerable attention. Recently, one of the most exciting advances in graph contrastive learning is Deep Graph Infomax (DGI), which maximizes the mutual information (MI) between node and graph representations. However, DGI considers only contextual node information and ignores intrinsic node information (i.e., the similarity between node representations in different views). In this paper, we propose a novel Cross-scale Contrastive Triplet Networks (CCTN) framework that captures both contextual and intrinsic node information for graph representation learning. Specifically, to obtain contextual node information, we use an infomax contrastive network that maximizes the MI between node and graph representations. To acquire intrinsic node information, we present a Siamese contrastive network that maximizes the similarity between node representations in different augmented views. The two contrastive networks learn jointly through a shared graph convolution network, forming our cross-scale contrastive triplet networks. Finally, we evaluate CCTN on six real-world datasets. Extensive experimental results demonstrate that CCTN achieves state-of-the-art performance on node classification and clustering tasks.
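The two objectives in the abstract can be sketched as a combined loss. This is a minimal, hypothetical illustration (not the authors' implementation): the contextual term follows the DGI recipe, scoring real nodes against a mean-pooled graph summary versus corrupted nodes, with a plain dot-product discriminator standing in for DGI's bilinear one; the intrinsic term is cosine similarity between the same node's embeddings in two augmented views. All function names and the equal weighting of the two terms are assumptions for illustration.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    # intrinsic signal: similarity between two embeddings of the same node
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

def mean_pool(nodes):
    # graph-level summary via mean readout over node embeddings
    d = len(nodes[0])
    return [sum(n[i] for n in nodes) / len(nodes) for i in range(d)]

def infomax_score(node, summary):
    # contextual signal: discriminator score between a node and the graph
    # summary (sigmoid of a dot product, a stand-in for DGI's bilinear form)
    return 1.0 / (1.0 + math.exp(-dot(node, summary)))

def cctn_loss(view1, view2, corrupted):
    """Hypothetical combined objective: an infomax term (real nodes scored
    high, corrupted nodes low against the graph summary) plus a Siamese
    term (maximize cross-view cosine similarity per node)."""
    summary = mean_pool(view1)
    # contextual term: binary cross-entropy over real vs. corrupted nodes
    mi = -sum(math.log(infomax_score(n, summary)) for n in view1)
    mi -= sum(math.log(1.0 - infomax_score(n, summary)) for n in corrupted)
    mi /= len(view1) + len(corrupted)
    # intrinsic term: negative mean cross-view similarity
    sim = -sum(cosine(a, b) for a, b in zip(view1, view2)) / len(view1)
    return mi + sim
```

In a full model both views would come from graph augmentations passed through the shared graph convolution network, and the loss would be minimized by gradient descent; here the loss is simply lower when the two views agree node-by-node.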
