Abstract

Graph neural networks (GNNs) have been successful in a variety of graph-based applications. Recently, it has been shown that capturing long-range relationships between nodes helps improve the performance of GNNs. This phenomenon has mostly been confirmed in supervised learning settings. In this article, inspired by contrastive learning (CL), we propose an unsupervised learning pipeline in which different types of long-range similarity information are efficiently injected into the GNN model. We reconstruct the original graph in the feature and topology spaces to generate three augmented views. During training, our model alternately picks an augmented view and maximizes the agreement between the representations of that view and the original graph. Importantly, we identify the issue of the diminishing utility of the augmented views as the model gradually learns useful information from them. Hence, we propose a view update scheme that adaptively adjusts the augmented views, so that the views continue to provide new information that helps with CL. The updated augmented views and the original graph are jointly used to train a shared GNN encoder by optimizing an efficient channel-level contrastive objective. We conduct extensive experiments on six assortative graphs and three disassortative graphs, which demonstrate the effectiveness of our method.
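The abstract does not specify the exact form of the encoder, the augmentations, or the channel-level objective, so the following is only a minimal PyTorch sketch of the alternating training loop it describes. It assumes dense normalized adjacency matrices, precomputed augmented views, and a Barlow-Twins-style channel decorrelation loss as one plausible instance of a "channel-level contrastive objective"; the names `SharedGNN` and `channel_contrastive_loss` are hypothetical, and the paper's adaptive view update scheme is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedGNN(nn.Module):
    """Two-layer GCN-style encoder on a dense normalized adjacency (assumed architecture)."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, out_dim)

    def forward(self, adj, x):
        h = F.relu(adj @ self.w1(x))   # one round of neighborhood aggregation
        return adj @ self.w2(h)        # node embeddings, shape (N, out_dim)

def channel_contrastive_loss(z1, z2, lam=5e-3):
    """Channel-level objective (assumed, Barlow-Twins-style): align each embedding
    channel across the two graphs and decorrelate distinct channels."""
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)    # standardize each channel
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = (z1.T @ z2) / z1.size(0)                   # (D, D) channel cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum() # matching channels should agree
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag

def train(encoder, adj, x, views, epochs=100, lr=1e-3):
    """adj, x: normalized adjacency and features of the original graph;
    views: list of three (adj_v, x_v) augmented graphs built in the feature and
    topology spaces (assumed given; the adaptive view update is not shown)."""
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for epoch in range(epochs):
        adj_v, x_v = views[epoch % len(views)]     # alternately pick one view
        z_orig = encoder(adj, x)
        z_view = encoder(adj_v, x_v)               # same shared encoder for both
        loss = channel_contrastive_loss(z_orig, z_view)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

A channel-level objective of this kind scales with the embedding dimension rather than the number of nodes, which is one way the pipeline could stay efficient on large graphs; the actual objective used in the paper may differ.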
