Abstract

Graph representation learning has become the de facto standard for dealing with graph-structured data. Using powerful tools from deep learning and graph neural networks, recent works have applied graph representation learning to time-evolving dynamic graphs and shown promising results. However, all previous dynamic graph models require labeled samples to train, which can be costly to acquire in practice. Self-supervision offers a principled way of utilizing unlabeled data and has achieved great success in the computer vision community. In this paper, we propose debiased dynamic graph contrastive learning (DDGCL), the first self-supervised representation learning framework on dynamic graphs. The proposed model extends the contrastive learning idea to dynamic graphs by contrasting two nearby temporal views of the same node identity, using a time-dependent similarity critic. Inspired by recent theoretical developments in contrastive learning, we propose a novel debiased GAN-type contrastive loss as the learning objective, which corrects the sampling bias incurred in the negative-sample construction process. We conduct extensive experiments on benchmark datasets, testing the DDGCL framework under two self-supervision schemes: pretraining-and-finetuning and multi-task learning. The results show that with a simple time-aware GNN encoder, downstream task performance improves significantly under either scheme, closely matching or even outperforming state-of-the-art dynamic graph models with more sophisticated encoder architectures. Further empirical evaluation suggests that the proposed approach offers a larger performance improvement than self-supervision mechanisms previously established for static graphs.
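To make the two core ingredients concrete, the sketch below illustrates (i) a time-dependent similarity critic that scores two temporal views of the same node while discounting pairs that are far apart in time, and (ii) a debiased contrastive loss in the style of Chuang et al. (2020), which corrects for sampled "negatives" that are in fact positives. This is a minimal illustrative sketch only: the class and function names (`TimeCritic`, `debiased_contrastive_loss`), the bilinear critic with an exponential time discount, and the specific debiasing formula are assumptions for exposition, not the paper's published implementation or its exact GAN-type objective.

```python
# Illustrative sketch; names and functional forms are assumptions, not DDGCL's code.
import torch
import torch.nn as nn


class TimeCritic(nn.Module):
    """Time-dependent similarity critic: scores a pair of node embeddings,
    down-weighting pairs whose temporal views are far apart in time."""

    def __init__(self, dim: int, decay: float = 0.1):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)  # learnable pairwise similarity
        self.decay = decay                        # assumed exponential decay rate

    def forward(self, z1: torch.Tensor, z2: torch.Tensor, dt: torch.Tensor) -> torch.Tensor:
        # z1, z2: (batch, dim) embeddings of two temporal views; dt: (batch,) |t1 - t2|
        sim = self.bilinear(z1, z2).squeeze(-1)       # (batch,)
        return sim * torch.exp(-self.decay * dt)      # discount temporally distant pairs


def debiased_contrastive_loss(pos_score: torch.Tensor,
                              neg_scores: torch.Tensor,
                              tau_plus: float = 0.1) -> torch.Tensor:
    """Debiased NCE-style contrastive loss (after Chuang et al., 2020).

    pos_score:  (batch,)    critic score for the positive pair
    neg_scores: (batch, N)  critic scores for N sampled negatives
    tau_plus:   assumed prior probability that a sampled negative is a positive
    """
    n = neg_scores.size(1)
    pos = torch.exp(pos_score)                    # (batch,)
    neg = torch.exp(neg_scores).mean(dim=1)       # (batch,) average negative term
    # Estimate the true-negative contribution; clamping keeps the estimate positive.
    ng = torch.clamp((neg - tau_plus * pos) / (1.0 - tau_plus), min=1e-6)
    return -torch.log(pos / (pos + n * ng)).mean()
```

In a pretraining loop, `z1` and `z2` would come from a time-aware GNN encoder applied to two nearby temporal views of the same node, with negatives drawn from other nodes; setting `tau_plus=0` recovers the standard (biased) NCE objective.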
