Abstract

Graph Neural Networks (GNNs) are powerful neural models for representation learning on graphs. In this work we study GraphSSGAN (Graph Self-Supervised Generative Adversarial Network), a self-supervised GNN method that learns generalizable node representations using only unlabeled data. The core idea is to learn representations that capture both local link-level information and the global plausibility of sampled subgraphs. Specifically, the node features of the original graph are first encoded into latent representations by optimizing a link prediction objective. A generator then maps noise vectors to node representations in this latent space and predicts link probabilities from the generated representations. The discriminator uses a graph convolutional network (GCN) to produce permutation-invariant graph-level embeddings, and its intermediate node representations are fed to simple classifiers in downstream tasks. In addition, we introduce several techniques, including the Gumbel-Top-k trick, the Gumbel-Softmax trick, and mini-batch training via subgraph sampling, to improve training. Through extensive experiments on node classification and link prediction, we demonstrate the effectiveness of the proposed model and the contribution of the GAN framework to graph representation learning.
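The abstract mentions two discrete-sampling relaxations used during training. As a minimal sketch, the PyTorch snippet below illustrates the generic Gumbel-Softmax and Gumbel-Top-k tricks; the function names and the temperature parameter `tau` are illustrative assumptions and are not taken from the GraphSSGAN implementation.

```python
import torch
import torch.nn.functional as F

def _gumbel_noise(shape_like: torch.Tensor) -> torch.Tensor:
    # Sample Gumbel(0, 1) noise; the small epsilon avoids log(0).
    u = torch.rand_like(shape_like)
    return -torch.log(-torch.log(u + 1e-20) + 1e-20)

def gumbel_softmax_sample(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Differentiable (soft) sample from a categorical distribution.

    Adds Gumbel noise to the logits and applies a temperature-scaled softmax,
    so gradients can flow through the sampling step.
    """
    return F.softmax((logits + _gumbel_noise(logits)) / tau, dim=-1)

def gumbel_top_k(logits: torch.Tensor, k: int) -> torch.Tensor:
    """Sample k distinct indices without replacement via the Gumbel-Top-k trick."""
    return torch.topk(logits + _gumbel_noise(logits), k, dim=-1).indices
```

In a subgraph-sampling setting, Gumbel-Top-k could, for example, select k candidate nodes or edges per mini-batch while Gumbel-Softmax keeps the selection differentiable for the generator; how GraphSSGAN applies them specifically is detailed in the full paper.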
