Abstract

Graph representation learning is an effective tool for facilitating graph analysis with machine learning methods. Most graph neural networks (GNNs), including Graph Convolutional Networks (GCN), Graph Recurrent Neural Networks (GRNN), and Graph Auto-Encoders (GAE), represent nodes with deterministic vectors and do not exploit the uncertainty in the hidden variables. The Variational Graph Auto-Encoder (VGAE) framework addresses this issue by combining GAEs with deep generative models. While traditional VGAE-based methods can capture hidden and hierarchical dependencies in the latent space, they struggle to handle the multimodality of the data. Here, we propose using a Gaussian Mixture Model (GMM) as the prior distribution in VGAE. Furthermore, adversarial regularization is incorporated into the proposed approach to ensure that the latent representations meaningfully benefit the results. We demonstrate the performance of the proposed method on clustering and link prediction tasks. Our experimental results on real datasets show strong performance compared to state-of-the-art methods.
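To make the idea concrete, a minimal sketch of the objective of a VGAE with a Gaussian-mixture prior can be written as a standard evidence lower bound; the notation below (K components, mixture weights \pi_k, component parameters \mu_k, \sigma_k, encoder q_\phi, decoder p_\theta) is assumed for illustration and is not taken from the paper itself:

\[
\mathcal{L}(\theta,\phi) \;=\; \mathbb{E}_{q_\phi(\mathbf{Z}\mid \mathbf{X},\mathbf{A})}\!\left[\log p_\theta(\mathbf{A}\mid \mathbf{Z})\right] \;-\; \mathrm{KL}\!\left(q_\phi(\mathbf{Z}\mid \mathbf{X},\mathbf{A}) \,\|\, p(\mathbf{Z})\right),
\qquad
p(\mathbf{z}_i) \;=\; \sum_{k=1}^{K} \pi_k\, \mathcal{N}\!\left(\mathbf{z}_i;\, \boldsymbol{\mu}_k,\, \mathrm{diag}(\boldsymbol{\sigma}_k^2)\right).
\]

Unlike the single standard Gaussian prior of the original VGAE, the KL term against a mixture prior generally has no closed form and is typically estimated by Monte Carlo sampling or handled via an auxiliary component-assignment variable; the adversarial regularization described in the abstract would act as an additional term encouraging the aggregated posterior to match this prior.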
