Abstract

Graph neural networks (GNNs) are a powerful representation learning framework for graph-structured data. Several GNN-based graph embedding methods, including the variational graph autoencoder (VGAE), have been proposed recently. However, existing VGAE-based methods typically focus on reconstructing the adjacency matrix, i.e. the topological structure, rather than the node feature matrix. This strategy makes node features difficult to learn fully, which weakens and restricts the capacity of a generative network to learn higher-quality representations. To address this issue, we apply a contrastive estimator to the representation mechanism, i.e. to the encoding process, under the VGAE framework. In particular, we maximize the mutual information (MI) between the encoded latent representation and the node attributes, which acts as a regularizer forcing the encoder to select the representation that is most informative with respect to the node attributes. Additionally, we address another key question: how to effectively estimate the mutual information by drawing samples from the joint and the marginal distributions, and we explain why maximizing MI helps the encoder capture more node feature information. Finally, extensive experiments on three citation networks and four webpage networks show that our method outperforms popular contemporary algorithms (such as DGI) on node classification and clustering tasks, and the best result is an [Formula: see text] increase on the node clustering task.
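The abstract only sketches the contrastive MI term, so the following is a minimal illustration, assuming a PyTorch setting, of how a noise-contrastive (Jensen-Shannon style) MI estimate between node features and encoder outputs could be formed by scoring samples from the joint against samples from the product of marginals. The names (BilinearDiscriminator, mi_regularizer), the bilinear scoring function, and the shuffling scheme are assumptions for illustration, not the paper's actual implementation.

```python
# Sketch only: a contrastive MI regularizer between node features X and
# encoder outputs Z, in the spirit described in the abstract. All names and
# design choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BilinearDiscriminator(nn.Module):
    """Scores (feature, latent) pairs; true pairs should receive high scores."""

    def __init__(self, feat_dim: int, latent_dim: int):
        super().__init__()
        self.bilinear = nn.Bilinear(feat_dim, latent_dim, 1)

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, feat_dim], z: [num_nodes, latent_dim]
        return self.bilinear(x, z).squeeze(-1)  # [num_nodes]


def mi_regularizer(disc: BilinearDiscriminator,
                   x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """Noise-contrastive (Jensen-Shannon style) MI lower bound.

    Positive pairs are drawn from the joint p(x, z); negative pairs match each
    latent with a randomly shuffled feature row, approximating the product of
    marginals. Minimizing the returned loss maximizes the MI estimate.
    """
    perm = torch.randperm(x.size(0))
    pos_logits = disc(x, z)        # samples from the joint
    neg_logits = disc(x[perm], z)  # samples from the product of marginals
    pos_loss = F.binary_cross_entropy_with_logits(
        pos_logits, torch.ones_like(pos_logits))
    neg_loss = F.binary_cross_entropy_with_logits(
        neg_logits, torch.zeros_like(neg_logits))
    return pos_loss + neg_loss
```

In a VGAE training step, such a term would presumably be added to the usual reconstruction and KL objectives, e.g. total_loss = recon_loss + kl_loss + beta * mi_regularizer(disc, X, Z), where beta is a hypothetical weighting coefficient not specified in the abstract.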
