Abstract

We propose a novel algorithm for unsupervised graph representation learning on attributed graphs. It combines three advantages that address current limitations in the literature: (i) the model is inductive: it can embed new graphs without re-training when new data arrive; (ii) the method accounts for both micro-structures and macro-structures by looking at the attributed graphs at different scales; (iii) the model is end-to-end differentiable: it is a building block that can be plugged into deep learning pipelines and allows for back-propagation. We show that combining a coarsening method with strong theoretical guarantees and mutual information maximization suffices to produce high-quality embeddings. We evaluate these embeddings on classification tasks with standard benchmarks from the literature, and show that our algorithm is competitive with the state of the art among unsupervised graph representation learning methods.
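
The abstract only names the ingredients, so here is a minimal, hypothetical sketch of that recipe: encode the attributed graph with a small GNN, read out a graph-level summary, maximize mutual information between node and summary representations (in the spirit of Deep Graph Infomax, via a Jensen-Shannon estimator), then coarsen the graph and repeat at the next scale. All names (`GCNLayer`, `coarsen`, `MultiScaleEncoder`) are ours, the pair-merging coarsening is a naive stand-in for Loukas's method, and the shuffled-row negatives are a cheap simplification; this is not the authors' implementation.

```python
# Minimal sketch (NOT the authors' implementation) of the recipe named in the
# abstract: encode the graph at several scales produced by coarsening, and train
# by maximizing mutual information between node and graph-level representations.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One propagation step: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, a_hat, h):
        return torch.relu(a_hat @ self.lin(h))

def normalize(adj):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + torch.eye(adj.size(0))
    d = a.sum(1).rsqrt()
    return d[:, None] * a * d[None, :]

def coarsen(adj, x):
    """Naive placeholder for Loukas's coarsening: merge consecutive node pairs
    (drops a trailing node if the count is odd)."""
    n = adj.size(0) // 2
    P = torch.zeros(n, adj.size(0))       # assignment: nodes 2i, 2i+1 -> supernode i
    for i in range(n):
        P[i, 2 * i] = P[i, 2 * i + 1] = 0.5
    coarse = (P @ adj @ P.T > 0).float()
    coarse.fill_diagonal_(0)
    return coarse, P @ x                  # coarse adjacency and averaged features

class MultiScaleEncoder(nn.Module):
    def __init__(self, d, levels=2):
        super().__init__()
        self.layers = nn.ModuleList(GCNLayer(d, d) for _ in range(levels))
        self.disc = nn.Bilinear(d, d, 1)  # discriminator for the MI estimate

    def forward(self, adj, x):
        loss, summaries = 0.0, []
        for layer in self.layers:
            h = layer(normalize(adj), x)
            s = h.mean(0)                       # graph-level summary (mean readout)
            pos = self.disc(h, s.expand_as(h))  # true (node, summary) pairs
            # Shuffled rows as cheap negatives (DGI corrupts the input graph instead).
            neg = self.disc(h[torch.randperm(h.size(0))], s.expand_as(h))
            # Jensen-Shannon MI lower bound, written as a binary classification loss.
            bce = nn.functional.binary_cross_entropy_with_logits
            loss = loss + bce(pos, torch.ones_like(pos)) + bce(neg, torch.zeros_like(neg))
            summaries.append(s)
            adj, x = coarsen(adj, h)            # move on to the next, coarser scale
        return torch.cat(summaries), loss       # graph embedding and training loss
```

Because the readout, the discriminator, and the coarsened feature aggregation are all differentiable, calling `loss.backward()` propagates gradients through the whole pyramid, which is what advantage (iii) requires; the concatenated per-scale summaries act as the fixed-dimensional graph embedding.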

Highlights

  • Graphs are a canonical way of representing objects and relationships among them

  • A recent take on that is known as graph representation learning [1]: the graphs are embedded in a fixed dimensional latent space such that similar graphs share similar embeddings

  • Loukas’s coarsening exhibits good properties in this regard; we show that the advantage of Graph Neural Networks (GNNs) over the Weisfeiler-Lehman (WL) test is that they are continuous functions of the node features

Introduction

Graphs are a canonical way of representing objects and the relationships among them. They have proved remarkably well suited to many fields, such as chemistry, biology, the social sciences, and computer science in general. Many real-life problems involving high-dimensional objects and their links can be modeled using attributed graphs. Machine learning offers several ways to solve problems such as classification, clustering, or inference, provided that a sufficient number of training examples is available. The most classical frameworks are designed for data living in regular spaces (e.g., vector spaces) and are not suitable for attributed graphs. One way to overcome this issue is to represent or encode the attributed graphs in such a way that usual machine learning approaches become effective. A recent take on this is known as graph representation learning [1]: the graphs are embedded in a fixed-dimensional latent space such that similar graphs share similar embeddings.
