Abstract

• Graph-based neural network models exploiting multiple self-supervised auxiliary tasks.
• We propose three new self-supervised auxiliary tasks for graph-based neural networks:
• Vertex feature autoencoding.
• Corrupted vertex feature reconstruction.
• Corrupted vertex embedding reconstruction.

Self-supervised learning is currently attracting considerable attention, as it allows neural networks to learn robust representations from large quantities of unlabeled data. Additionally, multi-task learning can further improve representation learning by training a network simultaneously on several related tasks, yielding significant performance gains. In this paper, we propose three novel self-supervised auxiliary tasks for training graph-based neural network models in a multi-task fashion. Since Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points, we use them as a building block and achieve competitive results on standard semi-supervised graph classification tasks.
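
As one possible illustration of the setup the abstract describes (a sketch, not the authors' implementation), the snippet below trains a two-layer GCN encoder jointly on semi-supervised node classification and the first auxiliary task, vertex feature autoencoding. The dense normalized adjacency `a_hat`, feature matrix `x`, labels `y`, boolean `train_mask`, and loss weight `alpha` are all hypothetical placeholders assumed to be supplied by the caller.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """A single graph convolution: A_hat @ (X W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, x):
        # Propagate linearly transformed features over the graph.
        return a_hat @ self.lin(x)

class MultiTaskGCN(nn.Module):
    """GCN encoder with a classification head plus an auxiliary decoder.

    Hypothetical architecture sketch; the paper may differ in depth,
    widths, and how heads share the encoder.
    """
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.enc = GCNLayer(in_dim, hid_dim)
        self.cls_head = GCNLayer(hid_dim, n_classes)  # main (supervised) task
        self.dec_head = nn.Linear(hid_dim, in_dim)    # vertex feature decoder

    def forward(self, a_hat, x):
        h = F.relu(self.enc(a_hat, x))
        logits = self.cls_head(a_hat, h)  # node classification logits
        x_rec = self.dec_head(h)          # reconstructed vertex features
        return logits, x_rec

def train_step(model, opt, a_hat, x, y, train_mask, alpha=0.5):
    """One multi-task update; `alpha` weighting the auxiliary loss is assumed."""
    model.train()
    opt.zero_grad()
    logits, x_rec = model(a_hat, x)
    # Supervised loss only on the few labeled vertices (semi-supervised setting).
    loss_sup = F.cross_entropy(logits[train_mask], y[train_mask])
    # Self-supervised autoencoding loss on all vertices.
    loss_aux = F.mse_loss(x_rec, x)
    loss = loss_sup + alpha * loss_aux
    loss.backward()
    opt.step()
    return loss.item()
```

The other two auxiliary tasks fit the same template: corrupted vertex feature reconstruction would feed a masked or noise-perturbed copy of `x` to the encoder while keeping the clean `x` as the reconstruction target, and corrupted vertex embedding reconstruction would regress embeddings computed from the corrupted input onto those computed from the clean one. This is an interpretation of the task names in the highlights, not the paper's exact formulation.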
