Abstract

Graph neural networks (GNNs) have demonstrated superior performance in various tasks on graphs. However, existing GNNs often suffer from weak generalization due to sparsely labeled datasets. Here we propose a novel framework that learns to augment the input features using topological information and automatically controls the strength of augmentation. Our framework trains the augmentor to minimize the GNN's loss on unseen labeled data while maximizing the consistency of the GNN's predictions on unlabeled data. This can be formulated as a meta-learning problem, and our framework alternately optimizes the augmentor and the GNN for a target task. Our extensive experiments demonstrate that the proposed framework is applicable to any GNN and significantly improves performance on node classification. In particular, our method provides a 5.78% average improvement with a graph convolutional network (GCN) across five benchmark datasets.
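To make the objective concrete, a minimal sketch of the bilevel formulation implied by the abstract follows; the symbols (augmentor g_φ, GNN f_θ, node features X, adjacency A, weighting λ) are illustrative notation chosen here, not necessarily the paper's.

```latex
\min_{\phi}\;
\mathcal{L}_{\mathrm{val}}\!\big(f_{\theta^{*}(\phi)}(X, A)\big)
\;+\;
\lambda\,\mathcal{L}_{\mathrm{cons}}\!\big(f_{\theta^{*}(\phi)}(X, A),\;
                                           f_{\theta^{*}(\phi)}(g_{\phi}(X, A), A)\big)
\quad\text{s.t.}\quad
\theta^{*}(\phi) \;=\; \arg\min_{\theta}\;
\mathcal{L}_{\mathrm{train}}\!\big(f_{\theta}(g_{\phi}(X, A), A)\big),
```

where L_train is the supervised loss on the labeled nodes under augmented features, L_val is the loss on held-out labeled nodes, and L_cons penalizes disagreement between predictions on original and augmented features for unlabeled nodes.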

Highlights

  • Graph neural networks (GNNs) [1] have been widely used for representation learning on graph-structured data due to their superior performance in various applications such as node classification [2]–[4], link prediction [5]–[8] and graph classification [9]–[11]

  • Our framework learns topology-aware input feature transformations and adaptively controls the strength of augmentation based on a GNN’s performance. It is formulated as a meta-learning problem and our framework explicitly maximizes the generalization power of graph neural networks that are trained on augmented data (see the sketch following this list)

  • Note that we study our augmentor in the context of Test-Time Augmentation (TTA) [49]–[52], so the augmentor is optimized by minimizing the loss on the augmented data rather than on the original data X
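As a concrete illustration of such a topology-aware augmentor, the PyTorch sketch below perturbs each node's features using a neighborhood aggregate and scales the perturbation with a learnable strength parameter. The class name, architecture, and dimensions are hypothetical, not the paper's implementation.

```python
import torch
import torch.nn as nn

class TopologyAwareAugmentor(nn.Module):
    """Illustrative augmentor: perturbs node features using information
    aggregated from graph neighbours, with a learnable augmentation strength.
    (Hypothetical sketch, not the paper's exact architecture.)"""

    def __init__(self, num_features, hidden_dim=64):
        super().__init__()
        self.noise_net = nn.Sequential(
            nn.Linear(2 * num_features, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_features),
        )
        # Learnable scalar controlling how strongly features are perturbed.
        self.strength = nn.Parameter(torch.tensor(0.1))

    def forward(self, x, adj_norm):
        # adj_norm: row-normalised adjacency, so that adj_norm @ x
        # averages each node's neighbourhood features.
        neigh = torch.sparse.mm(adj_norm, x) if adj_norm.is_sparse else adj_norm @ x
        delta = self.noise_net(torch.cat([x, neigh], dim=-1))
        return x + self.strength * delta  # augmented features
```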


Summary

INTRODUCTION

Graph neural networks (GNNs) [1] have been widely used for representation learning on graph-structured data due to their superior performance in various applications such as node classification [2]–[4], link prediction [5]–[8] and graph classification [9]–[11]. Our framework learns topology-aware input feature transformations and adaptively controls the strength of augmentation based on a GNN’s performance. It is formulated as a meta-learning problem, and our framework explicitly maximizes the generalization power of graph neural networks that are trained on augmented data.

META-LEARNING FOR GRAPHS
Meta-learning has shown success in diverse tasks [38], and there are some works applying meta-learning to data augmentation on images [39]–[41].

METHOD
We present a novel framework (AugGCR) that learns data Augmentation for GNNs with Consistency Regularization to enhance the generalization power of GNNs. Our framework includes 1) an augmentor that generates augmented input features taking into account graph topologies and 2) a novel learning strategy to alternately train a GNN and the augmentor, avoiding both overfitting and meta-overfitting to scarce label information.
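A first-order sketch of this alternating training is given below. The function names, mask-based data layout, and the KL-based consistency term are assumptions made for illustration; the actual method may differ, for example by backpropagating meta-gradients through the GNN update.

```python
import torch
import torch.nn.functional as F

def train_step(gnn, augmentor, opt_gnn, opt_aug, x, adj, y,
               train_mask, val_mask, unlabeled_mask, lam=1.0):
    """One alternating update: train the GNN on augmented features,
    then train the augmentor on held-out loss plus consistency."""
    # 1) Update the GNN on augmented input features (augmentor frozen).
    x_aug = augmentor(x, adj).detach()
    loss_gnn = F.cross_entropy(gnn(x_aug, adj)[train_mask], y[train_mask])
    opt_gnn.zero_grad()
    loss_gnn.backward()
    opt_gnn.step()

    # 2) Update the augmentor: minimise the loss on held-out labeled nodes
    #    plus a consistency term between predictions on original and
    #    augmented features for unlabeled nodes.
    x_aug = augmentor(x, adj)
    logits_aug = gnn(x_aug, adj)
    loss_val = F.cross_entropy(logits_aug[val_mask], y[val_mask])
    with torch.no_grad():
        logits_orig = gnn(x, adj)
    loss_cons = F.kl_div(
        F.log_softmax(logits_aug[unlabeled_mask], dim=-1),
        F.softmax(logits_orig[unlabeled_mask], dim=-1),
        reduction="batchmean",
    )
    loss_aug = loss_val + lam * loss_cons
    opt_aug.zero_grad()
    loss_aug.backward()  # GNN grads also populate; cleared on the next step
    opt_aug.step()
```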

