Abstract

Graph convolutional networks (GCNs) suffer from the over-smoothing problem, which causes most current GCN models to be shallow. Shallow GCNs can only exploit a small fraction of the nodes and edges in the graph, which leads to over-fitting. In this paper, we propose a semi-supervised training method that addresses this problem and greatly improves the performance of GCNs. First, we propose an integrated data augmentation framework that performs effective augmentations on graph-structured data. Then, a consistency loss, an entropy-minimization loss, and a graph loss are introduced to help the GCN make full use of unlabeled nodes and edges, alleviating the model's excessive dependence on labeled nodes. Extensive experiments on three widely used citation datasets demonstrate that our method achieves state-of-the-art performance on the semi-supervised node classification problem. In particular, we obtain $85.52\%$ accuracy on Cora with the public split.
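To make the loss terms named above concrete, below is a minimal PyTorch-style sketch of how a supervised cross-entropy term, a consistency term between two augmented views, and an entropy-minimization term on unlabeled nodes might be combined. This is only an illustration under stated assumptions: the function name `semi_supervised_loss`, the squared-error form of the consistency term, and the weighting coefficients are hypothetical, and the paper's graph loss is not shown.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(logits_view1, logits_view2, labels, labeled_mask,
                         lambda_consistency=1.0, lambda_entropy=0.1):
    """Hypothetical combination of the loss terms named in the abstract:
    supervised cross-entropy on labeled nodes, consistency between two
    augmented views of the graph, and entropy minimization on unlabeled nodes.
    The exact forms and weights used in the paper may differ."""
    # Supervised cross-entropy on the labeled subset only.
    sup = F.cross_entropy(logits_view1[labeled_mask], labels[labeled_mask])

    # Consistency loss: predictions from two augmentations should agree
    # (squared error between softmax outputs is one common choice).
    p1 = F.softmax(logits_view1, dim=-1)
    p2 = F.softmax(logits_view2, dim=-1)
    consistency = F.mse_loss(p1, p2)

    # Entropy minimization: push predictions on unlabeled nodes
    # toward confident (low-entropy) distributions.
    unlabeled_mask = ~labeled_mask
    p_u = F.softmax(logits_view1[unlabeled_mask], dim=-1)
    entropy = -(p_u * torch.log(p_u + 1e-12)).sum(dim=-1).mean()

    return sup + lambda_consistency * consistency + lambda_entropy * entropy
```

In practice, `logits_view1` and `logits_view2` would be the GCN's outputs on two independently augmented copies of the same graph, and `labeled_mask` is a boolean mask selecting the labeled training nodes.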
