Abstract

The Graph Attention Network (GAT) is one of the state-of-the-art architectures for Graph Neural Networks (GNNs). In this paper, we first propose Label Purity to explore the relationship between graph attention and node labels. By tracking the label purity of graph attention, we observe that graph attention suppresses message passing between inter-class nodes in graphs with homophily. Building on this observation, we further improve graph attention's capability to capture label information by designing a self-supervised graph attention loss. We propose a novel approach based on this loss, the Co-propagation Self-supervised Graph Attention Network (CopGAT), which explicitly restrains message passing between inter-class nodes. Co-propagation refers to propagating node representations in Euclidean space and pseudo labels on the probability simplex via a shared attention matrix, unifying Message Passing Neural Networks and the Label Propagation Algorithm under the graph attention framework. Constrained by the self-supervised graph attention loss generated through co-propagation, CopGAT enhances GAT's ability to attenuate the influence of structural noise. Furthermore, the self-supervised graph attention loss can be extended to other graph attention models in a plug-and-play manner. Experimental results demonstrate that CopGAT outperforms previous state-of-the-art methods on five benchmark datasets for the semi-supervised node classification task.
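
To make the co-propagation idea concrete, the sketch below applies one GAT-style attention step to both node features and soft pseudo labels through a shared attention matrix, and derives a purity-style self-supervised loss from the pseudo labels. This is a minimal illustration under assumed forms: the names (`copropagate`, `attention_purity_loss`), the dense single-head attention, and the exact loss expression are hypothetical and not taken from the paper's implementation.

```python
import torch
import torch.nn.functional as F

def copropagate(h, y_soft, adj, W, a_src, a_dst):
    """One illustrative co-propagation step (assumed form, not the paper's code).

    h      : [N, F]  node features in Euclidean space
    y_soft : [N, C]  pseudo labels on the probability simplex (rows sum to 1)
    adj    : [N, N]  boolean adjacency mask (True where an edge exists)
    """
    z = h @ W                                             # linear projection
    # GAT-style attention logits: e_ij = LeakyReLU(a_src^T z_i + a_dst^T z_j)
    logits = F.leaky_relu((z @ a_src).unsqueeze(1) + (z @ a_dst).unsqueeze(0), 0.2)
    logits = logits.masked_fill(~adj, float("-inf"))      # restrict to graph edges
    att = torch.softmax(logits, dim=1)                    # shared attention matrix
    h_next = att @ z                                      # message passing on features
    y_next = att @ y_soft                                 # label propagation; rows stay on the simplex
    return h_next, y_next, att

def attention_purity_loss(att, y_soft, eps=1e-9):
    """Hypothetical self-supervised attention loss: push attention mass toward
    neighbors whose pseudo labels agree with the center node's."""
    agreement = y_soft @ y_soft.t()                       # estimated P(i and j share a class)
    purity = (att * agreement).sum(dim=1)                 # per-node label purity of attention
    return -(purity.clamp_min(eps)).log().mean()

# Minimal usage example with random data.
N, F_in, F_out, C = 8, 16, 8, 3
h = torch.randn(N, F_in)
y_soft = torch.softmax(torch.randn(N, C), dim=1)          # soft pseudo labels
adj = torch.rand(N, N) < 0.4
adj |= torch.eye(N, dtype=torch.bool)                     # self-loops keep every softmax row finite
W = torch.randn(F_in, F_out, requires_grad=True)
a_src = torch.randn(F_out, requires_grad=True)
a_dst = torch.randn(F_out, requires_grad=True)

h_next, y_next, att = copropagate(h, y_soft, adj, W, a_src, a_dst)
loss = attention_purity_loss(att, y_soft)
loss.backward()                                           # gradients flow into the attention parameters
```

Because the pseudo labels travel through the same attention matrix as the features, the purity loss supervises the attention weights directly, which mirrors the plug-and-play constraint on graph attention that the abstract describes.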
