Abstract

Graph Neural Networks (GNNs) have achieved significant success in a variety of applications, but they struggle when labeled nodes are scarce. A self-supervised learning paradigm has recently emerged that enables GNN training without any labeled nodes and can even surpass GNNs trained with limited labeled data. However, self-supervised methods yield node representations that lack class discrimination, since no label information is available during training. In this paper, we present a supervised graph contrastive learning (SGCL) framework to address the problem of limited labeled nodes, ensuring that nodes of the same class are grouped coherently. We propose augmentation techniques based on a novel centrality function that highlights important topological structures. Additionally, we introduce a supervised contrastive learning method that eliminates the need for negative samples, simplifying the framework. Our approach combines a supervised contrastive loss with node similarity regularization to group unlabeled nodes consistently with labeled ones. Furthermore, we employ pseudo-labeling to propagate label information to distant nodes and mitigate underfitting, particularly for low-degree nodes. Experimental results on real-world graphs demonstrate that SGCL outperforms both semi-supervised and self-supervised methods on node classification.
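To make the negative-free supervised contrastive objective mentioned above concrete, the following is a minimal sketch of one plausible formulation: same-class nodes across two augmented views are treated as positive pairs, and their cosine similarity is maximized without any negative pairs. The function name, tensor shapes, and the exact form of the loss are illustrative assumptions for this sketch, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def negative_free_supcon_loss(z1: torch.Tensor,
                              z2: torch.Tensor,
                              labels: torch.Tensor) -> torch.Tensor:
    """Hypothetical negative-free supervised contrastive loss.

    z1, z2: [N, d] node embeddings from two augmented graph views.
    labels: [N] class labels (ground-truth or pseudo-labels).
    """
    # Project embeddings onto the unit sphere so dot products are cosines.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    # same_class[i, j] = 1 if nodes i and j share a class
    # (cross-view self-pairs i == j are included as positives).
    same_class = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    # Cosine similarity between every cross-view node pair.
    sim = z1 @ z2.t()
    # Maximize similarity over positive (same-class) pairs only;
    # no negative pairs enter the objective.
    return -(sim * same_class).sum() / same_class.sum()
```

Under this reading, pulling same-class pairs together suffices because positives are defined by labels rather than by instance identity; pseudo-labels for unlabeled nodes could simply be fed in through the same `labels` argument.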
