Abstract

Despite the impressive results that supervised graph representation learning has achieved across many areas of graph machine learning, the limited availability of labeled training data has become a performance bottleneck. Transfer learning has been proposed as an effective remedy: pre-training methods are designed in an unsupervised manner to learn representations, which are then adapted to downstream tasks with limited labeled data. However, transfer learning can suffer from negative transfer when there is a major gap between the objectives of pre-training and those of the downstream tasks. To overcome these challenges, we introduce a novel framework, Graph Prompt Learning-Graph Neural Network (GPL-GNN), that narrows the gap between different tasks. GPL-GNN pre-trains structural representations with unsupervised methods that require no labeled data, and injects these representations into downstream tasks as prompt information. This prompt information is combined with the downstream data to train GNNs, adapting them to the downstream tasks and yielding more adaptive, task-specific representations. Furthermore, because GPL-GNN learns graph representations without requiring model consistency between pre-training and fine-tuning, it offers greater flexibility in choosing task-specific GNNs. In addition, introducing prototype networks as classification heads enables GPL-GNN to adapt quickly to downstream tasks. Finally, we conduct extensive experiments on a benchmark dataset to demonstrate the effectiveness of GPL-GNN. The code is available at: https://github.com/chenzihaoww/GPL-GNN.
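
As a concrete illustration of the two mechanisms the abstract names, the minimal sketch below concatenates frozen pre-trained structural prompts with downstream node features inside a GNN layer and classifies the resulting embeddings with a prototype-network head. It assumes PyTorch, and every name in it (PromptedGNN, prototype_logits, the toy dimensions) is an illustrative assumption, not the authors' implementation.

# Minimal sketch of the GPL-GNN idea described above (assumes PyTorch).
# All names and dimensions are illustrative assumptions, not the authors'
# actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptedGNN(nn.Module):
    """Combines downstream node features with pre-trained structural prompts."""
    def __init__(self, feat_dim, prompt_dim, hidden_dim):
        super().__init__()
        # Any task-specific GNN layer could sit here; a single linear
        # message-passing step keeps the sketch self-contained.
        self.lin = nn.Linear(feat_dim + prompt_dim, hidden_dim)

    def forward(self, x, prompts, adj):
        # Concatenate node features with (frozen) pre-trained prompt vectors,
        # then propagate over the adjacency matrix.
        h = torch.cat([x, prompts], dim=-1)
        return F.relu(adj @ self.lin(h))

def prototype_logits(query_emb, support_emb, support_y, num_classes):
    # Prototype-network head: each class prototype is the mean embedding of
    # its labeled support nodes; logits are negative squared distances.
    protos = torch.stack([support_emb[support_y == c].mean(dim=0)
                          for c in range(num_classes)])
    return -torch.cdist(query_emb, protos) ** 2

# Toy usage: 6 nodes, 4 input features, 3-dim prompts, 2 classes.
n, f, p, d, c = 6, 4, 3, 8, 2
adj = torch.eye(n)                      # trivial adjacency for the demo
x, prompts = torch.randn(n, f), torch.randn(n, p)
y = torch.tensor([0, 1, 0, 1, 0, 1])
model = PromptedGNN(f, p, d)
emb = model(x, prompts, adj)
loss = F.cross_entropy(prototype_logits(emb, emb, y, c), y)

Because the prototype head is parameter-free given the support embeddings, moving to a new downstream task only requires recomputing class prototypes rather than training a new classifier, which is one way to realize the quick adaptation the abstract claims.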
