Abstract

As a self-supervised learning method, graph contrastive learning achieves admirable performance in graph pre-training tasks and can be fine-tuned for multiple downstream tasks such as protein structure prediction, social recommendation, etc. One prerequisite for graph contrastive learning is access to large graphs during training. However, graph data today are distributed across various devices and held by different owners, such as the smart devices in the Internet of Things. Considering the non-negligible costs of computation, storage, and communication, as well as data privacy concerns, these devices often prefer to keep data locally, which significantly degrades graph contrastive learning performance. In this paper, we propose a novel federated graph contrastive learning framework. First, it updates node embeddings during training by means of a federation method, allowing the local GCL to acquire anchors with richer information. Second, we design a Self-adaptive Cluster-based server strategy to select the optimal embedding update scheme, which maximizes the richness of the embedding information while avoiding interference from noise. In general, our method builds anchors with richer information through a federated learning approach, thus alleviating the performance degradation of graph contrastive learning caused by distributed storage. Extensive analysis and experimental results demonstrate the superiority of our framework.
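The abstract only sketches the server-side strategy at a high level, so the following is a minimal illustrative sketch (not the authors' implementation) of how a cluster-based selection of client embedding updates might look: clients upload node-embedding updates, the server clusters them, keeps the dominant cluster to suppress noisy updates, and averages the rest into global anchor embeddings. The function name `aggregate_embeddings` and the specific choice of k-means are assumptions made for illustration only.

```python
# Hypothetical sketch of a cluster-based server aggregation step for a
# federated GCL setting: keep only the majority cluster of client updates
# so that noisy or outlying embeddings do not pollute the shared anchors.
import numpy as np
from sklearn.cluster import KMeans

def aggregate_embeddings(client_embeddings, n_clusters=2):
    """client_embeddings: list of (num_nodes, dim) arrays, one per client,
    aligned on the same shared node IDs."""
    # Flatten each client's update into a single vector for clustering.
    flat = np.stack([e.reshape(-1) for e in client_embeddings])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(flat)
    # Keep the largest cluster, assumed to hold the non-noisy updates.
    majority = np.bincount(labels).argmax()
    kept = [e for e, lab in zip(client_embeddings, labels) if lab == majority]
    # Average the retained updates to form the global anchor embeddings.
    return np.mean(np.stack(kept), axis=0)

# Example: 5 clients, 100 shared nodes, 64-dimensional embeddings.
updates = [np.random.randn(100, 64) for _ in range(5)]
global_anchors = aggregate_embeddings(updates)  # broadcast back to clients
```

In this sketch the returned `global_anchors` would be sent back to each client and used as the anchor views in its local contrastive loss; the actual update-scheme selection in the paper is self-adaptive and may differ substantially from this fixed k-means heuristic.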
