Abstract

Distributed processing and analysis of large-scale graph data remain challenging because of the high degree of discrepancy among graphs. This study investigates a novel subproblem: distributed multi-task learning on graphs, which jointly learns multiple analysis tasks from decentralized graphs. We propose a federated multi-task graph learning (FMTGL) framework that solves the problem in a privacy-preserving and scalable manner. Its core consists of an innovative data-fusion mechanism and a low-latency distributed optimization method. The former captures the relatedness of multi-source data and generates a universal task representation for local task analysis. The latter enables fast updates of our framework through gradient sparsification and tree-based aggregation. As a theoretical result, the proposed optimization method attains a convergence rate that interpolates between \( \mathcal{O}(1/T) \) and \( \mathcal{O}(1/\sqrt{T}) \), up to logarithmic factors. Unlike previous studies, our analysis covers the convergence behavior under adaptive stepsize selection and a non-convex assumption. Experimental results on three graph datasets verify the effectiveness and scalability of FMTGL.
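As a rough illustration of the two ingredients of the optimization method, the sketch below combines a hypothetical top-k gradient sparsification step with pairwise (tree-based) aggregation across workers. The function names, the top-k rule, and the toy dimensions are assumptions for illustration only, not the paper's exact scheme.

```python
# Illustrative sketch (not the paper's implementation): top-k gradient
# sparsification followed by tree-based aggregation across workers.
# All function and variable names here are hypothetical.
import numpy as np

def sparsify_topk(grad: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k largest-magnitude entries of the gradient."""
    sparse = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse[idx] = grad[idx]
    return sparse

def tree_aggregate(grads: list) -> np.ndarray:
    """Sum worker gradients pairwise, so aggregation depth is O(log n)."""
    while len(grads) > 1:
        next_level = []
        for i in range(0, len(grads) - 1, 2):
            next_level.append(grads[i] + grads[i + 1])
        if len(grads) % 2 == 1:          # carry an unpaired gradient upward
            next_level.append(grads[-1])
        grads = next_level
    return grads[0]

# Toy usage: 4 workers, 10-dimensional gradients, keep the top-3 entries each.
rng = np.random.default_rng(0)
worker_grads = [sparsify_topk(rng.standard_normal(10), k=3) for _ in range(4)]
aggregated = tree_aggregate(worker_grads)
```

Sparsification reduces the volume of communicated gradients, while the pairwise aggregation keeps the number of aggregation rounds logarithmic in the number of workers; together they account for the low-latency updates claimed in the abstract.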
