Unsupervised multi-source domain transfer in the federated scenario has become an emerging research direction: it allows an unlabeled target domain to obtain an adapted model from multiple source domains while preserving privacy. However, when the local data are graphs, domain differences (i.e., data heterogeneity) originate mainly from differences in node attributes and subgraph structures, leading to severe model drift, which existing related algorithms do not consider. This scenario currently poses two challenges: (1) node representations extracted directly by conventional GNNs lack generalized, inter-domain-consistent information, making existing federated learning algorithms difficult to apply; and (2) the knowledge from different source domains varies in quality, which may cause negative transfer. To address these issues, we propose a novel two-phase Federated Graph Transfer Learning (FGTL) framework. In the generalization phase, FGTL uses local contrastive learning and global context embedding to force node representations to capture generalized, inter-domain-consistent information, alleviating model drift in a lightweight manner. In the transfer phase, FGTL uses consensus knowledge to force the classifier's decision boundary to adapt to the target client. In addition, FGTL+ exploits model grouping to generate consensus knowledge more efficiently, further enhancing the scalability of FGTL. Extensive experiments show that FGTL significantly outperforms state-of-the-art related methods, while FGTL+ further strengthens privacy protection and reduces both communication and computation overhead.
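The abstract does not specify the exact form of the local contrastive objective in the generalization phase. Purely as a loose illustration, the sketch below shows one common instantiation of node-level graph contrastive learning: an InfoNCE loss between two edge-dropout views of a client's local graph. All function names, the toy graph, and hyperparameters here are assumptions for illustration, not the paper's actual formulation.

```python
# Illustrative sketch (NOT the paper's exact method): node-level InfoNCE
# contrastive learning between two augmented views of a local graph,
# one plausible form of the "local contrastive learning" step.
import torch
import torch.nn.functional as F

def gcn_layer(x, adj_norm, weight):
    """One graph-convolution step: normalized adjacency times features."""
    return torch.relu(adj_norm @ x @ weight)

def drop_edges(adj, p=0.2):
    """Random edge dropout, a common stochastic graph augmentation."""
    mask = (torch.rand_like(adj) > p).float()
    return adj * mask

def sym_normalize(adj):
    """Symmetric normalization D^{-1/2} A D^{-1/2}."""
    d_inv_sqrt = adj.sum(1).clamp(min=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

def node_infonce(z1, z2, tau=0.5):
    """InfoNCE: the same node in the two views is the positive pair."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau           # (N, N) similarity matrix
    labels = torch.arange(z1.size(0))    # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Toy local graph: 6 nodes, 4 features; symmetric adjacency with self-loops.
torch.manual_seed(0)
n, d, h = 6, 4, 8
x = torch.randn(n, d)
adj = (torch.rand(n, n) > 0.6).float()
adj = ((adj + adj.t()) > 0).float() + torch.eye(n)
w = torch.randn(d, h, requires_grad=True)

# Two stochastic views of the same graph -> two sets of node embeddings.
z1 = gcn_layer(x, sym_normalize(drop_edges(adj)), w)
z2 = gcn_layer(x, sym_normalize(drop_edges(adj)), w)
loss = node_infonce(z1, z2)
loss.backward()
print(f"contrastive loss: {loss.item():.4f}")
```

In a federated pipeline of the kind the abstract describes, a loss of this shape would be computed locally on each client, so raw graph data never leaves the device; only model parameters or derived knowledge are communicated.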