Abstract

Federated learning is a widely used distributed framework for large-scale learning, in which a model is trained over massively distributed remote devices without sharing the data stored on those devices. It faces at least three key challenges: heterogeneity in federated networks, privacy, and communication cost. In this paper, we propose three federated learning algorithms that address these issues progressively. First, we introduce the FedSfDane algorithm (DANE with a Shrinkage factor for Federated learning), which improves the inexact approximation of the full gradient, captures statistical heterogeneity, and mitigates systems heterogeneity across devices. To avoid potential privacy leakage in federated learning, we then propose a Privacy-preserving FedSfDane (PFedSfDane) algorithm, which is resistant to adversarial attacks. Finally, we present a novel Communication-efficient PFedSfDane (CPFedSfDane) algorithm for large-scale federated networks that effectively handles all three challenges. We provide convergence guarantees for the three algorithms on both convex and non-convex learning problems. Numerical experiments show that our algorithms outperform the FedDANE, FedAvg, and FedProx algorithms, especially on highly heterogeneous federated networks. CPFedSfDane improves the prediction accuracy of the state-of-the-art FedDANE algorithm by about 15.0% on the sent140 dataset, while offering strong privacy protection and high communication efficiency.
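To make the DANE-style update concrete, the following is a minimal sketch of one FedDANE-style communication round with a shrinkage factor. The abstract does not specify where FedSfDane applies the shrinkage factor; here it is assumed, purely for illustration, to scale the gradient-correction term of the DANE local subproblem, and the names (`fed_round`, `local_subproblem`, `QuadraticDevice`, `alpha`) are hypothetical rather than the paper's definitions.

```python
# Hedged sketch of one FedDANE-style round with a shrinkage factor.
# ASSUMPTION: the shrinkage factor `alpha` damps the gradient-correction
# term of the DANE local subproblem; this placement is illustrative only.
import numpy as np

class QuadraticDevice:
    """Toy device with local objective f_i(w) = 0.5 * ||w - c||^2."""
    def __init__(self, c):
        self.c = np.asarray(c, dtype=float)

    def grad(self, w):
        return w - self.c

def local_subproblem(w_t, grad_i, grad_full, grad_fn, alpha, mu,
                     lr=0.01, steps=50):
    """Inexactly solve the DANE-style local subproblem
        min_w f_i(w) - <grad_i - alpha * grad_full, w> + (mu/2)||w - w_t||^2
    with a few gradient steps (an inexact local solver)."""
    w = w_t.copy()
    for _ in range(steps):
        g = grad_fn(w) - (grad_i - alpha * grad_full) + mu * (w - w_t)
        w -= lr * g
    return w

def fed_round(w_t, devices, alpha=0.5, mu=0.1):
    """One round: aggregate an inexact full-gradient estimate from the
    sampled devices, solve the local subproblems, average the solutions."""
    grads = [d.grad(w_t) for d in devices]       # per-device gradients
    grad_full = np.mean(grads, axis=0)           # inexact full gradient
    updates = [local_subproblem(w_t, g_i, grad_full, d.grad, alpha, mu)
               for d, g_i in zip(devices, grads)]
    return np.mean(updates, axis=0)              # server-side averaging

if __name__ == "__main__":
    devices = [QuadraticDevice([1.0, 0.0]), QuadraticDevice([0.0, 1.0])]
    w = np.zeros(2)
    for _ in range(30):
        w = fed_round(w, devices)
    print(w)  # approaches [0.5, 0.5], the minimizer of the averaged objective
```

On the toy quadratic problem above, each round reduces to a damped gradient step on the global objective, so the iterates converge to the average of the device optima; the privacy and communication mechanisms of PFedSfDane and CPFedSfDane are not modeled in this sketch.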
