Abstract

Federated learning (FL) is a communication-efficient machine learning paradigm that leverages distributed data at the network edge. Nevertheless, FL often fails to train a high-quality model in networks where edge nodes collect noisily labeled data. To tackle this challenge, this paper develops a robust FL framework. We consider two kinds of networks with different data distributions. First, we design a reweighted FL scheme for a full-data network, where every edge node holds both a large noisily labeled dataset and a small clean dataset. The key idea is that edge devices learn to assign local weights to the loss functions of their noisily labeled samples and cooperate with the central server to update global weights. Second, we consider a part-data network in which some edge nodes lack a clean dataset and therefore cannot compute the weights locally. Broadcasting of the global weights is added so that edge nodes without clean data can still reweight their noisy loss functions. Both designs have a convergence rate of $\mathcal{O}(1/T^{2})$. Simulation results show that both proposed training processes improve prediction accuracy owing to the proper weight assignment of noisy loss functions.
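
The abstract does not specify how the local weights are computed, so the following is only a minimal sketch of one plausible local reweighting step, not the paper's exact algorithm: a client holding a small clean dataset scores each noisy sample by how much a one-step update on that sample reduces the clean-set loss, then blends its local weights with server-broadcast global weights before taking a reweighted gradient step. All function and variable names (e.g., `local_weights`, `w_global`) are illustrative assumptions.

```python
# Sketch of a local reweighted update in noisy-label FL (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_weights(theta, X_noisy, y_noisy, X_clean, y_clean, eps=0.01):
    """Weight each noisy sample by how much a one-step update on it
    reduces the clean-set loss (a meta-learning-style heuristic)."""
    w = np.zeros(len(y_noisy))
    base = np.mean((sigmoid(X_clean @ theta) - y_clean) ** 2)  # clean loss
    for i, (x, y) in enumerate(zip(X_noisy, y_noisy)):
        grad = (sigmoid(x @ theta) - y) * x        # per-sample gradient
        trial = theta - eps * grad                 # trial one-step update
        new = np.mean((sigmoid(X_clean @ trial) - y_clean) ** 2)
        w[i] = max(base - new, 0.0)                # keep only helpful samples
    return w / (w.sum() + 1e-12)                   # normalize to a simplex

# Toy data: 2-D logistic model with 20% of noisy labels flipped.
theta = rng.normal(size=2)
X_noisy = rng.normal(size=(50, 2))
y_noisy = (X_noisy @ np.array([1.0, -1.0]) > 0).astype(float)
flip = rng.random(50) < 0.2
y_noisy[flip] = 1 - y_noisy[flip]
X_clean = rng.normal(size=(10, 2))
y_clean = (X_clean @ np.array([1.0, -1.0]) > 0).astype(float)

w_local = local_weights(theta, X_noisy, y_noisy, X_clean, y_clean)
w_global = np.full_like(w_local, 1.0 / len(w_local))  # stand-in broadcast
w = 0.5 * w_local + 0.5 * w_global                    # client/server blend
grads = (sigmoid(X_noisy @ theta) - y_noisy)[:, None] * X_noisy
theta -= 0.1 * (w[:, None] * grads).sum(axis=0)       # reweighted step
```

In the part-data setting described above, a client without clean data would simply skip `local_weights` and use the broadcast `w_global` alone, which is exactly why the global-weight broadcast is added to the design.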
