Abstract

Federated Learning (FL) is a machine learning approach in which a group of clients collaboratively trains a model without sharing any client's local data. Because each client's dataset is typically noisy and statistically heterogeneous, a single global model cannot easily learn from all of them, especially deep networks, which are prone to overfitting on biased training data. This study proposes an adaptive sample weighting algorithm based on self-paced learning (SPL). Instead of weighting all samples equally, the algorithm assigns each sample a weight according to its impact on the global model. To achieve this, each client objective is defined as the sum of the weighted empirical risks and a regularizer on the sample weights. The final global model is obtained by alternately optimizing the model parameters and the sample weights. By applying an implicit SPL regularizer, we derive an analytic formula for the optimal sample weights used in the experiments. We show that the algorithm converges and that our method is more stable and accurate than federated averaging and its variants. In particular, when 30% of the training data is corrupted, the test accuracy of our method is up to 50% higher than that achieved by federated averaging.
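As a sketch of the weighting scheme described above: the abstract does not specify the exact implicit regularizer, so the hard-threshold rule shown here, which follows from the classical SPL regularizer $f(v;\lambda) = -\lambda \sum_i v_i$, is an illustrative assumption rather than the paper's formula. A self-paced client objective and its closed-form weight update can be written as

$$
\min_{\theta,\; v \in [0,1]^{n}} \; \sum_{i=1}^{n} v_i\,\ell_i(\theta) + f(v;\lambda),
\qquad
v_i^{*} =
\begin{cases}
1, & \ell_i(\theta) < \lambda,\\
0, & \text{otherwise},
\end{cases}
$$

where $\ell_i(\theta)$ is the empirical risk of sample $i$ on a client, $v_i$ its weight, and $\lambda$ the self-paced "age" parameter. The model parameters $\theta$ and the weights $v$ are optimized alternately, and only the weighted risks contribute to the global aggregation.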
