Abstract
Federated Learning (FL) is a machine learning paradigm in which a group of clients collaboratively trains a model without any client sharing its local data. Because each client's dataset is typically noisy and statistically heterogeneous, such data are difficult for a global model to learn, especially for deep networks, which are prone to overfitting on biased training data. This study proposes an adaptive sample weighting algorithm based on self-paced learning (SPL). Instead of weighting all samples equally, the algorithm assigns each sample a weight according to its impact on the global model. To this end, each client's objective is defined as the sum of the weighted empirical risks and a regularizer on the sample weights. The final global model is obtained by alternately optimizing the model parameters and the sample weights. By applying an implicit SPL regularizer, we derive an analytic formula for the optimal sample weights used in the experiments. We show that the algorithm converges and that our method is more stable and accurate than federated averaging (FedAvg) and its variants. In particular, when 30% of the training data is corrupted, the test accuracy of our method is up to 50% higher than that of FedAvg.
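To make the alternating scheme described above concrete, the following minimal sketch shows one client's update under SPL-style sample weighting. It assumes the classic hard SPL regularizer f(v; lam) = -lam * sum(v), whose closed-form optimal weights are v_i = 1 if loss_i < lam and 0 otherwise; the paper instead uses an implicit regularizer, so this variant, the linear least-squares loss, and all names (per_sample_losses, optimal_weights, client_update, lam) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def per_sample_losses(w, X, y):
    """Per-sample squared error of a linear model (illustrative loss)."""
    return (X @ w - y) ** 2

def optimal_weights(losses, lam):
    """Closed-form SPL weights under the hard regularizer:
    keep a sample (weight 1) only if its loss is below the threshold lam."""
    return (losses < lam).astype(float)

def client_update(w, X, y, lam, lr=0.01, steps=20):
    """Alternating optimization on one client:
    fix the model w and solve for the sample weights v in closed form,
    then fix v and take a gradient step on the weighted empirical risk."""
    for _ in range(steps):
        v = optimal_weights(per_sample_losses(w, X, y), lam)
        residual = v * (X @ w - y)                 # corrupted samples get v = 0
        grad = 2 * X.T @ residual / max(v.sum(), 1.0)
        w = w - lr * grad
    return w
```

In a FedAvg-style loop, each client would run client_update on its local data and the server would average the returned parameters; gradually increasing lam over rounds admits harder samples as training progresses, which is the usual self-paced schedule.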