Abstract

Federated Learning enables data owners to collaboratively train an artificial intelligence model while keeping all training data local, reducing the risk of personal data breaches. However, the heterogeneity of local resources and the dynamic nature of federated learning systems pose new challenges that hinder the development of federated learning techniques. To this end, we propose an Adaptive Asynchronous Federated Learning scheme with Momentum, called FedAAM, comprising an adaptive weight allocation algorithm and a novel asynchronous federated learning framework. First, we dynamically allocate weights for the global model update using an adaptive weight allocation strategy that improves the convergence rate of models in asynchronous federated learning systems. Second, to address the challenges above, we propose two new asynchronous global update rules based on a differentiated strategy, which is an essential component of the proposed framework. Furthermore, our asynchronous federated learning framework introduces the historical global update direction (i.e., global momentum) into the global update operation to improve training efficiency. Moreover, we prove that the model under the FedAAM scheme achieves a sublinear convergence rate. Extensive experiments on real-world datasets demonstrate that FedAAM outperforms representative synchronous and asynchronous federated learning schemes (i.e., FedAvg and FedAsync) in terms of the model's convergence rate and its ability to cope with dynamic systems.
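
To make the kind of update the abstract describes concrete, the following Python sketch shows an asynchronous server-side step that combines a staleness-aware adaptive weight with a global momentum term. This is a minimal illustration under assumptions, not the authors' FedAAM algorithm: the function name server_update, the 1/(1 + staleness) weighting, and the coefficient beta are all hypothetical choices standing in for the paper's adaptive weight allocation strategy and global momentum rule.

```python
def server_update(global_model, client_update, staleness,
                  momentum_buffer, base_lr=1.0, beta=0.9):
    """Hypothetical asynchronous global update with momentum.

    global_model, client_update, and momentum_buffer are dicts mapping
    parameter names to values; staleness counts how many global rounds
    have passed since the client pulled the model it trained on.
    """
    # Adaptive weight: down-weight stale client updates. This is one
    # common staleness-based choice; FedAAM's actual allocation differs.
    alpha = base_lr / (1.0 + staleness)

    for name, w in global_model.items():
        # The client's contribution, measured against the current model.
        delta = client_update[name] - w
        # Blend the historical global update direction (global momentum)
        # with the freshly weighted client delta.
        momentum_buffer[name] = beta * momentum_buffer[name] + alpha * delta
        global_model[name] = w + momentum_buffer[name]

    return global_model, momentum_buffer


# Toy usage with a single scalar "parameter": a client update arriving
# three rounds late is applied with weight 1/(1+3) = 0.25.
gm, mom = {"w": 0.0}, {"w": 0.0}
gm, mom = server_update(gm, {"w": 1.0}, staleness=3, momentum_buffer=mom)
print(gm["w"])  # 0.25
```

The design intuition this sketch captures is the one the abstract states: weighting each arriving update by its staleness keeps slow clients from dragging the global model backward, while the momentum buffer reuses the historical global direction so that training progresses even between fresh arrivals.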
