Abstract

A key feature of federated learning (FL) is that not all clients participate in every communication epoch of each global model update. The rationale for such partial client selection is largely to reduce the communication overhead. However, in many cases, the unselected clients are still able to compute their local model updates but are not "authorized" to upload them in that round, which wastes computation capacity. In this work, we propose an algorithm FedUmf—Federated Learning with Unauthorized Model Fusion—that utilizes the model updates from the unselected clients. More specifically, a client computes the stochastic gradient descent (SGD) update even if it is not selected to upload in the current communication epoch. Then, if this client is selected in the next round, it non-trivially merges the outdated SGD update stored from the previous round with the current global model before it starts to compute the new local model. A rigorous convergence analysis is established for FedUmf, which shows a faster convergence rate than the vanilla FedAvg. Comprehensive numerical experiments on several standard classification tasks demonstrate its advantages, which corroborate the theoretical results.
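The following is a minimal sketch of the client-side step described above, written to make the mechanism concrete. It assumes a simple fusion rule (apply the stored stale gradient once to the freshly received global model before local training); the abstract does not specify FedUmf's actual fusion rule, so the function name, the least-squares local objective, and the merge step are all illustrative assumptions rather than the paper's definitive algorithm.

```python
import numpy as np

def fedumf_client_round(global_weights, stale_grad, X, y, lr=0.1):
    """Sketch of one round on a client that was unselected last round
    but is selected now.

    stale_grad: the local SGD gradient computed (but not uploaded) in the
    previous round, or None if no stale update is stored.
    NOTE: the merge rule below is an assumption for illustration only.
    """
    w = global_weights.copy()
    if stale_grad is not None:
        # Fuse the "unauthorized" stale update into the current global model.
        w -= lr * stale_grad
    # Local SGD step on a placeholder least-squares objective.
    grad = X.T @ (X @ w - y) / len(y)
    w -= lr * grad
    # Return the new local model and the gradient, which the client would
    # store locally if it turns out to be unselected in the next round.
    return w, grad

# Tiny usage example with synthetic data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(32, 5)), rng.normal(size=32)
w_global = np.zeros(5)
w_local, g = fedumf_client_round(w_global, stale_grad=None, X=X, y=y)
```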
