Abstract

Local Stochastic Gradient Descent (SGD) with periodic model averaging (FedAvg) is a foundational algorithm in Federated Learning. The algorithm runs SGD independently on multiple clients and periodically averages the models across all clients. This periodic model averaging can cause significant model discrepancy across the clients, making the global loss converge slowly. While recent advanced optimization methods tackle this issue with a focus on non-IID settings, the model discrepancy problem persists because of the underlying periodic model averaging. We propose a partial model averaging framework that mitigates the model discrepancy issue in Federated Learning. The partial averaging encourages the local models to stay close to each other in parameter space, which enables more effective minimization of the global loss. We extensively evaluate the performance of the partial averaging strategy on the CIFAR-10/100 and FEMNIST benchmarks. Given a fixed number of training iterations and a large number of clients (128), partial averaging achieves up to 2.2% higher accuracy than periodic full averaging.
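To make the contrast concrete, below is a minimal NumPy sketch, not the paper's exact algorithm, comparing periodic full averaging against one plausible partial-averaging scheme that synchronizes a rotating slice of the parameter vector at every iteration. The quadratic toy objective, the client count, the slicing schedule, and the names `local_grad` and `run` are all illustrative assumptions; the sketch only demonstrates how staggered partial synchronization can keep the peak discrepancy between local models lower than periodic full averaging at a matched average communication volume.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLIENTS = 8     # the paper scales this up to 128 clients
DIM = 100           # toy model: a flat parameter vector
LR = 0.05
STEPS = 200
AVG_PERIOD = 10     # full averaging: sync all DIM params every 10 steps
NUM_SLICES = 10     # partial averaging: sync DIM/10 params every step,
                    # so the average per-step communication volume matches

# Toy heterogeneous objective: client k minimizes ||w - t_k||^2 for its
# own target t_k, so the local optima disagree (a non-IID stand-in).
targets = rng.normal(size=(NUM_CLIENTS, DIM))

def local_grad(w, k):
    """Stand-in for client k's stochastic gradient (here: exact)."""
    return 2.0 * (w - targets[k])

def run(partial):
    """Run local SGD and return the peak model discrepancy, i.e. the
    largest mean distance of the local models from their average."""
    models = np.zeros((NUM_CLIENTS, DIM))
    peak = 0.0
    for t in range(STEPS):
        # Independent local SGD step on every client.
        for k in range(NUM_CLIENTS):
            models[k] -= LR * local_grad(models[k], k)
        if partial:
            # Partial averaging (illustrative): each iteration, replace one
            # rotating slice of parameters with its cross-client mean.
            width = DIM // NUM_SLICES
            s = t % NUM_SLICES
            idx = slice(s * width, (s + 1) * width)
            models[:, idx] = models[:, idx].mean(axis=0)
        elif (t + 1) % AVG_PERIOD == 0:
            # Periodic full averaging (FedAvg): all parameters at once.
            models[:] = models.mean(axis=0)
        disc = np.linalg.norm(models - models.mean(axis=0), axis=1).mean()
        peak = max(peak, disc)
    return peak

print(f"peak discrepancy, periodic full averaging: {run(partial=False):.3f}")
print(f"peak discrepancy, partial averaging      : {run(partial=True):.3f}")
```

In this toy setting, every coordinate is still averaged once per AVG_PERIOD steps under both schemes, but the staggered slices prevent all coordinates from drifting for the full period at once, so the discrepancy curve stays flatter instead of spiking just before each full synchronization.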
