Abstract

Emerging Federated Learning (FL) enables IoT devices to collaboratively learn a shared model from their local datasets. However, the heterogeneity of end devices magnifies FL's inherent synchronization-barrier problem and results in non-negligible waiting time when all local models are trained with an identical batch size. This idle waiting further strains devices' limited battery life. Herein, we aim to alleviate the negative impact of the synchronization barrier by adapting the batch size during model training. Since devices then train with different batch sizes, the stability and convergence of the global model must be preserved by assigning an appropriate learning rate to each device. We therefore first study the relationship between batch size and learning rate, and formulate a scaling rule that sets the learning rate according to the batch size. We then theoretically analyze the convergence rate of the global model and derive an upper bound on it. On this basis, we propose an efficient algorithm that adaptively adjusts each heterogeneous device's batch size, with a correspondingly scaled learning rate, to reduce waiting time and save battery life. Extensive simulations and testbed experiments demonstrate the effectiveness of our method.
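The core idea, adapting per-device batch sizes to device speed while scaling the learning rate accordingly, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes a linear scaling rule (learning rate proportional to batch size) and equalizes per-iteration wall-clock time by giving faster devices larger batches; the function names and parameters are hypothetical.

```python
def scaled_lr(base_lr, base_batch, batch):
    """Linear scaling rule (assumed): scale the learning rate
    proportionally to the batch size relative to a reference batch."""
    return base_lr * batch / base_batch

def assign_batch_sizes(times_per_sample, total_batch):
    """Split a total batch budget across devices so that each device's
    per-iteration time (time_per_sample * batch) is roughly equal:
    batch size is made proportional to device speed (1 / time_per_sample)."""
    speeds = [1.0 / t for t in times_per_sample]
    total_speed = sum(speeds)
    return [max(1, round(total_batch * s / total_speed)) for s in speeds]

# Example: two devices, the second is twice as slow per sample.
batches = assign_batch_sizes([0.01, 0.02], total_batch=96)   # -> [64, 32]
lrs = [scaled_lr(0.1, base_batch=32, batch=b) for b in batches]
```

With these assignments, both devices spend about the same time per local iteration (0.01 * 64 = 0.02 * 32), which is the waiting-time reduction the abstract describes.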
