Abstract

Model update compression is a widely used technique to alleviate the communication cost of federated learning (FL). However, there is evidence that compression-based FL systems often suffer from two issues: i) degradation of the global model's learning performance caused by inaccurate model updates, and ii) the limitation of imposing a single shared compression rate on heterogeneous edge devices. In this paper, we propose an energy-efficient learning framework, named Snowball, that enables edge devices to incrementally upload their model updates in a coarse-to-fine compression manner. To this end, we first design a fine-grained compression scheme that enables a nearly continuous compression rate. We then formulate the Snowball optimization problem, which minimizes the energy consumption of parameter transmission subject to learning performance constraints. Leveraging theoretical insights from the convergence analysis, we transform the optimization problem into a tractable form. We then design a water-filling algorithm to solve it, in which each device is assigned a personalized compression rate according to its locally available resources. Experiments indicate that, compared to state-of-the-art FL algorithms, our learning framework reduces the uplink communication energy required to reach a good global accuracy by a factor of five.
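To give a concrete sense of the coarse-to-fine incremental upload idea, the following is a minimal sketch based on magnitude-based top-k sparsification; it is an illustrative assumption, not the paper's exact fine-grained compression scheme, and the function names and fraction schedule are hypothetical.

```python
# Minimal sketch (not the paper's exact scheme): coarse-to-fine incremental
# upload of a model update via magnitude-based top-k sparsification.
# `coarse_to_fine_chunks` and the fraction schedule are illustrative only.
import numpy as np

def coarse_to_fine_chunks(update, fractions):
    """Split a flat model update into incremental sparse chunks.

    `fractions` is an increasing schedule of cumulative compression rates,
    e.g. (0.01, 0.05, 0.2): the first chunk carries the largest-magnitude 1%
    of entries, the next carries the following 4%, and so on. Summing the
    chunks received so far reconstructs an increasingly accurate update.
    """
    flat = update.ravel()
    order = np.argsort(-np.abs(flat))      # indices sorted by magnitude
    chunks, sent = [], 0
    for frac in fractions:
        k = int(frac * flat.size)
        idx = order[sent:k]                # next band of coefficients
        chunks.append((idx, flat[idx]))    # sparse (index, value) pairs
        sent = k
    return chunks

def reconstruct(chunks, shape):
    """Server side: accumulate whatever chunks have arrived so far."""
    flat = np.zeros(int(np.prod(shape)))
    for idx, vals in chunks:
        flat[idx] = vals
    return flat.reshape(shape)

# A device sends the coarse chunk first and finer chunks only if its
# energy budget (e.g., as set by a water-filling allocation) allows.
update = np.random.randn(10_000)
chunks = coarse_to_fine_chunks(update, fractions=(0.01, 0.05, 0.2))
coarse = reconstruct(chunks[:1], update.shape)   # 1% of entries
finer = reconstruct(chunks[:3], update.shape)    # 20% of entries
print(np.linalg.norm(update - coarse), np.linalg.norm(update - finer))
```

In such a scheme, each additional chunk only refines what was already sent, so a device with a tighter resource budget can simply stop earlier, which is consistent with assigning personalized, nearly continuous compression rates across heterogeneous devices.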
