Abstract

Federated learning is a powerful distributed machine learning paradigm for aggregating features and learning from multiple heterogeneous edge devices, owing to its ability to preserve data privacy. However, training on heterogeneous devices is inefficient and incurs considerable communication overhead. Progressive learning is a promising approach for improving training efficiency. Because progressive learning partitions the training process into multiple stages, the number of rounds allocated to each stage must be determined so as to balance the trade-off between saving energy and improving model accuracy. Through pilot experiments, we find that the profile reflecting the relationship between round allocation and model quality remains similar across different hyper-parameter configurations, and we also observe that model quality is lossless as long as the complete model receives sufficient training. Based on these observations, we formulate an optimization problem that minimizes the energy consumption of all devices under a model-quality constraint. We then design a polynomial-time algorithm for this problem. Experimental results demonstrate the superiority of our proposed algorithm under various settings.
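
As a rough illustration, a round-allocation formulation of this kind might be written as below. This is a minimal sketch, not the paper's actual formulation: the symbols S (number of stages), N (number of devices), r_s (rounds in stage s), e_{i,s} (per-round energy of device i in stage s), Q (model quality as a function of the allocation), and Q_0 (required quality) are all assumed here for illustration.

% Hypothetical sketch of the energy-minimizing round allocation;
% symbols are illustrative and not taken from the paper.
\begin{aligned}
\min_{r_1,\dots,r_S}\ & \sum_{s=1}^{S} \sum_{i=1}^{N} e_{i,s}\, r_s \\
\text{s.t.}\ & Q(r_1,\dots,r_S) \ge Q_0, \\
& r_s \in \mathbb{Z}_{\ge 0}, \quad s = 1,\dots,S,
\end{aligned}

where the objective sums the per-round energy of every device over all stages, and the constraint requires the final model to reach the target quality.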
