Abstract

As edge computing faces increasingly severe data security and privacy issues on edge devices, federated edge learning (FEL) has recently been proposed to enable machine learning (ML) model training at the edge while preserving communication efficiency and data privacy for edge devices. In this paradigm, training efficiency has long been challenged by the heterogeneity of communication conditions, computing capabilities, and available data sets across devices. Existing work addresses this challenge through device selection that optimizes either energy consumption or convergence speed. However, considering either objective alone is insufficient to guarantee long-term system efficiency and stability. To fill this gap, we formulate a device-selection optimization problem for FEL that simultaneously minimizes the total energy consumption of selected devices and maximizes the convergence speed of the global model, under constraints on training data amount and time consumption. For accurate calculation of energy consumption, we deploy online bandit learning to estimate the CPU-cycle frequency availability of each device; based on these estimates, an efficient algorithm, named fast-convergent energy-efficient device selection (FCE2DS), is proposed to solve the optimization problem with low time complexity. Through a series of comparative experiments, we evaluate the proposed FCE2DS scheme and verify its high training accuracy and energy efficiency.
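The abstract does not state the formulation itself. As one plausible reading, the bi-objective selection problem could be scalarized as follows, where every symbol (the candidate set N, selected subset S, per-device energy E_i, data size D_i, latency T_i, convergence surrogate V, weight lambda, and thresholds D_min, T_max) is a hypothetical placeholder rather than the paper's notation:

```latex
% Hypothetical notation, not taken from the paper:
%   \mathcal{N} = candidate devices, \mathcal{S} = selected subset,
%   E_i = energy cost, D_i = local data size, T_i = per-round latency,
%   V(\mathcal{S}) = convergence-speed surrogate, \lambda = trade-off weight.
\min_{\mathcal{S} \subseteq \mathcal{N}} \;
  \lambda \sum_{i \in \mathcal{S}} E_i \;-\; (1 - \lambda)\, V(\mathcal{S})
\quad \text{s.t.} \quad
  \sum_{i \in \mathcal{S}} D_i \ge D_{\min}, \qquad
  \max_{i \in \mathcal{S}} T_i \le T_{\max}
```

Similarly, the online bandit estimation of CPU-cycle frequency availability could follow a standard UCB1 scheme. The sketch below is a generic illustration under that assumption, not the paper's FCE2DS algorithm; the class name, normalization, and simulated feedback are all invented for the example:

```python
import math
import random

class FrequencyUCB:
    """Generic UCB1 bandit tracking each device's CPU-cycle frequency
    availability, normalized to [0, 1]. Illustrative only; the paper's
    actual estimator and FCE2DS selection rule are not reproduced here."""

    def __init__(self, num_devices: int):
        self.counts = [0] * num_devices   # times each device was selected
        self.means = [0.0] * num_devices  # running mean of observed availability
        self.t = 0                        # total selection events so far

    def score(self, i: int) -> float:
        """Optimistic (upper-confidence) availability estimate for device i."""
        if self.counts[i] == 0:
            return float("inf")           # force each device to be tried once
        bonus = math.sqrt(2.0 * math.log(self.t) / self.counts[i])
        return self.means[i] + bonus

    def update(self, i: int, observed: float) -> None:
        """Fold in the frequency availability observed after a training round."""
        self.t += 1
        self.counts[i] += 1
        self.means[i] += (observed - self.means[i]) / self.counts[i]

# Toy usage: each round, select the k devices with the highest optimistic
# estimates, then feed back the (here, simulated) observed availability.
bandit = FrequencyUCB(num_devices=10)
for _ in range(50):
    chosen = sorted(range(10), key=bandit.score, reverse=True)[:3]
    for i in chosen:
        bandit.update(i, observed=random.uniform(0.2, 1.0))  # placeholder feedback
```

The exploration bonus shrinks as a device accumulates observations, so energy estimates based on scarce or stale frequency samples are gradually corrected over rounds.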
