Abstract

The surge of deep learning brings new vigor and vitality to shaping the prospects of the intelligent Internet of Things (IoT), and the rise of edge intelligence enables the provisioning of real-time deep neural network (DNN) inference services for mobile users. Federated learning has been envisioned as an ideal paradigm for efficient and effective DNN model training in edge computing environments while preserving the security and privacy of the training data on IoT devices. In this paper, we study energy-aware DNN model training in edge computing. We first formulate a novel energy-aware, Device-to-Device (D2D) assisted federated learning problem that aims to minimize the global loss of the DNN model being trained, subject to the bandwidth capacity of the edge server and the energy capacity of each IoT device. We then devise a near-optimal learning algorithm for the problem when the training data follows an i.i.d. distribution. The crux of the proposed algorithm is to exploit the energy of each device's neighboring devices for uploading its local model, by reducing the problem to a series of weighted maximum matching problems in corresponding auxiliary graphs. We also consider the problem without the i.i.d. assumption and propose an efficient heuristic algorithm for it. We finally evaluate the performance of the proposed algorithms through experimental simulations. The results show that the proposed algorithms are promising.
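To make the matching-based idea concrete, the sketch below illustrates one plausible reading of the auxiliary-graph step: in each round, energy-constrained devices are paired with neighboring helper devices that have enough residual energy to relay their local model updates, and the pairing is chosen by a maximum weight matching. All names (assign_relays, d2d_energy_cost, upload_energy_cost) and the edge-weight heuristic are our own illustrative assumptions, not the paper's exact construction.

```python
import networkx as nx


def assign_relays(devices, energy, d2d_energy_cost, upload_energy_cost, neighbors):
    """Illustrative sketch: pair devices with D2D helpers via max weight matching.

    devices            -- iterable of device ids needing help to upload
    energy             -- dict: device id -> residual energy budget
    d2d_energy_cost    -- dict: (u, v) -> energy for sending u's model to neighbor v
    upload_energy_cost -- dict: v -> energy for v to upload one model to the edge server
    neighbors          -- dict: device id -> list of D2D-reachable neighbor ids
    """
    G = nx.Graph()
    for u in devices:
        for v in neighbors.get(u, []):
            # Helper v is eligible only if its residual energy covers both the
            # D2D transfer from u and the upload of u's model to the edge server.
            needed = d2d_energy_cost[(u, v)] + upload_energy_cost[v]
            if energy[v] >= needed:
                # Assumed heuristic: favor helpers left with more spare energy.
                G.add_edge(("dev", u), ("helper", v), weight=energy[v] - needed)

    # The maximum weight matching decides which helper relays which device's update.
    matching = nx.max_weight_matching(G, weight="weight")
    pairs = ((a, b) if a[0] == "dev" else (b, a) for a, b in matching)
    return {dev[1]: helper[1] for dev, helper in pairs}
```

Running such a matching once per training round, with residual energies updated after each round, would yield the "series" of matching problems the abstract refers to; the actual construction of the auxiliary graphs and edge weights is given in the full paper.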
