Abstract

Federated Learning (FL), as an effective decentralized approach, has attracted considerable attention in privacy-preserving applications for wireless edge networks. In practice, edge devices are typically limited in energy, memory, and computation capability. In addition, communication between the central server and the edge devices is constrained in resources such as power and bandwidth. In this paper, we propose a joint sparsification and optimization scheme to reduce the energy consumed by local training and data transmission. On the one hand, we introduce sparsification, which yields a large number of zero weights in the sparse neural networks, to alleviate the devices' computational burden and reduce the volume of data to be uploaded. To handle the non-smoothness incurred by sparsification, we develop an enhanced stochastic gradient descent algorithm that improves the learning performance. On the other hand, we optimize power, bandwidth, and learning parameters to avoid communication congestion and enable energy-efficient transmission between the central server and the edge devices. With these two components deployed jointly, numerical results show that the overall energy consumption of FL can be significantly reduced compared with a benchmark FL using fully-connected neural networks.
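The abstract does not describe the authors' enhanced stochastic gradient descent algorithm in detail. As a rough illustration of how sparsification and the non-smoothness it introduces are commonly handled in local training, the sketch below uses proximal SGD with an L1 penalty and soft-thresholding; the function names, hyperparameters, and toy data are illustrative assumptions, not the paper's method.

```python
import numpy as np

def soft_threshold(w, thresh):
    """Proximal operator of the L1 norm: shrinks weights toward zero,
    setting small ones exactly to zero (this is what creates sparsity)."""
    return np.sign(w) * np.maximum(np.abs(w) - thresh, 0.0)

def proximal_sgd_step(w, grad, lr, l1_lambda):
    """One local training step: a plain gradient step on the smooth loss,
    followed by the proximal step that handles the non-smooth L1 penalty."""
    w = w - lr * grad                          # gradient step (smooth part)
    return soft_threshold(w, lr * l1_lambda)   # prox step (non-smooth part)

# Toy example: sparse linear regression on random data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(64, 20)), rng.normal(size=64)
w = np.zeros(20)
for _ in range(200):
    grad = X.T @ (X @ w - y) / len(y)          # gradient of 0.5 * MSE
    w = proximal_sgd_step(w, grad, lr=0.05, l1_lambda=0.1)

print(f"fraction of exactly-zero weights: {np.mean(w == 0.0):.2f}")
```

The exact zeros produced by the proximal step are what allow the energy savings the abstract claims: zero weights can be skipped in local computation, and only the nonzero weights (with their indices) need to be uploaded to the central server.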

