Abstract

Federated learning (FL) allows multiple edge computing nodes to jointly build a shared learning model without having to transfer their raw data to a centralized server, thus reducing communication overhead. However, FL still faces a number of challenges such as nonindependent and identically distributed data and heterogeneity of user equipments (UEs). Enabling a large number of UEs to join the training process in every round raises a potential issue of the heavy global communication burden. To address these issues, we generalize the current state-of-the-art federated averaging (FedAvg) by adding a weight-based proximal term to the local loss function. The proposed FL algorithm runs stochastic gradient descent in parallel on a sampled subset of the total UEs with replacement during each global round. We provide a convergence upper bound characterizing the tradeoff between convergence rate and global rounds, showing that a small number of active UEs per round still guarantees convergence. Next, we employ the proposed FL algorithm in wireless Internet-of-Things (IoT) networks to minimize either total energy consumption or completion time of FL, where a simple yet efficient path-following algorithm is developed for its solutions. Finally, numerical results on unbalanced data sets are provided to demonstrate the performance improvement and robustness on the convergence rate of the proposed FL algorithm over FedAvg. They also reveal that the proposed algorithm requires much less training time and energy consumption than the FL algorithm with full user participation. These observations advocate the proposed FL algorithm for a paradigm shift in bandwidth-constrained learning wireless IoT networks.
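The abstract's key modification to FedAvg is a weight-based proximal term added to each UE's local loss, which penalizes local models for drifting far from the current global model. The paper does not reproduce its exact formulation here, so the following is a minimal sketch under assumed conventions: a least-squares data loss, a proximal coefficient `mu`, and hypothetical helper names (`local_loss`, `local_sgd_step`).

```python
import numpy as np

def local_loss(w, w_global, X, y, mu=0.1):
    """Sketch of a local objective: data loss plus a weight-based
    proximal term (mu/2)*||w - w_global||^2 that discourages the
    local model from drifting away from the global model.
    (Illustrative least-squares loss; not the paper's exact form.)"""
    residual = X @ w - y
    data_loss = 0.5 * np.mean(residual ** 2)
    proximal = 0.5 * mu * np.sum((w - w_global) ** 2)
    return data_loss + proximal

def local_sgd_step(w, w_global, X, y, mu=0.1, lr=0.01):
    """One gradient step on the proximal local objective."""
    grad_data = X.T @ (X @ w - y) / len(y)
    grad_prox = mu * (w - w_global)
    return w - lr * (grad_data + grad_prox)
```

When `w == w_global` the proximal term vanishes, so the penalty only activates as local training diverges from the global model; larger `mu` ties local updates more tightly to it.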

Highlights

  • Nowadays, Internet of Things (IoT) and mobile devices are often equipped with advanced sensors and high computing capabilities that allow them to collect and process vast amounts of data generated at the network edge [1]–[4]

  • We have proposed an efficient federated learning (FL) algorithm relying on a weight-based proximal term, an extension of federated averaging (FedAvg), to tackle the heterogeneity of both data and device characteristics across user equipments (UEs) in federated networks

  • The proposed FL algorithm allows a small number of UEs per round to participate in the training process via an unbiased sampling strategy


Summary

INTRODUCTION

Nowadays, Internet of Things (IoT) and mobile devices are often equipped with advanced sensors and high computing capabilities that allow them to collect and process vast amounts of data generated at the network edge [1]–[4]. The extensive data of IoT devices are usually collected in private environments and are privacy sensitive in nature, so it is generally not practical to send all data to a centralized server/cloud center to train a deep learning model. In FL, the server instead broadcasts the current global model; UEs compute local updates based on their available data and send their local models back to the server. These steps are repeated until a certain level of global model accuracy is achieved. There remain a number of challenges in implementing FL, such as nonindependent and identically distributed (non-iid) data across the network and high communication costs due to sending massive local model updates, which are tackled in this article.
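The training loop described above, combined with the abstract's per-round sampling of a subset of UEs with replacement and parallel local SGD, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the least-squares local loss, the proximal coefficient `mu`, and the helper name `fl_round` are all assumptions.

```python
import numpy as np

def fl_round(w_global, datasets, num_sampled=5, local_steps=10,
             lr=0.01, mu=0.1, rng=None):
    """One global round of the sketched FL scheme: the server samples
    UEs uniformly with replacement, each sampled UE runs SGD on its
    proximal local objective, and the server averages the returned
    models into the next global model."""
    rng = np.random.default_rng() if rng is None else rng
    # Unbiased sampling with replacement over all UEs
    sampled = rng.integers(0, len(datasets), size=num_sampled)
    local_models = []
    for k in sampled:
        X, y = datasets[k]
        w = w_global.copy()
        for _ in range(local_steps):
            # Gradient of data loss plus the proximal penalty
            grad = X.T @ (X @ w - y) / len(y) + mu * (w - w_global)
            w -= lr * grad
        local_models.append(w)
    # Server aggregates the sampled UEs' models by simple averaging
    return np.mean(local_models, axis=0)
```

Because sampling is with replacement and uniform, each round's aggregate is an unbiased estimate of full participation, which is the property the paper's convergence bound exploits to let only a few UEs be active per round.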

Review of Related Literature
Motivation and Main Contributions
Paper Organization and Mathematical Notation
Network Model
Loss Function
Proposed FL Algorithm Design
Convergence Analysis
PROPOSED FL-ENABLED RESOURCE OPTIMIZATION OVER WIRELESS IOT NETWORKS
System Model
Problem Formulation
Proposed Path-Following Algorithm
Numerical Results for the Proposed FL Algorithm
Model and Loss Function
CONCLUSION
