Abstract

Federated learning provides a promising paradigm for enabling network edge intelligence in future sixth generation (6G) systems. However, due to the high dynamics of wireless environments and user behavior, the collected training data are non-independent and identically distributed (non-IID), which severely degrades federated learning performance. To address this problem, this paper studies federated learning with non-IID data in wireless networks. Firstly, based on a derived upper bound on the expected weight divergence, a federated averaging scheme is proposed to reduce the distribution divergence of non-IID data. Secondly, to further mitigate the distribution divergence, data sharing is integrated with federated learning in wireless networks, and a joint optimization algorithm is designed to strike a balance between model accuracy and cost. Finally, simulation results on a widely used image dataset are provided to evaluate the proposed schemes, which achieve significant performance gains at a small cost in latency and energy consumption.
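
For context on the aggregation step this work builds on, the following is a minimal sketch of plain federated averaging (FedAvg), in which the server forms a data-size-weighted average of client model parameters. This is only the generic baseline, not the paper's proposed divergence-aware scheme or data-sharing algorithm; the function and variable names are illustrative.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Server-side FedAvg aggregation: average each client's model
    parameters, weighted by the client's local dataset size."""
    total = sum(client_sizes)
    # Accumulate into zero tensors shaped like the first client's parameters.
    global_weights = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n_k in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += (n_k / total) * w
    return global_weights

# Toy usage: three clients, each holding one 2x2 weight matrix.
# Unequal sizes mimic the non-IID, unbalanced data considered in the paper.
clients = [[np.random.randn(2, 2)] for _ in range(3)]
sizes = [100, 300, 600]
print(fedavg_aggregate(clients, sizes))
```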
