Abstract

In this paper, the optimization of network performance to support the deployment of federated learning (FL) is investigated. In the considered model, each user trains a machine learning (ML) model on its own dataset and transmits the resulting ML parameters to a base station (BS), which aggregates them into a global ML model and sends it back to each user. Due to limited radio frequency (RF) resources, the number of users that can participate in FL is restricted. Meanwhile, the uploading and downloading of FL parameters by each user increases communication costs, further reducing the number of participating users. To this end, we propose to introduce visible light communication (VLC) as a supplement to RF and to use compression methods to reduce the resources needed to transmit FL parameters over wireless links, thereby improving communication efficiency while optimizing the wireless network through user selection and resource allocation. This joint user selection and bandwidth allocation problem is formulated as an optimization problem whose goal is to minimize the FL training loss. We first use a model compression method to reduce the size of the FL model parameters transmitted over wireless links. Then, the optimization problem is separated into two subproblems. The first is a user selection problem under a given bandwidth allocation, solved by a traversal algorithm. The second is a bandwidth allocation problem under a given user selection, solved by a numerical method. The final user selection and bandwidth allocation are obtained by iteratively compressing the model and solving these two subproblems. Simulation results show that, compared to a conventional FL algorithm using only RF, the proposed FL algorithm can improve object recognition accuracy by up to 16.7% and increase the number of selected users by up to 68.7%.
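The "compress before transmit" step mentioned above can be illustrated with a minimal sketch. The paper's actual compression method is not detailed in this abstract, so the uniform 8-bit quantizer below is only an assumed stand-in showing how parameter size can be reduced before uplink transmission:

```python
# Hypothetical stand-in for model compression: uniform quantization of a
# parameter vector to 8-bit integer levels before wireless transmission.

def quantize(params, bits=8):
    """Map floats to integer levels in [0, 2^bits - 1]; return levels plus range info."""
    lo, hi = min(params), max(params)
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = [round((p - lo) / scale) for p in params]
    return q, lo, scale

def dequantize(q, lo, scale):
    """BS side: recover approximate parameters from the received integer levels."""
    return [lo + qi * scale for qi in q]

params = [0.0, 0.5, 1.0, -1.0]
q, lo, scale = quantize(params)
recovered = dequantize(q, lo, scale)
# Each value now costs 8 bits instead of 32, at a bounded accuracy loss of scale/2.
```

Under this scheme the uplink payload shrinks roughly fourfold versus 32-bit floats, which is the kind of saving that lets more users share the limited RF/VLC bandwidth.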

Highlights

  • Federated learning (FL), which allows edge devices to cooperatively train a shared machine learning model without transmitting private data, is an emerging distributed machine learning technique [1,2]

  • The first subproblem is a user selection problem with a given bandwidth allocation, which is solved by a traversal algorithm

  • The second subproblem is a bandwidth allocation problem with a given user selection, which is solved by a numerical method
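The two subproblems above are solved alternately. As a hedged illustration (the cost model, function names, and proportional bandwidth rule below are assumptions, not the paper's formulation), the traversal step can be an exhaustive search over user subsets under a fixed bandwidth allocation, followed by a closed-form reallocation for the chosen users:

```python
import itertools

def transmission_cost(demand, bandwidth):
    # Toy proxy for per-user communication cost: rate demand over allocated bandwidth.
    return demand / bandwidth

def select_users(demands, bandwidths, budget):
    """Traversal step: pick the largest user subset whose total cost fits the budget."""
    best = ()
    for r in range(1, len(demands) + 1):
        for subset in itertools.combinations(range(len(demands)), r):
            cost = sum(transmission_cost(demands[i], bandwidths[i]) for i in subset)
            if cost <= budget and len(subset) > len(best):
                best = subset
    return best

def allocate_bandwidth(demands, subset, total_bw):
    """Allocation step (here: proportional to demand) for the selected users only."""
    total_demand = sum(demands[i] for i in subset)
    return {i: total_bw * demands[i] / total_demand for i in subset}

demands = [2.0, 1.0, 3.0, 1.5]
subset = select_users(demands, [1.0] * 4, budget=5.0)
alloc = allocate_bandwidth(demands, subset, total_bw=10.0)
```

In the paper's scheme, these two steps are iterated together with model compression until the user selection and bandwidth allocation stop changing.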


Introduction

Federated learning (FL), which allows edge devices to cooperatively train a shared machine learning model without transmitting private data, is an emerging distributed machine learning technique [1,2]. The FL training process must iteratively transmit machine learning parameters over wireless links, so dynamic wireless channels and imperfect transmission significantly affect FL performance. Moreover, limited communication resources restrict the number of users that can participate in FL.
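The iterative exchange described above can be sketched as a single FL communication round: each selected user runs local training, uploads its parameters, and the BS aggregates them. This is a minimal FedAvg-style sketch under assumed names; the placeholder "training" is one gradient step on a toy linear least-squares model, not the paper's actual learning task:

```python
def local_update(weights, data, lr=0.1):
    """User side: one gradient step of least squares on (x, y) pairs stands in for local training."""
    grad = [0.0] * len(weights)
    for x, y in data:
        err = sum(w * xi for w, xi in zip(weights, x)) - y
        for j, xi in enumerate(x):
            grad[j] += 2 * err * xi / len(data)
    return [w - lr * g for w, g in zip(weights, grad)]

def aggregate(updates, sizes):
    """BS side: dataset-size-weighted average of the uploaded parameter vectors."""
    total = sum(sizes)
    dim = len(updates[0])
    return [sum(n * u[j] for u, n in zip(updates, sizes)) / total for j in range(dim)]

global_w = [0.0, 0.0]
datasets = [[([1.0, 0.0], 1.0)], [([0.0, 1.0], 2.0), ([1.0, 1.0], 3.0)]]
for _ in range(3):  # three communication rounds over the wireless links
    uploads = [local_update(global_w, d) for d in datasets]
    global_w = aggregate(uploads, [len(d) for d in datasets])
```

Every round both the uploads and the broadcast of `global_w` traverse the wireless links, which is why channel quality and bandwidth allocation directly shape FL convergence.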
