Abstract

In the federated learning (FL) paradigm, edge devices use their local datasets to participate in machine learning model training, while servers are responsible for aggregating and maintaining the shared global model. FL not only alleviates the bandwidth bottleneck of centralized training but also protects data privacy. However, heterogeneous edge devices find it difficult to achieve optimal learning performance due to their limited computing and communication resources. Specifically, in each round of the FL global aggregation process, clients in a ‘strong group’ have a greater chance of contributing their local training results, while clients in a ‘weak group’ have little opportunity to participate, which negatively impacts the final training result. In this paper, we consider the federated learning multi-client selection (FL-MCS) problem, which is NP-hard. To find the optimal solution, we model client participation in the FL global aggregation process as a potential game, in which each client selfishly decides whether to participate based on its efforts and rewards. Using the potential game formulation, we prove that the competition among clients eventually reaches a stationary state, i.e., a Nash equilibrium. We also design a distributed heuristic FL multi-client selection algorithm that achieves the maximum reward for each client within a finite number of iterations. Extensive numerical experiments demonstrate the effectiveness of the algorithm.
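To make the game-theoretic mechanism concrete, below is a minimal, self-contained sketch of best-response dynamics in a participation game of the kind the abstract describes. The per-round reward pool, the equal reward-sharing rule, and the per-client effort costs are illustrative assumptions, not the paper's actual utility model; the sketch only shows why selfish participation decisions converge to a Nash equilibrium in finitely many steps.

```python
# Illustrative sketch only: the equal reward-sharing rule and the effort
# costs below are assumptions for demonstration, not the paper's utility model.

TOTAL_REWARD = 10.0                    # per-round reward pool split among participants (assumed)
COSTS = [0.8, 1.2, 2.5, 3.0, 4.5]      # per-client effort cost of local training (assumed)

def utility(i, profile):
    """Client i's payoff: an equal share of the reward pool minus its
    effort cost if it participates, zero otherwise."""
    if not profile[i]:
        return 0.0
    return TOTAL_REWARD / sum(profile) - COSTS[i]

# Best-response dynamics: each client unilaterally flips its decision
# whenever doing so strictly increases its own payoff. Because this game
# admits an exact potential function, the loop terminates at a Nash
# equilibrium after finitely many improvement steps.
profile = [False] * len(COSTS)
changed = True
while changed:
    changed = False
    for i in range(len(COSTS)):
        trial = profile.copy()
        trial[i] = not trial[i]
        if utility(i, trial) > utility(i, profile) + 1e-9:
            profile = trial
            changed = True

print("Equilibrium participation set:", [i for i, p in enumerate(profile) if p])
```

In this sketch the exact potential is Φ(S) = Σ_{k=1}^{|S|} R/k − Σ_{i∈S} c_i: every unilateral improvement by a client raises Φ by exactly that client's payoff gain, so the number of improvement steps is bounded and the dynamics cannot cycle.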
