Abstract

Federated Learning (FL) is a distributed learning framework that enables multiple clients to train a global model in a privacy-preserving manner. However, clients are often reluctant to engage in the FL training process due to its energy consumption, so practical FL deployments call for an appropriate economic mechanism to attract distributed clients. In this paper, we integrate incentive mechanism design with the client selection problem, and consider accelerating the convergence of the FL model by offering incentives to appropriate clients so that the FL task completes earlier. We design a time-dependent incentive mechanism to encourage more competent clients to participate in FL, i.e., clients with shorter local training time receive higher rewards. With such an incentive mechanism, the edge server only needs to observe the FL completion time to measure each client's contribution, and does not need to know the clients' specific resource configurations. We design a near-optimal greedy-based algorithm to solve this problem for both the Independent and Identically Distributed (IID) and non-IID cases of the local client data distributions. Extensive experiments show that our proposed mechanism is effective and achieves much faster convergence of the global model than the benchmark algorithms.
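To make the time-dependent incentive idea concrete, the following is a minimal sketch, not the paper's actual algorithm: a greedy client-selection heuristic under a reward budget, where the reward offered to a client decreases with its local training time, so faster clients earn more. The reward form, the budget constraint, and all names here are illustrative assumptions.

```python
def time_dependent_reward(train_time, base_reward=10.0, decay=0.5):
    """Hypothetical time-dependent reward: shorter training time -> higher reward."""
    return base_reward / (1.0 + decay * train_time)

def greedy_select(clients, budget):
    """Greedily pick the fastest clients until the total offered
    reward would exceed the budget.

    clients: list of (client_id, local_train_time) pairs.
    Returns (selected_ids, total_reward_paid).
    """
    selected, cost = [], 0.0
    for cid, t in sorted(clients, key=lambda c: c[1]):  # fastest first
        r = time_dependent_reward(t)
        if cost + r > budget:
            continue  # skip clients the remaining budget cannot cover
        selected.append(cid)
        cost += r
    return selected, cost

# Illustrative run: four clients with different local training times.
clients = [("A", 1.0), ("B", 4.0), ("C", 0.5), ("D", 2.0)]
ids, spent = greedy_select(clients, budget=20.0)
```

Here the edge server needs only each client's completion time, not its hardware details, which mirrors the abstract's point that the server can ignore the clients' specific resource configurations.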
