Abstract

Federated learning (FL) is a privacy-preserving machine learning paradigm that enables a model to be trained collaboratively by distributed clients. To accomplish an FL task, the task publisher pays financial incentives to the FL server, which offloads the task to the contributing FL clients. However, it is challenging to design proper incentives for FL clients because the task is trained privately by the clients. This paper proposes a contract theory-based FL task training model that minimizes the incentive budget subject to the clients being individually rational (IR) and incentive compatible (IC) in each FL training round. We design a two-dimensional contract model by formally defining two private types of clients, namely data quality and computation effort. To aggregate the trained models effectively, a contract-based aggregator is proposed. We analyze the feasible and optimal solutions of the proposed contract model. The experimental results demonstrate that the proposed framework and contract model can effectively improve the generalization accuracy of FL tasks. Moreover, the generalization accuracy can be further improved when the contract-based aggregation is applied.
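As a brief illustration of the constraints named in the abstract, the following sketch states the individual rationality (IR) and incentive compatibility (IC) conditions in generic contract-theory notation; the symbols (client type \theta_i, reward R_i, contracted effort f_i, and cost function c) are illustrative assumptions rather than the paper's own notation:

    \text{(IR)}\quad R_i - c(\theta_i, f_i) \ge 0, \qquad \forall i
    \text{(IC)}\quad R_i - c(\theta_i, f_i) \ge R_j - c(\theta_i, f_j), \qquad \forall\, i \ne j

IR ensures each client earns non-negative utility from the contract item designed for its type, while IC ensures no client can profit by selecting an item intended for a different type; in the two-dimensional model, \theta_i would jointly encode a client's data quality and computation effort.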
