Abstract

Federated Learning (FL) enables training a global model without sharing the decentralized raw data stored on multiple devices, thereby protecting data privacy. To enhance the robustness of federated learning, it is essential to reward users appropriately so that high-quality users are motivated to participate in the learning process for as long as possible. Existing incentive mechanisms for federated learning do not take unreliable and malicious users into account; as a result, they reward not only high-quality participants but also malicious and unreliable ones. In this paper, we propose an incentive mechanism for horizontal federated learning systems. To motivate high-quality users to participate in federated learning over the long term, we design the incentive strategy around the principle of compound interest. Meanwhile, under our mechanism a malicious user never obtains more in return than it pays, which deters such users from participating. Experimental results demonstrate the effectiveness of the proposed scheme: without exceeding the task publisher's budget, the higher the accuracy of a user's local model and the longer its participation, the higher its reward. In addition, with the proposed incentive mechanism, the participation rate of malicious users decreased by 65%.
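The abstract's compound-interest idea can be illustrated with a minimal sketch. The function names, the per-round rate, the accuracy threshold, and the budget-rescaling step below are all assumptions for illustration, not the paper's actual formulas: rewards compound with the number of rounds a user participates, scale with local-model accuracy, exclude low-accuracy (unreliable or malicious) users, and are capped by the task publisher's budget.

```python
def round_reward(base, accuracy, rate, rounds):
    """Hypothetical compound-interest reward: base pay scaled by
    local-model accuracy, compounded at `rate` per round participated."""
    return base * accuracy * (1 + rate) ** rounds


def pay_users(users, budget, base=1.0, rate=0.05, min_accuracy=0.5):
    """Pay each user from `users` = {uid: (accuracy, rounds)}.
    Users below `min_accuracy` (a stand-in for malicious/unreliable
    detection) receive nothing, so their return cannot exceed their
    cost of participation. Payouts are rescaled so the total never
    exceeds the task publisher's budget."""
    raw = {
        uid: round_reward(base, acc, rate, t)
        for uid, (acc, t) in users.items()
        if acc >= min_accuracy
    }
    total = sum(raw.values())
    scale = min(1.0, budget / total) if total else 0.0
    return {uid: r * scale for uid, r in raw.items()}


# (local-model accuracy, rounds participated) per hypothetical user id
users = {"u1": (0.9, 10), "u2": (0.7, 10), "u3": (0.9, 2), "mal": (0.2, 10)}
rewards = pay_users(users, budget=5.0)
```

Under these assumptions, a user with higher accuracy or longer participation earns more ("u1" outearns both "u2" and "u3"), while the low-accuracy user "mal" is excluded entirely.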
