Abstract

Federated learning has been increasingly adopted as an effective means to cope with the significant increase in the volume of training data needed for machine learning and to address the privacy concerns in using these data. However, moral hazard may occur when individual data providers (IDPs) use smaller amounts of data or low-quality data to train their local models and submit these low-quality results (gradients) to free-ride on the benefits of federated learning. Federated learning operators therefore face a dilemma between encouraging more IDPs to participate in data sharing and ensuring truthful contributions from IDPs so as to obtain high-quality global training results. This article proposes a spontaneous cooperative data-sharing model to resolve this dilemma. Through an iterated prisoner's dilemma model solved by the zero-determinant (ZD) strategy, we show that the optimal ZD strategy for every IDP is to maximize its training effort when participating in federated learning. Comparisons with other approaches through simulations demonstrate that both the two-IDP case with binary strategies and the multi-IDP case with continuous strategies achieve optimal individual utility and social welfare. The proposed spontaneous cooperative model thus effectively avoids the moral hazard problem in federated learning and provides a viable instrument for the federated learning operator to maximize the performance of the global model without the need to evaluate the quality of local gradients.
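To make the zero-determinant mechanism mentioned above concrete, the sketch below simulates a Press–Dyson ZD "equalizer" strategy in a standard iterated prisoner's dilemma. The payoff values (T=5, R=3, P=1, S=0) and the particular memory-one probability vector are common textbook choices for illustration, not parameters taken from this article: the point is only that a ZD player can unilaterally pin the co-player's long-run payoff, which is the lever the proposed model uses to make full effort the best response.

```python
# Illustrative sketch of a zero-determinant (ZD) "equalizer" strategy in the
# iterated prisoner's dilemma. Payoffs T=5, R=3, P=1, S=0 and the strategy
# vector below are assumed textbook values, not taken from the article.

# A memory-one strategy gives the probability of cooperating after each
# joint outcome (CC, CD, DC, DD), seen from that player's own perspective.
# This equalizer pins the opponent's long-run payoff at exactly 2,
# whatever the opponent plays (Press & Dyson's construction).
ZD_EQUALIZER = (2 / 3, 0.0, 2 / 3, 1 / 3)

def stationary_payoffs(p, q, iters=10_000):
    """Long-run per-round payoffs when memory-one strategy p meets q."""
    # q is given from the column player's own view; swap the CD/DC entries
    # to index it by the row player's outcome labels.
    q = (q[0], q[2], q[1], q[3])
    # Transition matrix over the joint outcomes (CC, CD, DC, DD).
    M = [[p[i] * q[i], p[i] * (1 - q[i]),
          (1 - p[i]) * q[i], (1 - p[i]) * (1 - q[i])] for i in range(4)]
    v = [0.25] * 4
    for _ in range(iters):  # power iteration to the stationary distribution
        v = [sum(v[i] * M[i][j] for i in range(4)) for j in range(4)]
    R, T, S, P = 3, 5, 0, 1
    x = v[0] * R + v[1] * S + v[2] * T + v[3] * P  # row (ZD) player's payoff
    y = v[0] * R + v[1] * T + v[2] * S + v[3] * P  # opponent's payoff
    return x, y

if __name__ == "__main__":
    # Opponents: always-cooperate, always-defect, tit-for-tat.
    for q in [(1, 1, 1, 1), (0, 0, 0, 0), (1, 0, 1, 0)]:
        x, y = stationary_payoffs(ZD_EQUALIZER, q)
        print(f"opponent {q}: ZD gets {x:.3f}, opponent gets {y:.3f}")
```

Against all three opponents the opponent's stationary payoff comes out at 2, even though the ZD player's own payoff varies; this unilateral control over the co-player's payoff is what lets a ZD strategy shape the incentives of the other participants.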
