Abstract

In this paper, we focus on privacy-preserving mechanism design for crowdsourced Federated Learning (FL), where a requester can outsource its model training task to workers via an FL platform. A potential way to preserve the privacy of workers' local data is to apply Differential Privacy (DP) mechanisms to local models. However, most existing studies do not allow workers to determine their own privacy protection levels. Thus, we propose a Personalized Privacy-Preserving Mechanism, called P3M, to satisfy the heterogeneous privacy needs of workers, which consists of two parts. The first part addresses the personalized privacy budget determination problem. We model it as a two-stage Stackelberg game, derive the personalized privacy budget for each worker and the optimal payment for the requester, and prove that they form a unique Stackelberg equilibrium. In the second part, we design a dynamic perturbation scheme to perturb model parameters. Through theoretical analysis, we prove that P3M satisfies the desired DP property, and derive bounds on the variance of the averaged perturbed parameters as well as an upper bound on the convergence. These results show that the global model accuracy is controllable and that P3M achieves satisfactory convergence performance. In addition, we extend our problem to the scenario where the total privacy budget across all workers is limited, so as to prevent some workers from setting exorbitant privacy budgets. Under this privacy constraint, we re-derive the personalized privacy budget for each worker. Finally, extensive simulations of P3M are conducted on real-world datasets, and the experimental results corroborate its effectiveness and practicality.
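To make the core idea of personalized DP perturbation concrete, the following Python sketch shows how each worker might perturb its local model parameters with noise calibrated to its own privacy budget. This is a minimal illustration of the general principle only, assuming a standard Laplace mechanism with a fixed sensitivity; the function name, the example budgets, and the sensitivity value are hypothetical, and the paper's actual dynamic perturbation scheme and Stackelberg-derived budgets are developed in the body of the paper.

```python
import numpy as np

def perturb_parameters(params, epsilon, sensitivity=1.0):
    """Perturb a worker's local model parameters with Laplace noise
    calibrated to that worker's personal privacy budget `epsilon`.

    A smaller epsilon (stronger privacy) yields more noise; a larger
    epsilon (weaker privacy) yields less. Hypothetical sketch, not
    the paper's actual dynamic perturbation scheme.
    """
    scale = sensitivity / epsilon  # Laplace scale b = sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=params.shape)
    return params + noise

# Each worker holds its own privacy budget (fixed values here for
# illustration; in P3M these arise from the Stackelberg equilibrium).
worker_budgets = {"w1": 0.5, "w2": 1.0, "w3": 2.0}
local_params = np.ones(4)  # toy local model parameters

perturbed = {w: perturb_parameters(local_params, eps)
             for w, eps in worker_budgets.items()}

# The requester averages the perturbed parameters into a global model;
# the variance of this average is what the paper bounds theoretically.
global_params = np.mean(list(perturbed.values()), axis=0)
print(global_params)
```

Note that averaging across workers partially cancels the independent per-worker noise, which is why the variance of the averaged perturbed parameters, rather than of any single worker's parameters, governs the global model accuracy.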
