Abstract

Differential privacy (DP) is a widely used technique for enhancing privacy in federated learning (FL) frameworks: noise is added to the datasets or learning parameters to prevent attackers from identifying which client (i.e., the data owner) the sensitive information originated from. More noise implies stronger privacy protection but also reduces model accuracy, and the privacy budget is the key parameter that controls the amount of noise. Most existing methods assign the same privacy budget to all clients, ignoring clients' differing privacy-protection requirements. In this study, we propose PT-ADP, a personalized privacy-preserving federated learning scheme. First, a privacy transaction (PT) mechanism realizes personalized privacy-budget allocation for clients through a "quotation-allocation" process. Second, an adaptive transmission data perturbation (ADP) mechanism saves a large portion of the privacy budget and improves accuracy while still providing sufficient privacy protection. The security, convergence, efficiency, and parameter choices of the proposed scheme are analyzed theoretically and verified through extensive experiments. Compared with other schemes, PT-ADP improves model availability at the same level of privacy protection, without increasing complexity or communication overhead.
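To illustrate the noise-versus-accuracy trade-off governed by the privacy budget, the following is a minimal sketch of the standard Laplace mechanism for perturbing model parameters. This is generic DP background, not PT-ADP's specific mechanism; the function names `laplace_sample` and `perturb` and the unit sensitivity are illustrative assumptions.

```python
import math
import random

def laplace_sample(scale):
    """Draw one Laplace(0, scale) sample via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturb(params, epsilon, sensitivity=1.0):
    """Add Laplace noise with scale sensitivity/epsilon to each parameter.

    A smaller privacy budget epsilon means larger noise, hence stronger
    privacy protection but lower model accuracy; a larger epsilon means
    the opposite. Sensitivity is assumed to be 1.0 for illustration.
    """
    scale = sensitivity / epsilon
    return [p + laplace_sample(scale) for p in params]
```

For example, `perturb(weights, epsilon=0.1)` injects ten times more noise (in expectation) than `perturb(weights, epsilon=1.0)`, which is why allocating a personalized budget per client directly shapes each client's privacy-utility balance.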
