Abstract

Stochastic Gradient Langevin Dynamics (SGLD) is believed to preserve differential privacy as an intrinsic attribute, since it obtains randomness from posterior sampling and its injected noise. In this paper, we propose Differentially Private General Stochastic Gradient Langevin Dynamics (DP-GSGLD), a novel variant of SGLD that realizes gradient estimation in parameter updating through Bayesian sampling. We introduce the technique of parameter clipping and prove that DP-GSGLD satisfies Differential Privacy (DP). We conduct experiments on several image datasets, defending against the gradient attacks that commonly appear in federated learning. The results demonstrate that DP-GSGLD reduces model-training time and achieves higher accuracy at the same privacy level.
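To make the mechanics concrete, the following is a minimal sketch of a standard SGLD update combined with a parameter-clipping step, in the spirit the abstract describes. This is an illustrative assumption, not the paper's DP-GSGLD algorithm: the function names, the L2 clipping rule, and the toy standard-normal target are all hypothetical choices for exposition.

```python
import numpy as np

def clip_by_norm(v, c):
    """Project v onto the L2 ball of radius c (one common clipping rule;
    the paper's exact parameter-clipping rule may differ)."""
    norm = np.linalg.norm(v)
    return v if norm <= c else v * (c / norm)

def sgld_step(theta, grad_log_post, step_size, clip_c, rng):
    """One SGLD update followed by parameter clipping (hypothetical sketch).

    theta         : current parameter vector
    grad_log_post : stochastic estimate of the gradient of the log posterior
    step_size     : SGLD step size epsilon_t
    clip_c        : clipping bound C (assumed; bounds update sensitivity)
    """
    # Gaussian injection noise with variance equal to the step size,
    # as in the standard SGLD update rule.
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    theta_new = theta + 0.5 * step_size * grad_log_post + noise
    # Clipping keeps the parameters in a bounded set, which is what
    # makes a differential-privacy sensitivity analysis possible.
    return clip_by_norm(theta_new, clip_c)

rng = np.random.default_rng(0)
theta = np.zeros(3)
# Toy target: standard normal posterior, so grad log p(theta) = -theta.
for _ in range(1000):
    theta = sgld_step(theta, -theta, step_size=1e-2, clip_c=5.0, rng=rng)
```

After the loop, `theta` is an approximate sample from the toy posterior, and clipping guarantees its norm never exceeds the bound `clip_c`.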
