Generative adversarial networks (GANs) have become hugely popular by virtue of their impressive ability to generate realistic samples. Although GANs alleviate the arduous data-collection problem, their complex model structure makes them prone to memorizing training samples. Thus, GANs may not provide sufficient privacy guarantees, and there is a considerable risk of inadvertently divulging private data. To alleviate this issue, we design a privacy-enhanced GAN based on differential privacy. We first integrate the truncated concentrated differential privacy (tCDP) technique into the GAN to mitigate privacy leakage with a tighter privacy bound. Then, according to the differing privacy demands of users in real-world scenarios, we design two adaptive noise allocation strategies that dynamically inject noise into gradients at each iteration. These strategies provide an intuitive handle for choosing a suitable configuration and achieving an elegant compromise between privacy and utility in distinct scenarios. Furthermore, we offer rigorous analyses from the perspectives of privacy preservation and privacy defense to demonstrate that our algorithm fulfills differential privacy guarantees. Extensive experiments on real-world datasets show that our algorithm generates high-quality samples while achieving an excellent trade-off between model performance and privacy guarantees.
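The per-iteration noise injection described above typically follows the DP-SGD pattern: clip each per-sample gradient, average, then add Gaussian noise whose scale the adaptive strategy may vary across iterations. A minimal sketch of that generic step, assuming NumPy arrays for gradients; the function name and the `clip_norm` and `noise_multiplier` parameters are illustrative, not the paper's actual API:

```python
import numpy as np

def privatize_gradient(per_sample_grads, clip_norm, noise_multiplier, rng):
    """Clip each per-sample gradient to clip_norm, average, and add
    Gaussian noise. An adaptive allocation strategy would adjust
    noise_multiplier from one iteration to the next."""
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian mechanism: noise stddev proportional to the sensitivity
    # of the averaged, clipped gradient (clip_norm / batch size).
    sigma = noise_multiplier * clip_norm / len(per_sample_grads)
    return mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)

rng = np.random.default_rng(0)
grads = [rng.normal(size=4) for _ in range(8)]
noisy = privatize_gradient(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

With `noise_multiplier=0.0` the function reduces to plain clipped averaging, so the output norm is bounded by `clip_norm`, which makes the clipping step easy to verify in isolation.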