Abstract

Differentially private generative adversarial networks (GANs) are a promising direction in data privacy with many practical real-world applications. The goal of a differentially private GAN is to provide differential privacy protection for a sensitive training dataset: the training data are prevented from being memorized or encoded into the GAN's parameters or its generated samples. Generated data can therefore be safely used for data augmentation, or to replace real sensitive data, at very little privacy cost. However, existing methods for differentially private GANs are notoriously inefficient. Most of them rely on a modified TensorFlow library, TensorFlow Privacy provided by Google, which is not only hard to program against but also very time-consuming even on a TPU. In this paper, we present a simpler and more efficient way to achieve differentially private GANs: when the training process of a GAN is set up in a certain manner, the discriminator's loss can serve as the vehicle for achieving differential privacy. Compared with existing methods, our method requires no modification of TensorFlow core functions and does not significantly slow down training. We evaluate our method on GANs trained on the real-world datasets MNIST, Fashion-MNIST, and SVHN; the experimental results show that our method is easier to implement, more practical, and more efficient than existing methods. With these advantages, we believe our method for differentially private GANs will be widely used in the very near future.
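The abstract does not spell out the mechanism, so the following is only a minimal sketch of the general idea of injecting privacy noise through the discriminator's loss instead of through modified gradient operations. It uses standard objective perturbation (a random linear term added to the loss), a toy discriminator, and an assumed noise scale NOISE_STD; none of these names, values, or the exact perturbation form are taken from the paper, whose actual construction and privacy accounting are described in the full text.

import tensorflow as tf

# Toy discriminator and optimizer, for illustration only (not from the paper).
discriminator = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1),
])
d_optimizer = tf.keras.optimizers.Adam(1e-4)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

NOISE_STD = 0.1  # assumed noise scale; the paper's calibration may differ


def discriminator_step(real_images, fake_images):
    """One discriminator update where privacy noise enters through the loss.

    A random linear term <b, theta> is added to the ordinary GAN loss, so the
    resulting gradient is the clean gradient plus the noise vector b, without
    touching any TensorFlow core function.
    """
    with tf.GradientTape() as tape:
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        loss = (bce(tf.ones_like(real_logits), real_logits)
                + bce(tf.zeros_like(fake_logits), fake_logits))
        # Loss-level perturbation: differentiating sum(b * theta) w.r.t. theta
        # yields b, so the optimizer sees noisy gradients.
        perturbation = tf.add_n([
            tf.reduce_sum(v * tf.random.normal(tf.shape(v), stddev=NOISE_STD))
            for v in discriminator.trainable_variables
        ])
        noisy_loss = loss + perturbation
    grads = tape.gradient(noisy_loss, discriminator.trainable_variables)
    d_optimizer.apply_gradients(zip(grads, discriminator.trainable_variables))
    return loss

Because the noise is attached to the loss before differentiation, this sketch keeps the standard Keras training loop intact, which is the kind of "no TensorFlow core modification" property the abstract claims; a real deployment would also need sensitivity bounding and a privacy accountant, which are not shown here.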
