Abstract

Generative Adversarial Networks (GANs) are a promising technique with many practical applications. Using GANs, generated data can be released in place of real sensitive data for productive outside research. However, sensitive data is sometimes distributed among multiple parties, in which case a global generator is needed. Additionally, generated samples may memorize or reflect sensitive features of the real data. In this paper, we propose a scheme that aggregates a global generator from distributed local parties without accessing the local parties' sensitive datasets, and the resulting global generator does not reveal sensitive information about the local parties' training data. In our scheme, we split the GAN into two parts: discriminators, held by the local parties, and a global generator, held by the global party. The scheme allows local parties to train different types of discriminators. To prevent the generator from extracting sensitive information from the real training datasets, we propose noised discriminator loss aggregation: each local party adds Gaussian noise to its discriminator's loss, and the average of the noised losses is used to compute the global generator's gradients and update its parameters. Our scheme is easy to implement by modifying a plain GAN architecture. We evaluate the scheme on the real-world MNIST and Fashion-MNIST datasets; the experimental results show that it achieves high-quality global generators without breaching the privacy of the local parties' training data.
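
To make the noised loss aggregation step concrete, below is a minimal single-process sketch in PyTorch. This is an illustration under stated assumptions, not the paper's implementation: the network shapes, the noise scale `sigma`, and the use of three simulated parties are all illustrative choices, and in the actual protocol the noised losses would be exchanged between parties rather than computed in one process. Only the global generator's update is shown; the local discriminator updates on real data are omitted.

```python
import torch
import torch.nn as nn

# Illustrative toy networks; in the scheme each local party may use a
# different discriminator architecture.
class Generator(nn.Module):
    def __init__(self, z_dim=64, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh())

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, in_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))

    def forward(self, x):
        return self.net(x)

G = Generator()
discriminators = [Discriminator() for _ in range(3)]  # one per local party
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
sigma = 0.1  # Gaussian noise scale; an assumed, tunable privacy parameter

z = torch.randn(32, 64)
fake = G(z)
real_labels = torch.ones(32, 1)

# Each party scores the generator's samples with its own discriminator and
# perturbs the scalar loss with Gaussian noise before reporting it; the
# noise masks the per-party loss values that leave each local party.
noised_losses = []
for D in discriminators:
    loss_i = bce(D(fake), real_labels)  # generator wants D to say "real"
    noised_losses.append(loss_i + sigma * torch.randn(()))

# The global party averages the noised losses and updates the generator.
opt_G.zero_grad()
g_loss = torch.stack(noised_losses).mean()
g_loss.backward()
opt_G.step()
```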
