Abstract

Collaborative deep learning can achieve high learning accuracy even when each participating user's dataset is small. During training, users share only their locally computed parameters, so it is commonly believed that the privacy of users' original datasets is protected. However, we present an attack on user privacy in collaborative deep learning that combines a Generative Adversarial Network (GAN) with Membership Inference. In this attack, an attacker builds a discriminator from the users' shared parameters and then trains a GAN locally. The GAN reconstructs the training records of the collaborative deep learning system. Given the generated records, the attacker exploits the extent of the model's overfitting on an input and recovers the membership of each group of records via a simplified Membership Inference attack. We evaluate the presented attack over datasets of complex representations: handwritten digits (MNIST) and face images (CelebA). The results show that an attacker can easily generate the original training sets and classify them, thereby linking users' records to their identities in collaborative deep learning.
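The membership-inference step described above relies on overfitting: a model tends to be far more confident on records it was trained on than on unseen ones. The following is a minimal, hypothetical sketch of that idea (not the paper's implementation); the function name, threshold value, and confidence scores are all illustrative assumptions.

```python
def predict_membership(model_confidences, threshold=0.9):
    """Simplified membership inference: records on which the target model
    is highly confident (a symptom of overfitting) are predicted to be
    members of its training set. The 0.9 threshold is an assumed value."""
    return [c >= threshold for c in model_confidences]

# Hypothetical confidence scores the attacker might observe:
member_conf = [0.99, 0.97, 0.95]     # records the model trained on
nonmember_conf = [0.61, 0.55, 0.72]  # unseen records

print(predict_membership(member_conf))     # → [True, True, True]
print(predict_membership(nonmember_conf))  # → [False, False, False]
```

In practice the attacker would calibrate the threshold on the GAN-generated records, but the decision rule stays the same: high confidence implies likely membership.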
