Abstract

The success of machine learning (ML) depends on the availability of large-scale datasets. However, recent studies have shown that models trained on such datasets are vulnerable to privacy attacks, among which the membership inference attack (MIA) poses a serious privacy risk. MIA allows an adversary to infer whether a given sample belongs to the training dataset of the target model. Although a variety of defenses against MIA have been proposed, such as differential privacy and adversarial regularization, they also reduce model accuracy and thus make the models less usable. In this paper, aiming to maintain accuracy while protecting privacy against MIA, we propose a new defense against membership inference attacks based on the generative adversarial network (GAN). Specifically, the sensitive data is used to train a GAN, and the trained GAN then generates the data used to train the actual model. To ensure that a model trained in this way on small datasets retains high utility, two different GAN structures with specialized training techniques are employed to handle image data and tabular data, respectively. Experimental results show that the defense is effective on different datasets against existing attack schemes, and is more efficient than most state-of-the-art MIA defenses.
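The defense pipeline described above, training the actual model only on generator output so it never touches a sensitive record directly, can be sketched as follows. This is a minimal illustration, not the paper's method: a per-class Gaussian stands in for the GAN, the 1-D two-class dataset is synthetic, and the nearest-centroid classifier stands in for the actual target model.

```python
import random
import statistics

random.seed(0)

# Hypothetical sensitive dataset: two classes of 1-D features.
sensitive = [(random.gauss(0.0, 1.0), 0) for _ in range(200)] + \
            [(random.gauss(4.0, 1.0), 1) for _ in range(200)]

# Step 1: fit a generative model per class on the sensitive data.
# (A per-class Gaussian stands in for the paper's GAN to keep this runnable.)
def fit_generator(data):
    params = {}
    for label in {y for _, y in data}:
        xs = [x for x, y in data if y == label]
        params[label] = (statistics.mean(xs), statistics.stdev(xs))
    return params

# Step 2: the generator, not the sensitive data, produces the training set.
def generate(params, n_per_class):
    return [(random.gauss(mu, sd), label)
            for label, (mu, sd) in params.items()
            for _ in range(n_per_class)]

# Step 3: train the actual model on synthetic samples only.
# (Nearest-centroid stands in for the real target model.)
def train_classifier(data):
    centroids = {}
    for label in {y for _, y in data}:
        xs = [x for x, y in data if y == label]
        centroids[label] = statistics.mean(xs)
    return centroids

def predict(centroids, x):
    return min(centroids, key=lambda c: abs(x - centroids[c]))

params = fit_generator(sensitive)
synthetic = generate(params, 200)
model = train_classifier(synthetic)

# The trained model never saw any sensitive record directly, which is
# the property the defense exploits against membership inference.
acc = sum(predict(model, x) == y for x, y in sensitive) / len(sensitive)
print(f"accuracy on the sensitive data: {acc:.2f}")
```

The point of the sketch is the data flow: membership signals about individual sensitive records are weakened because the model's training set consists entirely of generated samples, while utility is preserved as long as the generator captures the class-conditional distributions well.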
