Abstract

Image classification, facial recognition, health care, and graph data analysis are just a few of the areas where machine learning (ML) models are widely applied. Recent research has revealed that ML models are vulnerable to membership inference attacks (MIAs), which attempt to determine whether or not a data record was used to train a target model. MIAs succeed because information about the training data leaks during the training phase. Differential Privacy (DP) has been applied in ML to restrict inference about individual training samples. In our experiments, we therefore mitigate the impact of MIAs using the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm, which modifies the minibatch stochastic optimization process by clipping per-example gradients and adding noise, so that useful information can be extracted from the data without revealing much about any individual record. In this paper, we evaluate DP-SGD as a countermeasure against MIAs on Convolutional Neural Networks (CNNs) trained on the MNIST dataset. We consider different combinations of DP-SGD's noise multiplier and clipping norm parameters in our evaluation. Through experimental analysis, we show that this defense strategy can mitigate the impact of MIAs on the target model while preserving the target model's accuracy. The evaluation results indicate that the proposed privacy-preserving defense against MIAs is effective.
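To make the mechanism described above concrete, the sketch below shows a single DP-SGD step on a toy linear model: each per-example gradient is clipped to an L2 norm bound and Gaussian noise scaled by the noise multiplier is added before the parameter update. This is a minimal illustration only; the function name `dp_sgd_step`, the squared-error loss, and the default hyperparameter values are assumptions for exposition and are not the paper's CNN/MNIST setup.

```python
import numpy as np

def dp_sgd_step(weights, X_batch, y_batch, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    """One illustrative DP-SGD update: clip each per-example gradient to L2 norm
    `clip_norm`, sum the clipped gradients, add Gaussian noise with standard
    deviation `noise_multiplier * clip_norm`, average, and take a gradient step."""
    clipped_grads = []
    for x, y in zip(X_batch, y_batch):
        # Squared-error loss gradient for a single example (toy linear model).
        pred = x @ weights
        grad = 2.0 * (pred - y) * x
        # Clip the per-example gradient to bound each sample's influence.
        norm = np.linalg.norm(grad)
        clipped_grads.append(grad * min(1.0, clip_norm / (norm + 1e-12)))
    summed = np.sum(clipped_grads, axis=0)
    # Add isotropic Gaussian noise calibrated to the clipping norm.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    noisy_mean = (summed + noise) / len(X_batch)
    return weights - lr * noisy_mean

# Toy usage: a few noisy updates on synthetic regression data.
rng = np.random.default_rng(0)
w = np.zeros(5)
X = rng.normal(size=(32, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=32)
for _ in range(100):
    w = dp_sgd_step(w, X, y)
```

In practice, the two knobs varied in the evaluation (noise multiplier and clipping norm) trade off privacy against utility: larger noise or tighter clipping weakens MIAs but can reduce the target model's accuracy.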
