Abstract

Machine learning is now widely applied in areas such as image classification and object recognition. However, many applications train their models on sensitive data that can leak personal information, raising privacy concerns. Numerous privacy attacks against deep learning models have been proposed; the membership inference attack, which determines whether a given example was part of a model's training set, has been shown to be effective and poses a real threat to privacy in machine learning. Here, we trained three different convolutional neural networks for image classification on the CIFAR10 dataset and examined the effect of membership inference attacks on these models. We also explored defense strategies, including differential privacy, against membership inference attacks. We show that membership inference attacks are more effective against overfit models, and that defenses such as differential privacy and regularization can significantly lower the attack accuracy.
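The abstract does not specify how the attack is implemented; a common and minimal variant is a confidence-threshold attack, which exploits the fact that overfit models tend to be more confident on examples they were trained on. The sketch below illustrates this idea on synthetic confidence scores (the threshold, score ranges, and sample sizes are illustrative assumptions, not values from the paper):

```python
import numpy as np

def confidence_attack(confidences, threshold=0.9):
    """Predict membership: 1 if the model's top-class confidence exceeds
    the threshold (overfit models are more confident on training members),
    else 0. The threshold value is an illustrative assumption."""
    return (confidences >= threshold).astype(int)

# Synthetic confidences: on an overfit model, training-set members tend
# to receive higher top-class confidence than held-out non-members.
rng = np.random.default_rng(0)
member_conf = rng.uniform(0.85, 1.0, size=100)     # training-set examples
nonmember_conf = rng.uniform(0.30, 0.95, size=100) # held-out examples

preds = confidence_attack(np.concatenate([member_conf, nonmember_conf]))
labels = np.concatenate([np.ones(100), np.zeros(100)])
accuracy = (preds == labels).mean()
print(f"attack accuracy: {accuracy:.2f}")
```

Defenses like differential privacy and regularization narrow the confidence gap between members and non-members, which is why they push this kind of attack back toward chance-level (0.5) accuracy.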
