Abstract

Deep learning models are often trained on datasets that contain sensitive information such as individuals' shopping transactions, personal contacts, and medical records. An increasingly important line of work has therefore sought to train neural networks subject to privacy constraints specified by differential privacy or its divergence-based relaxations. These privacy definitions, however, have weaknesses in handling certain important primitives (composition and subsampling), thereby giving loose or complicated privacy analyses of training neural networks. In this paper, we consider a recently proposed privacy definition termed f-differential privacy [18] for a refined privacy analysis of training neural networks. Leveraging the appealing properties of f-differential privacy in handling composition and subsampling, we derive analytically tractable expressions for the privacy guarantees of both stochastic gradient descent and Adam used in training deep neural networks, without the need to develop the sophisticated techniques of [3]. Our results demonstrate that the f-differential privacy framework allows for a new privacy analysis that improves on the prior analysis [3], which in turn suggests tuning certain parameters of neural networks for better prediction accuracy without violating the privacy budget. These theoretically derived improvements are confirmed by our experiments on a range of tasks in image classification, text classification, and recommender systems. Python code to calculate the privacy cost for these experiments is publicly available in the TensorFlow Privacy library.
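The analytically tractable expression mentioned above admits a very short illustration. Under the central limit theorem for f-DP, NoisySGD run for T iterations with subsampling probability p = batch_size / n and noise multiplier σ is approximately μ-GDP with μ = p·sqrt(T·(e^{1/σ²} − 1)). The snippet below is a minimal sketch of this closed form; it is not the TensorFlow Privacy implementation, and the example hyperparameter values are hypothetical rather than the paper's reported settings.

```python
# Minimal sketch (not the authors' code) of the CLT approximation from the
# f-DP / Gaussian DP framework: NoisySGD with subsampling probability
# p = batch_size / n, T iterations, and noise multiplier sigma is
# approximately mu-GDP with mu = p * sqrt(T * (exp(1 / sigma**2) - 1)).
import math

def gdp_mu(n, batch_size, epochs, sigma):
    """CLT approximation of the Gaussian DP parameter mu for NoisySGD."""
    p = batch_size / n                # subsampling probability
    T = epochs * n // batch_size      # total number of iterations
    return p * math.sqrt(T * (math.exp(1.0 / sigma**2) - 1.0))

# Example with illustrative (hypothetical) MNIST-like settings.
print(gdp_mu(n=60_000, batch_size=256, epochs=15, sigma=1.3))
```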

Highlights

  • In many applications of machine learning, the datasets contain sensitive information about individuals, such as location, personal contacts, media consumption, and medical records

  • Our results demonstrate that the f-differential privacy framework allows for a new privacy analysis that improves on the prior analysis of Abadi et al. (2016), which in turn suggests tuning certain parameters of neural networks for better prediction accuracy without violating the privacy budget

  • We use NoisySGD and NoisyAdam to train private deep learning models on datasets for tasks ranging from image classification (MNIST), text classification (IMDb movie reviews), and recommender systems (MovieLens movie ratings) to regular binary classification (Adult income); a minimal sketch of the NoisySGD step appears below
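The NoisySGD step referenced in the highlight above follows the familiar pattern of clipping each per-example gradient and adding Gaussian noise scaled to the clipping norm. The sketch below is a minimal NumPy illustration under that assumption; the function and parameter names (noisy_sgd_step, clip_norm, noise_multiplier) are illustrative and not taken from the paper's code.

```python
# Minimal sketch of one NoisySGD step, assuming per-example gradients
# are already available. Names and defaults are illustrative.
import numpy as np

def noisy_sgd_step(params, per_example_grads, lr=0.1,
                   clip_norm=1.0, noise_multiplier=1.1, rng=None):
    rng = rng or np.random.default_rng()
    B = len(per_example_grads)
    # Clip each per-example gradient to L2 norm at most clip_norm.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    # Sum the clipped gradients, add Gaussian noise scaled to the clipping
    # norm, then average over the batch before the gradient step.
    noisy_sum = np.sum(clipped, axis=0) + \
        rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    return params - lr * noisy_sum / B
```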


Summary

Introduction

In many applications of machine learning, the datasets contain sensitive information about individuals, such as location, personal contacts, media consumption, and medical records. The f-DP approach gives stronger privacy guarantees than the earlier approach of Abadi et al. (2016), even in terms of (ε, δ)-DP. This improvement is due to the use of the central limit theorem for f-DP, which accurately captures the privacy loss incurred at each iteration of training the deep learning models. Leveraging the stronger privacy guarantees provided by f-DP, we can trade a certain amount of privacy for an improvement in prediction performance. This can be realized, for example, by appropriately reducing the amount of noise added during the training of neural networks so as to match the target privacy level in terms of (ε, δ)-DP.
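A minimal sketch of this trade-off, assuming the standard duality between μ-GDP and (ε, δ)-DP: the δ implied at a target ε is δ(ε) = Φ(−ε/μ + μ/2) − e^ε·Φ(−ε/μ − μ/2), so one can decrease the noise multiplier σ (which increases μ) until this expression just meets the target (ε, δ) budget. The code below only illustrates the conversion; it is not the paper's or TensorFlow Privacy's implementation.

```python
# Minimal sketch: convert a Gaussian DP parameter mu into the implied delta
# at a target epsilon, via delta(eps) = Phi(-eps/mu + mu/2)
#                                       - exp(eps) * Phi(-eps/mu - mu/2).
from math import exp
from statistics import NormalDist

def delta_from_mu(mu, eps):
    Phi = NormalDist().cdf  # standard normal CDF
    return Phi(-eps / mu + mu / 2) - exp(eps) * Phi(-eps / mu - mu / 2)

# Example: the delta guaranteed at eps = 1.0 when the CLT gives mu = 1.0.
print(delta_from_mu(mu=1.0, eps=1.0))
```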

Related Work
Preliminaries
Properties of f-Differential Privacy
NoisySGD and NoisyAdam
Comparisons With the Moments Accountant
Results
The f-DP Perspective
Discussion