Abstract

Deep learning now has many applications in daily life, such as self-driving vehicles, product recommendation, advertising, and healthcare. During training, deep learning models are fit to datasets whose private information becomes encoded in the model parameters. In some cases this information can be inferred from the parameters and traced back to uncover sensitive details about individuals. Privacy challenges have traditionally been addressed with anonymization methods such as k-anonymity, l-diversity, and t-closeness, but these may no longer be sufficient against unstructured data and inference attacks. We argue that this problem can be addressed with differential privacy. Differential privacy provides a mathematical framework for quantifying the extent to which a deep learning algorithm memorizes information about individuals and for evaluating its privacy guarantees. In this paper, we review threats to and defenses of privacy in deep learning models, with a focus on differential privacy. We classify these threats and defenses, and identify the points in a deep learning pipeline at which random noise can be added to the input samples, the gradients, or the objective function to protect the privacy of the model.
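As a concrete illustration of noise added at the gradient, the following is a minimal sketch of gradient perturbation in the style of DP-SGD: per-example gradients are clipped to a norm bound and Gaussian noise calibrated to that bound is added before the parameter update. This is not the paper's implementation; the toy linear model, loss, and hyperparameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def per_example_grad(w, x, y):
    # Gradient of squared error for a toy linear model y_hat = w . x.
    return 2.0 * (w @ x - y) * x

def dp_sgd_step(w, batch_x, batch_y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    grads = []
    for x, y in zip(batch_x, batch_y):
        g = per_example_grad(w, x, y)
        # Clip each per-example gradient to bound its sensitivity.
        g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        grads.append(g)
    # Sum the clipped gradients, add calibrated Gaussian noise, then average.
    noisy_sum = np.sum(grads, axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    return w - lr * noisy_sum / len(batch_x)

# Toy usage: one noisy update on random data.
w = np.zeros(3)
batch_x = rng.normal(size=(8, 3))
batch_y = rng.normal(size=8)
w = dp_sgd_step(w, batch_x, batch_y)
```

Analogous perturbation can instead be applied to the input samples before training or to the objective function, which are the other insertion points discussed in the paper.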
