Abstract

Artificial Intelligence (AI) has attracted a great deal of attention in recent years. However, alongside all its advancements, problems have also emerged, such as privacy violations, security issues, and concerns over model fairness. Differential privacy, as a promising mathematical model, has several attractive properties that can help solve these problems, making it a valuable tool. For this reason, differential privacy has been broadly applied in AI. To date, however, no study has documented which differential privacy mechanisms can be, or have been, leveraged to overcome AI's emerging issues, or the properties that make this possible. In this paper, we show that differential privacy can do more than just privacy preservation. It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI. With a focus on regular machine learning, distributed machine learning, deep learning, and multi-agent systems, the purpose of this article is to deliver a new view of the many possibilities for improving AI performance with differential privacy techniques.

Highlights

  • Artificial Intelligence (AI) is one of the most prevalent topics of research today across almost every scientific field

  • When Dwork et al. [4] showed that applying differential privacy mechanisms to test data in machine learning could prevent over-fitting of learning algorithms, it launched a new direction beyond simple privacy preservation to one that solves emerging problems in AI [5]

  • We have shown that these properties can improve diverse areas of AI, including machine learning, deep learning and multi-agent systems

Summary

INTRODUCTION

Artificial Intelligence (AI) is one of the most prevalent topics of research today across almost every scientific field. Alongside its advancements, however, problems such as privacy violations and security threats have emerged, and many researchers have been exploring new and existing security and privacy tools to tackle them. Differential privacy is a prevalent privacy preservation model which guarantees that whether or not an individual's information is included in a dataset has little impact on the aggregate output. Consider two neighbouring datasets that differ in only one record: if we can find a mechanism that can query both datasets and produce outputs with nearly the same distribution, we can claim that differential privacy is satisfied. An adversary then cannot associate the query outputs with either of the two neighbouring datasets, so the one differing record is safe. When Dwork et al. [4] showed that applying differential privacy mechanisms to test data in machine learning could prevent over-fitting of learning algorithms, it launched a new direction beyond simple privacy preservation to one that solves emerging problems in AI [5]. We use two examples to illustrate how these new properties can be applied.
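To make the guarantee precise: a randomized mechanism M satisfies ε-differential privacy if, for every pair of neighbouring datasets D and D′ (differing in at most one record) and every set of possible outputs S,

    Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S]

The smaller ε is, the harder it is for an adversary to tell from the output which of the two datasets was queried. A classic way to achieve this for a numeric query is the Laplace mechanism, which perturbs the true answer with noise calibrated to the query's sensitivity. The sketch below is a minimal Python illustration of that mechanism; the function name and the example count query are ours for illustration, not drawn from any system surveyed in this article.

    import numpy as np

    def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
        """Release an epsilon-differentially-private answer to a numeric query.

        sensitivity is the query's global sensitivity: the largest change
        in its output when one record is added to or removed from the
        dataset.
        """
        rng = rng or np.random.default_rng()
        scale = sensitivity / epsilon  # Laplace scale b = sensitivity / epsilon
        return true_answer + rng.laplace(0.0, scale)

    # Example: privately count records with age > 40. A counting query
    # has sensitivity 1, since one record changes the count by at most 1.
    ages = [21, 34, 45, 52, 67, 73]
    true_count = sum(1 for age in ages if age > 40)  # = 4
    private_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)

Because the noise scale depends only on the sensitivity and ε, not on the data itself, running the mechanism on two neighbouring datasets yields output distributions that differ by at most a factor of e^ε, which is exactly the guarantee above.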

The remainder of the article is organized as follows:

  • Examples
  • Differential privacy in AI areas
  • Differential privacy
  • Randomization
  • Gaussian mechanism
  • Exponential mechanism
  • Private machine learning
  • Composition
  • Overview of the stability of learning
  • Differential privacy in learning stability
  • Summary of the stability of learning
  • An overview of fairness in learning
  • Applying differential privacy to improve fairness
  • Summary of differential privacy in fairness
  • Summary of differential privacy in machine learning
  • DIFFERENTIAL PRIVACY IN DEEP LEARNING
  • Privacy attacks in deep neural networks
  • Differential privacy in deep neural networks
  • Overview of distributed deep learning
  • Differential privacy in distributed deep learning
  • Summary of differential privacy in distributed deep learning
  • Overview of federated learning
  • Applying differential privacy in federated learning
  • Summary of differential privacy in federated learning
  • Summary of differential privacy in deep learning
  • Differential privacy in multi-agent reinforcement learning
  • Summary of differential privacy in reinforcement learning
  • Differential privacy to improve the security of reinforcement learning
  • Differential privacy in auctions
  • Summary of differential privacy in auctions
  • Differential privacy in game theory
  • Differential privacy to improve performance
  • Applying differential privacy to preserve privacy
  • Summary of differential privacy in game theory
  • Summary of multi-agent systems
  • Private transfer learning
  • Deep reinforcement learning
  • Meta-learning
  • Multi-agent advising learning
  • Multi-agent transfer learning
  • Multi-agent reasoning
  • CONCLUSION
