Abstract
Artificial Intelligence (AI) has attracted a great deal of attention in recent years. However, alongside all its advancements, problems have also emerged, such as privacy violations, security issues and model fairness. Differential privacy, as a promising mathematical model, has several attractive properties that can help solve these problems, making it quite a valuable tool. For this reason, differential privacy has been broadly applied in AI, but to date no study has documented which differential privacy mechanisms can be, or have been, leveraged to overcome AI's emerging issues, or the properties that make this possible. In this paper, we show that differential privacy can do more than just privacy preservation. It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI. With a focus on regular machine learning, distributed machine learning, deep learning, and multi-agent systems, the purpose of this article is to deliver a new view of the many possibilities for improving AI performance with differential privacy techniques.
Highlights
Artificial Intelligence (AI) is one of the most prevalent topics of research today across almost every scientific field.
When Dwork et al. [4] showed that applying differential privacy mechanisms to test data in machine learning could prevent over-fitting of learning algorithms, it launched a new direction beyond simple privacy preservation to one that solves emerging problems in AI [5].
We have shown that these properties can improve diverse areas of AI, including machine learning, deep learning and multi-agent systems.
Summary
Artificial Intelligence (AI) is one of the most prevalent topics of research today across almost every scientific field. Many researchers have been exploring new and existing security and privacy tools to tackle the problems that have emerged alongside these advances, such as privacy violations, security risks and unfair models. Differential privacy is a prevalent privacy preservation model which guarantees that whether an individual's information is included in a dataset has little impact on the aggregate output. If we can find a mechanism that can query two neighbouring datasets, which differ in only one record, and obtain approximately the same outputs, we can claim that differential privacy is satisfied. An adversary cannot associate the query outputs with either of the two neighbouring datasets, so the one different record is safe. When Dwork et al. [4] showed that applying differential privacy mechanisms to test data in machine learning could prevent over-fitting of learning algorithms, it launched a new direction beyond simple privacy preservation to one that solves emerging problems in AI [5]. We use two examples to illustrate how those new properties can be applied.
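The summary describes the neighbouring-dataset guarantee only informally, so it may help to recall the standard formal definition (due to Dwork; not stated explicitly in this summary): a randomized mechanism $\mathcal{M}$ is $\varepsilon$-differentially private if, for every pair of neighbouring datasets $D$ and $D'$ differing in a single record and every set of outputs $S$,

\[
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S].
\]

As a minimal illustrative sketch, the code below applies the classical Laplace mechanism to a counting query; the dataset, predicate and parameter values are hypothetical, chosen only to show that the noisy answers on two neighbouring datasets are nearly indistinguishable.

```python
import numpy as np

def laplace_count(dataset, predicate, epsilon):
    """Answer a counting query under epsilon-differential privacy.

    A count has sensitivity 1 (adding or removing one record changes it
    by at most 1), so Laplace noise with scale 1/epsilon is sufficient.
    """
    true_count = sum(1 for record in dataset if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Two neighbouring datasets that differ in exactly one record.
ages_d1 = [25, 31, 47, 52, 60]
ages_d2 = [25, 31, 47, 52]      # one record (60) removed

epsilon = 0.5
print(laplace_count(ages_d1, lambda age: age >= 50, epsilon))
print(laplace_count(ages_d2, lambda age: age >= 50, epsilon))
# The two noisy answers come from distributions whose densities differ by at
# most a factor of e**epsilon, so the output does not reveal whether the
# extra record is present.
```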