Abstract

The exponential growth of big data and deep learning has increased data exchange traffic in society. Machine Learning as a Service (MLaaS), which leverages deep learning techniques for predictive analytics to enhance decision-making, has become a hot commodity. However, the adoption of MLaaS introduces data privacy challenges for data owners and security challenges for deep learning model owners. Data owners are concerned about the safety and privacy of their data on MLaaS platforms, while MLaaS platform owners worry that their models could be stolen by adversaries who pose as clients. Consequently, Privacy-Preserving Deep Learning (PPDL) arises as a possible solution to this problem. Recently, several papers about PPDL for MLaaS have been published. However, to the best of our knowledge, no previous paper has summarized the existing literature on PPDL and its specific applicability to the MLaaS environment. In this paper, we present a comprehensive survey of privacy-preserving techniques, ranging from classical privacy-preserving techniques to well-known deep learning techniques. Additionally, we present a detailed description of PPDL and address the issue of using PPDL for MLaaS. Furthermore, we undertake detailed comparisons between state-of-the-art PPDL methods. Subsequently, we classify adversarial models on PPDL by highlighting possible PPDL attacks and their potential solutions. Ultimately, our paper serves as a single point of reference for detailed knowledge on PPDL and its applicability to MLaaS environments for both new and experienced researchers.

Highlights

  • In a business environment, prediction and decision-making are two important processes that require careful consideration

  • In this paper, we have provided a complete review of state-of-the-art Privacy-Preserving Deep Learning (PPDL) on Machine Learning as a Service (MLaaS)

  • Our work addresses the limitations of implementing novel Deep Learning (DL) techniques with PP, including the analysis of the original neural network (NN) structure and the modifications needed to use it in a privacy-preserving environment

Summary

INTRODUCTION

Prediction and decision-making are two important processes that require careful consideration. By leveraging the ability of deep learning, we can predict future outcomes and make decisions based on the currently available information, which becomes the training data when we train the DL model. In PPDL, encrypted data is used as the input to the deep learning model.

A. GROUP-BASED ANONYMITY

While homomorphic encryption, functional encryption, and secure multi-party computation techniques enable computation on encrypted data without revealing the original plaintext, we still need to preserve the privacy of sensitive personal data such as medical and health data. Sometimes, when we train a machine learning model, the classification results are disproportionately good for certain kinds of data, showing bias inherited from the training set. A GAN learns this process by modeling the distribution of individual classes.
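To make the homomorphic-encryption idea above concrete, the following Python sketch implements textbook Paillier encryption with toy-sized primes. This is an illustrative assumption on our part, not a scheme from the surveyed papers, and the parameters are far too small for real security; it only demonstrates the key property PPDL relies on: a server can combine two ciphertexts so that the decrypted result equals the sum of the plaintexts, without ever seeing them.

```python
import math
import random

# Toy Paillier key generation (illustrative only; real use needs >= 2048-bit n)
p, q = 17, 19                 # small primes with gcd(p*q, (p-1)*(q-1)) == 1
n = p * q                     # public modulus
n2 = n * n
g = n + 1                     # standard generator choice g = n + 1
lam = math.lcm(p - 1, q - 1)  # private key lambda
mu = pow(lam, -1, n)          # since g = n + 1, L(g^lam mod n^2) = lam mod n

def encrypt(m):
    """Encrypt m in [0, n) with fresh randomness r coprime to n."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Recover m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts
c_sum = (encrypt(5) * encrypt(7)) % n2
print(decrypt(c_sum))  # 12, computed on encrypted inputs
```

In an MLaaS setting, this is the property that lets a server evaluate linear layers of a model on a client's encrypted features; nonlinear activations are what force the protocol modifications discussed later in the survey.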

LIMITATION OF IMPLEMENTING NOVEL DL TECHNIQUES TO PP
ADVERSARIAL MODEL IN PPDL
COMPARISON METRICS
Findings
CONCLUSION