Abstract
Deep learning (DL) has proven highly effective in many application domains of machine learning (ML), including image classification, voice recognition, natural language processing, and bioinformatics. The success of DL techniques is directly related to the availability of large amounts of training data. In many cases, however, these data are sensitive to users and must be protected to preserve their privacy. Privacy-preserving deep learning (PPDL) has therefore become a very active research field, aiming to ensure that DL models can be trained and used productively without exposing or leaking information about the underlying data. This paper provides a comprehensive survey of PPDL. We concentrate on the risks that affect data privacy in DL and conduct a detailed investigation of the models that ensure privacy. Finally, we propose a set of evaluation criteria and detail the advantages and disadvantages of the surveyed solutions. Based on the analyzed strengths and weaknesses, we highlight important research problems and application cases that have not yet been studied, pointing to open research directions.