Abstract
Deep Learning (DL) algorithms based on artificial neural networks have achieved remarkable success and are being extensively applied in a variety of application domains, ranging from image classification, autonomous driving, and natural language processing to medical diagnosis, credit risk assessment, and intrusion detection. However, the privacy and security issues of DL have been revealed: the DL model can be stolen or reverse engineered, sensitive training data can be inferred, and even a recognizable face image of the victim can be recovered. Besides, recent works have found that the DL model is vulnerable to adversarial examples perturbed by imperceptible noise, which can lead the DL model to predict wrongly with high confidence. In this paper, we first briefly introduce the four types of attacks and privacy-preserving techniques in DL. We then review and summarize the attack and defense methods associated with DL privacy and security in recent years. To demonstrate that these security threats really exist in the real world, we also review adversarial attacks under physical conditions. Finally, we discuss current challenges and open problems regarding privacy and security issues in DL.
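To make the adversarial-example threat mentioned above concrete, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM) in PyTorch. The model, inputs, and epsilon value are illustrative assumptions, not taken from the surveyed works.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples with one signed-gradient step.

    model   : any differentiable torch.nn.Module classifier (placeholder)
    x, y    : input batch and true labels
    epsilon : maximum per-pixel perturbation (assumed value, for illustration)
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb each pixel by epsilon in the direction that increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed image inside the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()
```

Even though the perturbation is bounded by a small epsilon and is typically imperceptible to humans, it is often sufficient to flip the model's prediction with high confidence.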
Highlights
Internet of Things (IoT) is a network of physical devices embedded with sensors, software, and connectivity that can communicate over the network with other interconnected devices
Recent research has shown that an adversary can duplicate the parameters/hyperparameters of a model deployed in the cloud to provide Machine Learning as a Service (MLaaS); a minimal surrogate-training sketch is given after these highlights
Deep Learning (DL) has been extensively applied in a variety of application domains such as speech recognition and medical diagnosis, but the recent security and privacy issues of DL have raised concerns among researchers
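The model-stealing highlight above can be illustrated with a short sketch: the attacker queries a black-box prediction service and fits a local surrogate on the returned labels. The `query_victim` function below is a stand-in for an MLaaS endpoint (here simulated by an assumed linear rule so the code runs end to end); it is not an API from the surveyed papers.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Stand-in for a black-box MLaaS endpoint; in a real extraction attack this
# would be a remote prediction API. The hidden linear rule is an assumption
# used only so the sketch is self-contained.
_SECRET_W = np.linspace(-1.0, 1.0, 20)

def query_victim(x):
    """Return the victim model's predicted label for one query input."""
    return int(x @ _SECRET_W > 0)

def extract_surrogate(num_queries=5000, input_dim=20):
    rng = np.random.default_rng(0)
    # 1. The attacker synthesizes unlabeled query inputs.
    queries = rng.uniform(-1.0, 1.0, size=(num_queries, input_dim))
    # 2. The victim's answers become the labels of a "transfer set".
    labels = np.array([query_victim(x) for x in queries])
    # 3. A local surrogate trained on the transfer set approximates the victim.
    surrogate = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300)
    surrogate.fit(queries, labels)
    return surrogate

surrogate = extract_surrogate()
```

The attacker never sees the victim's parameters directly; enough query-label pairs are sufficient to train a functional copy, which is why prediction APIs are a realistic extraction surface.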
Summary
Internet of Things (IoT) is a network of physical devices embedded with sensors, software, and connectivity that can communicate over the network with other interconnected devices. CryptoNets can perform inference on encrypted data; it utilizes the leveled HE scheme proposed by Bos et al. [72] to perform privacy-preserving inference on a pre-trained Convolutional Neural Network (CNN) model.
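As a rough illustration of the CryptoNets workflow, the sketch below runs a CNN-style forward pass using only operations a leveled HE scheme can evaluate (additions and multiplications, with the square function replacing ReLU as the activation). Plain NumPy arrays stand in for ciphertexts, and the layer shapes and random weights are illustrative assumptions, not parameters from the original work.

```python
import numpy as np

def square_activation(x):
    # CryptoNets-style networks replace ReLU with x**2 because leveled HE
    # can only evaluate additions and multiplications on ciphertexts.
    return x * x

def he_friendly_forward(x, w1, w2):
    """Toy HE-friendly forward pass over a flattened input.

    x, w1, w2 are plain arrays standing in for encrypted values; every
    operation used (matrix products, additions, squaring) maps onto a
    leveled HE circuit of bounded multiplicative depth.
    """
    h = x @ w1                # first linear layer (convolution as a linear map)
    h = square_activation(h)  # polynomial activation, evaluable under HE
    logits = h @ w2           # final linear layer; argmax happens after decryption
    return logits

rng = np.random.default_rng(0)
x = rng.random((1, 784))                     # e.g. a flattened 28x28 image
w1 = rng.standard_normal((784, 128)) * 0.01  # assumed layer sizes
w2 = rng.standard_normal((128, 10)) * 0.01
print(he_friendly_forward(x, w1, w2).shape)  # (1, 10) class scores
```

In the actual protocol the client encrypts the input, the server evaluates such a circuit directly on ciphertexts, and only the client can decrypt the resulting scores, so the server never observes the raw data.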