Abstract

Deep Neural Networks (DNNs) have witnessed rapid progress and significant success in recent years. A wide range of applications depends on the high performance of deep learning to solve real-life challenges, and deep learning is increasingly applied in safety-critical environments. However, deep neural networks have recently been found vulnerable to adversarial examples and backdoor attacks: stealthy adversarial examples and backdoor triggers can easily fool deep neural networks into producing wrong results. The risk of adversarial example attacks targeting deep learning models impedes the wide deployment of deep neural networks in safety-critical environments. In this work we propose a defensive technique for deep learning that combines activation functions and neuron pruning to reduce the effects of adversarial examples and backdoor attacks. We evaluate the efficacy of the method on an anomaly detection application using a Deep Belief Network (DBN) and a Coupled Generative Adversarial Network (CoGAN). The method reduces the loss of accuracy caused by the attacks from an average of 10% to 2% using the DBN and from an average of 14% to 2% using the CoGAN. We evaluate the method on two benchmark datasets: NSL-KDD and a ransomware dataset.
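To illustrate the pruning side of such a defense, the sketch below shows one common activation-based approach: neurons that remain largely dormant on clean data are zeroed out, which tends to weaken backdoor triggers that rely on those neurons. This is a minimal illustration under assumed choices (a small PyTorch MLP, a hypothetical 122-feature input standing in for encoded NSL-KDD records, and a 20% pruning ratio), not the authors' exact method.

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Toy anomaly detector: one hidden layer with an explicit activation."""
    def __init__(self, in_dim=122, hidden=64, classes=2):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.act = nn.ReLU()                    # activation function under study
        self.fc2 = nn.Linear(hidden, classes)

    def forward(self, x):
        return self.fc2(self.act(self.fc1(x)))

def prune_dormant_neurons(model, clean_batch, ratio=0.2):
    """Zero the hidden neurons with the lowest mean activation on clean data."""
    with torch.no_grad():
        acts = model.act(model.fc1(clean_batch))   # (batch, hidden) activations
        mean_act = acts.mean(dim=0)                # per-neuron average activity
        k = int(ratio * mean_act.numel())
        idx = torch.argsort(mean_act)[:k]          # least-active neurons
        model.fc1.weight[idx] = 0.0                # prune their incoming weights
        model.fc1.bias[idx] = 0.0

model = SmallNet()
clean_batch = torch.randn(256, 122)                # stand-in for clean feature vectors
prune_dormant_neurons(model, clean_batch, ratio=0.2)
```

After pruning, the model would typically be fine-tuned briefly on clean data to recover accuracy; the pruning ratio trades off clean-data performance against attack mitigation.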
