Abstract

Deep learning has achieved state-of-the-art results in many real-world applications. However, recent studies show that deep learning models are susceptible to adversarial attacks: carefully crafted perturbed inputs that fool a model. Because an adversarial example can easily mislead a classifier, such attacks pose a serious threat when deep learning models are deployed in real-world applications. Our work explores the adversarial attacks and defenses available in the literature. We find that existing defense strategies perform well on greyscale image datasets such as MNIST and FMNIST, but their robustness degrades sharply on RGB datasets such as CIFAR10. Moreover, a model's robustness depends heavily on the type of adversarial examples it is trained on. We devise a defense technique based on adversarial training, called Hybrid Adversarial Training (HAT). During training, HAT augments the data with state-of-the-art adversarial examples crafted by combining the DeepFool and FGSM attacks, increasing the robustness of deep learning models against a variety of attacks within a stipulated amount of time. We evaluate HAT empirically against cutting-edge adversarial attacks on several benchmark datasets. Our model outperforms existing defenses in both robustness and training time, and it withstands strong adversarial attacks on CIFAR10, a benchmark RGB image dataset. HAT shows 15% higher robustness than existing defenses while also maintaining the classifier's natural accuracy.
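The abstract's core idea, mixing FGSM and DeepFool adversarial examples during training, can be illustrated on a toy binary linear classifier, where both attacks have simple closed forms. The sketch below is an assumption for illustration only: the per-batch 50/50 mix, the toy model `f(x) = w·x + b`, and all parameter values are not taken from the paper, and the authors' HAT applies to deep networks rather than linear models.

```python
import numpy as np

def fgsm_example(x, w, b, y, eps=0.1):
    """FGSM: step of size eps in the direction of the sign of the loss
    gradient. For logistic loss on f(x) = w.x + b, the gradient w.r.t. x
    is (sigmoid(f(x)) - y) * w, and FGSM keeps only its sign."""
    margin = x @ w + b
    p = 1.0 / (1.0 + np.exp(-margin))
    grad = (p - y) * w
    return x + eps * np.sign(grad)

def deepfool_example(x, w, b, overshoot=0.02):
    """DeepFool for an affine binary classifier has a closed form: the
    minimal perturbation projects x onto the decision boundary, then a
    small overshoot pushes it just across."""
    f = x @ w + b
    r = -(f / (w @ w)) * w
    return x + (1.0 + overshoot) * r

def hybrid_batch(X, y, w, b, eps=0.1):
    """Craft a hybrid batch of adversarial examples by alternating the
    two attacks (an assumed 50/50 split, not the paper's exact recipe).
    Training on such batches is the adversarial-training step of HAT."""
    adv = np.empty_like(X)
    for i in range(len(X)):
        if i % 2 == 0:
            adv[i] = fgsm_example(X[i], w, b, y[i], eps)
        else:
            adv[i] = deepfool_example(X[i], w, b)
    return adv
```

A defender would then train the classifier on `hybrid_batch(X, y, w, b)` (possibly mixed with the clean batch), so the model sees both a fast gradient-sign perturbation and a minimal boundary-crossing perturbation in every update.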
