Abstract

Deep learning-based models are vulnerable to adversarial examples crafted with different adversarial attack techniques. Numerous attack methods have been proposed that exploit the gradient information of a deep model to craft adversarial examples. Amongst the existing defense mechanisms, adversarial training has gained considerable attention for building robust deep models that remain effective against different adversarial attacks. However, adversarial training incurs a high computational cost during the development of a robust deep model. In this paper, we present a simple yet effective defense mechanism against adversarial attacks. The proposed defense mechanism uses the concept of bit plane slicing to de-noise an input image. The efficacy of the proposed defense technique has been evaluated on two benchmark image datasets, viz. MNIST and Fashion-MNIST. The experiments and results show that the proposed defense technique yields performance comparable and competitive to state-of-the-art defense techniques against adversarial attacks.
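Bit plane slicing decomposes an 8-bit image into its eight binary planes; the higher-order planes carry most of the visual structure, while small pixel-level perturbations (such as those introduced by gradient-based attacks) tend to live in the low-order planes. A minimal sketch of this idea, assuming a NumPy array input and a caller-chosen set of planes to keep (the function name and parameters are illustrative, not the paper's implementation):

```python
import numpy as np

def bit_plane_denoise(img, keep_planes=range(4, 8)):
    """Reconstruct an 8-bit image from a subset of its bit planes.

    Dropping the low-order planes (0..3 by default) discards
    fine-grained intensity variation, which can suppress small
    adversarial perturbations while preserving coarse structure.
    """
    img = np.asarray(img, dtype=np.uint8)
    out = np.zeros_like(img)
    for b in keep_planes:
        plane = (img >> b) & 1          # extract bit plane b (0 = LSB)
        out |= (plane << b).astype(np.uint8)  # re-insert it at its weight
    return out
```

For example, keeping only planes 4 through 7 quantizes each pixel to a multiple of 16, so a perturbation of a few intensity levels is erased entirely.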
