Abstract

Deep learning is increasingly applied in safety-critical scenarios but has recently been shown to be susceptible to imperceptible adversarial perturbations, raising serious concerns about the adversarial robustness of deep neural network (DNN)-based applications. Accordingly, numerous adversarial attacks and defense approaches have been proposed. However, existing studies implement different types of attacks and defenses under their own, often implicit, assumptions, and a full theoretical understanding and interpretation of adversarial examples is still lacking. Rather than reviewing technical progress in adversarial attacks and defenses, this article presents a framework of three perspectives for comprehensively discussing recent works that theoretically explain adversarial examples. Within each perspective, the various hypotheses are further organized into subcategories and introduced systematically. To the best of our knowledge, this study is the first to survey existing research on adversarial examples and adversarial robustness from the interpretability perspective. Drawing on the reviewed literature, this survey characterizes open problems and challenges that need to be addressed and highlights potential directions for future research on adversarial examples.
