Abstract

Deep learning is the most widely used tool in contemporary computer vision. Its ability to accurately solve complex problems is exploited in vision research to learn deep neural models for a variety of tasks, including security-critical applications. However, it is now known that deep learning is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos. Since the discovery of this phenomenon in 2013, it has attracted significant attention from researchers across multiple sub-fields of machine intelligence. In 2018, we published the first-ever review of the contributions made by the computer vision community to adversarial attacks on deep learning (and their defenses). Many of those contributions have inspired new directions in this area, which has matured significantly since the first-generation methods. Hence, as a sequel to our first literature survey, this review article focuses on the advances in this area since 2018. We thoroughly discuss the first-generation attacks and comprehensively cover the modern attacks and their defenses appearing in the prestigious venues of computer vision and machine learning research. Besides offering the most comprehensive literature review of adversarial attacks and defenses to date, the article also provides concise definitions of technical terminology for non-experts. Finally, it discusses the challenges and future outlook of this area in light of the reviewed literature.
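To give a concrete sense of how such visually imperceptible perturbations are crafted, the sketch below implements the Fast Gradient Sign Method (FGSM) of Goodfellow et al., one of the canonical first-generation attacks covered by this survey. It is a minimal PyTorch illustration under our own assumptions, not a reference implementation: the model is assumed to output raw class logits, inputs are assumed to lie in [0, 1], and the L∞ budget eps = 8/255 is an arbitrary but typical choice.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8/255):
    """
    Fast Gradient Sign Method: take one step in the input-gradient
    direction that increases the loss, bounded by an L-infinity
    budget `eps` (names and budget are illustrative assumptions).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # The sign of the input gradient gives the steepest ascent
    # direction under an L-infinity constraint.
    x_adv = x + eps * x.grad.sign()
    # Keep the adversarial example a valid image in [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()
```

A single gradient-sign step of this form is often enough to flip the predictions of undefended classifiers while leaving the image visually unchanged, which is what makes the phenomenon described above so consequential.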

Highlights

  • Deep Learning (DL) [1] is a data-driven technology that can precisely model complex mathematical functions over large data sets

  • Since [7], the computer vision community has contributed significantly to deep learning research, leading to increasingly powerful neural networks [10], [11], [12] that can handle a large number of layers in their architectures, establishing the essence of ‘deep’ learning

  • Since its advent in 2013, this direction has intrigued the computer vision community, leading to a large influx of papers in recent years


Summary

INTRODUCTION

Deep Learning (DL) [1] is a data-driven technology that can precisely model complex mathematical functions over large data sets. However, deep models are also known to be vulnerable to adversarial examples: inputs altered with carefully crafted, often visually imperceptible perturbations that manipulate model predictions. The literature has even witnessed pre-computed perturbations, known as universal perturbations, that can be added to ‘any’ image to fool a given model with high probability [33], [34]. These facts have profound implications for security-critical applications, especially when it is widely believed that deep learning solutions have predictive prowess that can surpass human abilities [13], [35]. In [29], we surveyed the contributions surfaced in this direction until the advent of 2018. Most of those works can be seen as first-generation techniques that explore the core algorithms to fool deep learning or defend it against adversarial attacks.
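To make the image-agnostic nature of universal perturbations concrete, the following minimal PyTorch sketch shows how a single pre-computed perturbation is applied to arbitrary images at test time. This is an illustrative sketch, not the algorithm of [33] or [34]; the tensor names (`images`, `delta`), the L∞ budget of 10/255, and the fooling-rate check are our own assumptions for exposition.

```python
import torch

def apply_universal_perturbation(images, delta, eps=10/255):
    """Add one pre-computed, image-agnostic perturbation `delta` to a
    whole batch of images within an L-infinity budget `eps`."""
    # Project the universal perturbation onto the norm ball once;
    # the same clipped delta is then reused for every input image.
    delta = delta.clamp(-eps, eps)
    # Broadcasting adds the single (C, H, W) perturbation to each of
    # the (N, C, H, W) images; clamp keeps pixels in the valid range.
    return (images + delta).clamp(0.0, 1.0)

# Hypothetical usage, assuming a classifier `model` and a `delta`
# pre-computed by a universal-perturbation algorithm [33], [34]:
#   adv = apply_universal_perturbation(images, delta)
#   fooling_rate = (model(adv).argmax(1) !=
#                   model(images).argmax(1)).float().mean()
```

Because `delta` is computed once and reused for every input, the attacker needs no per-image optimization at deployment time, which is what makes universal perturbations particularly concerning for the security-critical applications mentioned above.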

DEFINITION OF TERMS
ADVERSARIAL ATTACKS
RECENT ATTACKS ON CLASSIFIERS
MISCELLANEOUS ATTACKS
ATTACKS BEYOND CLASSIFICATION
ON THE EXISTENCE OF ADVERSARIAL EXAMPLES
ON INPUT-SPECIFIC PERTURBATIONS
DEFENSE AGAINST ADVERSARIAL ATTACKS
DETECTION FOR DEFENSE
DISCUSSION
CONCLUSION
