Abstract

Recently, adversarial attacks have drawn the community's attention as an effective tool for degrading the accuracy of neural networks. However, their practical use in the real world remains limited. The main reason is that real-world machine learning systems, such as content filters or face detectors, often consist of multiple neural networks, each performing an individual task. To attack such a system, an adversarial example has to fool many distinct networks at once, which is the major challenge addressed by this paper. In this paper, we investigate multitask adversarial attacks as a threat to real-world machine learning solutions. We provide a novel black-box adversarial attack that significantly outperforms current state-of-the-art methods, such as the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM, also known as Iterative-FGSM), in the multitask setting.
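For context, the FGSM and BIM baselines mentioned above perturb an input along the sign of the loss gradient within an L-infinity budget. Below is a minimal sketch, assuming a PyTorch classifier; the helper name and parameter values are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def bim_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Basic Iterative Method (Iterative-FGSM); steps=1 with alpha=eps reduces to FGSM."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take a signed-gradient step that increases the classification loss.
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```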

Highlights

  • Deep neural networks (DNN) have reached outstanding accuracy in many tasks related to computer vision

  • We propose a new method for multitask adversarial attacks based on dispersion amplification of the inner DNN activations

  • We show experimentally that, although dispersion reduction seems the intuitive choice since it reduces contrast in the inner layers, amplifying the dispersion degrades DNN accuracy just as effectively (a minimal sketch follows this list)
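The highlights describe the attack as amplifying the dispersion of inner DNN activations. The following is a minimal sketch of how such an attack could look, assuming a PyTorch surrogate model with a hookable intermediate layer and an iterative signed-gradient update; the layer choice, loss formulation, and helper name are illustrative assumptions, and in the black-box setting the perturbation would be crafted on the surrogate and transferred to the target system.

```python
import torch

def dispersion_amplification(surrogate, layer, x, eps=8/255, alpha=2/255, steps=20):
    """Craft a perturbation that maximizes the dispersion (std) of an inner feature map."""
    feats = {}
    # Capture the intermediate activations of the chosen layer on each forward pass.
    handle = layer.register_forward_hook(lambda m, i, o: feats.update(out=o))
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        surrogate(x_adv)
        # Amplify the dispersion (standard deviation) of the inner activations.
        loss = feats["out"].std()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Keep the perturbation within the eps-ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    handle.remove()
    return x_adv
```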


Summary

Introduction

Deep neural networks (DNNs) have reached outstanding accuracy in many tasks related to computer vision. This has led to their adoption in safety-critical systems such as autonomous driving and medical scanning. Despite their effectiveness, many security flaws [1] in machine learning systems have been found recently, including vulnerability to adversarial attacks [2]. Two threat models are considered in adversarial research: white-box and black-box. While the white-box setting is of special interest to researchers due to its fully controlled environment, from a cyber-security perspective black-box attacks are more relevant.


