Abstract

With the development of artificial intelligence, deep neural networks (DNNs) have been widely deployed, and on some complex problems their performance even exceeds that of humans. However, recent research shows that deep neural networks face multiple security threats. By adding carefully crafted noise to the input data, an attacker can cause a well-performing deep neural network to make wrong decisions, or even produce the same recognition result for completely different inputs. Because the human eye can hardly distinguish a sample before and after the perturbation is added, such samples are highly inconspicuous. Attacks of this kind are called adversarial attacks, and the carefully constructed inputs used to fool deep neural networks are called adversarial examples. Most existing research on adversarial attacks targets image classification, and few works have shifted their attention to object detectors. Object detection underpins many computer vision tasks and has been applied in many real-world applications, such as autonomous driving, pedestrian recognition, and pathological detection. It is therefore of great significance to study the vulnerability of object detection models. Attacking object detection models is more difficult because detection combines multi-object localization and multi-object classification. In this paper, we study adversarial attacks against object detection models and propose an asterisk-shaped adversarial patch generation algorithm that renders objects undetectable to current state-of-the-art object detectors. Extensive experimental results show that our method achieves strong attack performance while modifying only a small number of image pixels.
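As a rough illustration of why an asterisk-shaped patch touches only a small fraction of image pixels, the sketch below builds a binary asterisk mask with NumPy and pastes patch values only inside it. The `asterisk_mask` and `apply_patch` helpers, the number of arms, and the line thickness are illustrative assumptions for this sketch, not the patch-generation algorithm proposed in the paper.

```python
import numpy as np

def asterisk_mask(size, arms=4, thickness=2):
    """Binary mask shaped like an asterisk: `arms` straight lines of the
    given thickness crossing at the centre of a size x size square.
    (Illustrative assumption; the paper's exact construction may differ.)"""
    mask = np.zeros((size, size), dtype=bool)
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    for k in range(arms):
        theta = np.pi * k / arms  # arm directions spaced evenly over 180 degrees
        # perpendicular distance of each pixel from the line through the centre
        dist = np.abs((xs - c) * np.sin(theta) - (ys - c) * np.cos(theta))
        mask |= dist <= thickness / 2.0
    return mask

def apply_patch(image, patch, top, left):
    """Overwrite only the asterisk-shaped pixels of `image` with `patch` values."""
    size = patch.shape[0]
    mask = asterisk_mask(size)
    out = image.copy()
    out[top:top + size, left:left + size][mask] = patch[mask]
    return out, mask

if __name__ == "__main__":
    img = np.random.rand(224, 224, 3).astype(np.float32)    # stand-in input image
    patch = np.random.rand(32, 32, 3).astype(np.float32)    # stand-in adversarial patch values
    adv, mask = apply_patch(img, patch, top=96, left=96)
    modified = mask.sum() / (img.shape[0] * img.shape[1])
    print(f"fraction of pixels modified: {modified:.4%}")    # well under 1% of the image
```

In a full attack, the values written inside the mask would be optimized against the detector's loss rather than sampled at random; the sketch only shows how the asterisk shape confines the perturbation to a small pixel budget.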
