Abstract

The existence of physical-world adversarial examples such as adversarial patches demonstrates the vulnerability of real-world deep learning systems. It is therefore essential to develop efficient adversarial attack algorithms to identify potential risks and build robust systems. Patch-based physical adversarial attacks have shown their effectiveness in attacking neural network-based object detectors. However, the generated patches are quite perceptible to humans, violating the fundamental imperceptibility assumption of adversarial examples. In this work, we present task-specific loss functions that can generate imperceptible adversarial patches based on camouflaged patterns. First, we propose a constrained optimization method with two camouflage assessment metrics that quantify camouflage performance. Then, we show that regularization with these metrics helps generate adversarial patches based on camouflage patterns. Finally, we validate our methods with various experiments and show that we can generate natural-style camouflaged adversarial patches with comparable attack performance.
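
To make the overall recipe concrete, the following is a minimal sketch (not the authors' code) of optimizing an adversarial patch with an attack loss plus two camouflage-regularization terms, as described in the abstract. The detector interface, the specific metric definitions (total variation and distance to a reference camouflage texture), the patch placement, and the weighting coefficients are illustrative assumptions, not the paper's actual formulation.

    # Python / PyTorch sketch under the assumptions stated above.
    import torch
    import torch.nn.functional as F

    def total_variation(patch):
        # Smoothness proxy used here as one assumed camouflage regularizer.
        dh = (patch[:, :, 1:, :] - patch[:, :, :-1, :]).abs().mean()
        dw = (patch[:, :, :, 1:] - patch[:, :, :, :-1]).abs().mean()
        return dh + dw

    def color_similarity(patch, reference):
        # Hypothetical second metric: distance to a reference camouflage texture.
        return F.mse_loss(patch, reference)

    def attack_loss(detector, patched_image):
        # Placeholder objective: suppress the detector's maximum detection score.
        # 'detector' is assumed to return a tensor of per-box scores.
        scores = detector(patched_image)
        return scores.max()

    def optimize_patch(detector, scene, reference, steps=500, lr=0.01,
                       lambda_tv=1.0, lambda_color=1.0):
        # 'scene' is a (1, 3, H, W) image tensor; the patch is placed top-left
        # for illustration only.
        patch = torch.rand(1, 3, 64, 64, requires_grad=True)
        optimizer = torch.optim.Adam([patch], lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            patched = scene.clone()
            patched[:, :, :64, :64] = patch
            loss = (attack_loss(detector, patched)
                    + lambda_tv * total_variation(patch)
                    + lambda_color * color_similarity(patch, reference))
            loss.backward()
            optimizer.step()
            patch.data.clamp_(0, 1)  # keep the patch a valid image
        return patch.detach()

The key design point conveyed by the abstract is that the camouflage assessment metrics enter the objective as regularizers alongside the attack loss, so the resulting patch trades off detector suppression against resemblance to a natural camouflage pattern.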
