Abstract

Research on attack transferability is of great importance because it guides how to mount an adversarial attack without any knowledge of the target model. However, it remains challenging for adversarial examples to retain strong transferability, especially for black-box attacks carried out in the physical world. To enhance the black-box transferability of physical attacks on object detectors, we present a novel adversarial learning method that produces adversarial patches by redistributing separable attention maps. Concretely, we first develop smoothed multilayer attention maps by introducing serial composite transformations, which suppress model-specific noise on the one hand and cover the objects to be concealed at various resolutions on the other. In addition, our method uses a scalable mask to separate object attention from the background and adjusts their distribution with a novel loss function. Extensive experiments show that our approach outperforms state-of-the-art methods in both the digital space and the physical world. Our code is available at https://github.com/zhangyu13a/transPhyAtt.
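To make the attention-redistribution idea concrete, below is a minimal sketch of how a mask-separated attention loss of this kind could be written. It is an illustration under stated assumptions, not the authors' implementation: `separable_attention_loss`, the foreground/background split, and the balancing weight `lam` are hypothetical names, and the sketch assumes a PyTorch setup where a normalized attention map and a binary object mask are already available.

```python
import torch

def separable_attention_loss(attn: torch.Tensor,
                             obj_mask: torch.Tensor,
                             lam: float = 1.0) -> torch.Tensor:
    """Hypothetical loss separating object attention from the background.

    attn:     (B, H, W) attention map, values in [0, 1]
    obj_mask: (B, H, W) binary mask, 1 inside the object to conceal, 0 outside
    lam:      illustrative weight balancing the two terms (an assumption,
              not a value from the paper)
    """
    # Split the attention map into the object region and the background.
    fg = attn * obj_mask
    bg = attn * (1.0 - obj_mask)

    # Mean attention per region; clamp denominators to avoid division by zero.
    fg_mean = fg.sum(dim=(1, 2)) / obj_mask.sum(dim=(1, 2)).clamp(min=1.0)
    bg_mean = bg.sum(dim=(1, 2)) / (1.0 - obj_mask).sum(dim=(1, 2)).clamp(min=1.0)

    # Minimizing this suppresses attention on the object while rewarding
    # attention that moves to the background, i.e., it redistributes the map.
    return (fg_mean - lam * bg_mean).mean()
```

In such a setup, this loss would be minimized with respect to the patch pixels (for example via gradient descent on a patch pasted into the input image), so that the detector's attention drains away from the concealed object.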
