Abstract

Research on attack transferability is of great importance because it guides how to conduct an adversarial attack without any knowledge of the target model. However, it remains challenging for adversarial examples to maintain strong transferability, especially for black-box attacks carried out in the physical world. To enhance the black-box transferability of physical attacks on object detectors, we present a novel adversarial learning method that produces adversarial patches by redistributing separable attention maps. Concretely, we first develop smoothed multilayer attention maps by introducing serial composite transformations, which suppress model-specific noise on the one hand and cover the objects to be concealed at various resolutions on the other. In addition, our method uses a scalable mask to separate object attention from background attention and adjusts their distribution with a novel loss function. Extensive experiments show that our approach outperforms state-of-the-art methods in both the digital space and the physical world. Our code is available at https://github.com/zhangyu13a/transPhyAtt.
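The abstract's two key ingredients, transformation-smoothed attention maps and a mask-based attention-redistribution loss, can be illustrated with a short sketch. The snippet below is only a minimal PyTorch-style illustration of the general idea, not the authors' implementation: `model_attn_fn`, the scale set, and the exact form of the loss are hypothetical stand-ins.

```python
import torch
import torch.nn.functional as F


def smoothed_attention(model_attn_fn, image: torch.Tensor,
                       scales=(0.75, 1.0, 1.25)) -> torch.Tensor:
    """Average attention maps over resized copies of the input.

    Illustrates smoothing over composite transformations; here only
    resizing is used. `model_attn_fn` is a hypothetical callable mapping
    a (1, C, H', W') image to an (H', W') attention map.
    """
    h, w = image.shape[-2:]
    maps = []
    for s in scales:
        resized = F.interpolate(image, scale_factor=s, mode="bilinear",
                                align_corners=False)
        attn = model_attn_fn(resized)
        # Resize each map back to the input resolution before averaging.
        attn = F.interpolate(attn[None, None], size=(h, w), mode="bilinear",
                             align_corners=False)[0, 0]
        maps.append(attn)
    return torch.stack(maps).mean(dim=0)


def attention_redistribution_loss(attention_map: torch.Tensor,
                                  object_mask: torch.Tensor,
                                  eps: float = 1e-8) -> torch.Tensor:
    """Hypothetical loss: shrink the share of attention on the masked
    object region relative to the background share."""
    total = attention_map.sum() + eps
    object_share = (attention_map * object_mask).sum() / total
    background_share = (attention_map * (1.0 - object_mask)).sum() / total
    return object_share - background_share


if __name__ == "__main__":
    # Toy usage with a dummy attention function and a square object mask.
    dummy_attn_fn = lambda img: img.abs().mean(dim=(0, 1))
    image = torch.rand(1, 3, 64, 64)
    mask = torch.zeros(64, 64)
    mask[16:48, 16:48] = 1.0
    attn = smoothed_attention(dummy_attn_fn, image)
    print(attention_redistribution_loss(attn, mask))
```

Minimizing such a loss while optimizing the patch pixels would drive attention away from the masked object region and toward the background, which matches the concealment intuition described in the abstract.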
