Abstract

Adversarial attacks have received great attention for their capacity to deceive state-of-the-art neural networks by modifying objects in the physical domain. Patch-based attacks in particular have attracted attention for their optimization effectiveness and their feasible adaptation to arbitrary objects when attacking neural network-based object detectors. However, despite their strong attack performance, the generated patches are highly perceptible to humans, violating the fundamental imperceptibility assumption of adversarial examples. In this paper, we propose a camouflaged adversarial patch optimization method that uses military camouflage assessment metrics to produce naturalistic patch attacks. We also investigate camouflaged attack loss functions and the application of various camouflaged patches to army tank images, and we validate the proposed approach with extensive experiments attacking the YOLOv5 detection model. Our method produces more natural and realistic-looking camouflaged patches while achieving competitive attack performance.
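The core idea, optimizing a patch under a joint detection-suppression and camouflage objective, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the detector interface, the fixed-position patch-placement helper, and the nearest-palette-color camouflage loss are all simplifying assumptions standing in for the paper's military camouflage assessment metrics.

    import torch

    def camouflage_loss(patch, palette):
        # Hypothetical stand-in for a camouflage assessment metric: penalize
        # each pixel by its distance to the nearest background palette color.
        # patch: (3, H, W) in [0, 1]; palette: (K, 3) reference colors.
        pixels = patch.permute(1, 2, 0).reshape(-1, 3)   # (H*W, 3)
        return torch.cdist(pixels, palette).min(dim=1).values.mean()

    def apply_patch(images, patch, top=32, left=32):
        # Simplified placement: paste the patch at a fixed location on each
        # image; real physical attacks would also model scale and warping.
        out = images.clone()
        _, h, w = patch.shape
        out[:, :, top:top + h, left:left + w] = patch
        return out

    def optimize_patch(detector, images, palette, steps=500, lr=0.01, alpha=0.1):
        # `detector(batch)` is assumed to return per-image objectness scores
        # for the target class; minimizing them suppresses detections.
        patch = torch.rand(3, 64, 64, requires_grad=True)
        opt = torch.optim.Adam([patch], lr=lr)
        for _ in range(steps):
            patched = apply_patch(images, patch.clamp(0, 1))
            loss = (detector(patched).mean()
                    + alpha * camouflage_loss(patch.clamp(0, 1), palette))
            opt.zero_grad()
            loss.backward()
            opt.step()
        return patch.detach().clamp(0, 1)

Under these assumptions, the weight alpha trades off attack strength against camouflage similarity: larger values yield more background-like but typically weaker patches.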
