Abstract

Object detection is a central task in computer vision (CV) with many applications in security-critical settings. However, numerous works have shown that neural network-based object detectors are vulnerable to adversarial attacks. In this paper, we study physical-world adversarial attacks on object detectors and propose a new attack, the Misleading Attention and Classification Attack (MACA), which generates adversarial patches that mislead object detectors. Our patch-generation scheme constrains the noise of the adversarial patches so that the generated patches remain visually similar to natural images. To increase patch robustness, the attack simulates the complex external physical environment and the 3D deformations of non-rigid objects. We attack state-of-the-art object detectors (e.g., YOLOv5) and show that our technique transfers well across different detectors. Extensive experiments demonstrate that the digital adversarial patches can be transferred to the real world while preserving both their cross-model transferability and their attack success rate.
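To make the high-level recipe in the abstract concrete, the following is a minimal illustrative sketch (not the authors' released code) of how an adversarial patch of this kind is typically optimized: an Expectation-over-Transformation style loop that samples simulated physical conditions, plus a total-variation penalty that keeps the patch smooth and closer in appearance to a natural image. The helpers detection_score (the detector's confidence for the targeted object in a patched image) and apply_patch (pasting the patch onto the object region) are hypothetical placeholders for the paper's detector and rendering pipeline.

import torch

def total_variation(patch):
    # Penalize abrupt pixel changes so the patch looks smoother / more natural.
    dh = (patch[:, 1:, :] - patch[:, :-1, :]).abs().mean()
    dw = (patch[:, :, 1:] - patch[:, :, :-1]).abs().mean()
    return dh + dw

def random_physical_transform(patch):
    # Stand-in for simulating the physical world: random brightness/contrast
    # shifts and additive noise; a full pipeline would also warp the patch to
    # approximate 3D deformation of a non-rigid object (e.g., clothing folds).
    contrast = torch.empty(1).uniform_(0.8, 1.2)
    brightness = torch.empty(1).uniform_(-0.2, 0.2)
    out = (patch * contrast + brightness).clamp(0, 1)
    out = out + 0.03 * torch.randn_like(out)
    return out.clamp(0, 1)

def optimize_patch(detection_score, apply_patch, images,
                   steps=500, lr=0.01, tv_weight=2.5):
    # detection_score(img) -> scalar confidence to suppress (hypothetical).
    # apply_patch(img, patch) -> image with patch rendered on the object (hypothetical).
    patch = torch.rand(3, 100, 100, requires_grad=True)  # random init in [0, 1]
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for img in images:
            transformed = random_physical_transform(patch)  # EOT-style sampling
            patched = apply_patch(img, transformed)
            loss = loss + detection_score(patched)          # drive detector confidence down
        loss = loss / len(images) + tv_weight * total_variation(patch)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0, 1)                              # keep patch a valid image
    return patch.detach()

Averaging the loss over randomly transformed copies of the patch is what gives robustness to printing, lighting, and viewpoint changes; the weight on the total-variation term trades off attack strength against visual naturalness.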
