Abstract

Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial examples crafted by adding imperceptible perturbations to input images. Most attack studies have focused on classifiers; few have targeted object detectors, which are more challenging to attack. This paper proposes a more natural and robust adversarial attack scheme against practical object detectors. First, we extract the target region through image semantic segmentation and add perturbations only within that region, yielding more practical adversarial examples. Then, style transfer is applied to make the generated adversarial examples look more natural. Finally, we improve the robustness of the adversarial examples by simulating changes in viewing angle, lighting, distance, and background. The generated adversarial examples prove more successful and more robust in adversarial attacks than those produced by other methods. This paper sheds light on the potential threat of adversarial examples in the physical world and aims to provide guidance for defending against such natural and robust adversarial attacks.
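The sketch below is not the authors' code; it is a minimal illustration of two of the ideas the abstract describes: restricting the perturbation to a segmentation mask of the target region, and averaging gradients over random lighting/viewpoint transformations to simulate physical-world variation. The toy model, the hand-drawn mask, and the specific transforms are assumptions made for illustration only.

```python
# Illustrative sketch only (assumed setup, not the paper's method or code):
# an FGSM-style perturbation confined to a segmented target region, with
# gradients averaged over random transforms (lighting, horizontal shift)
# as a cheap stand-in for angle/lighting/distance/background changes.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

model = torch.nn.Sequential(            # stand-in for a classifier/detector head
    torch.nn.Conv2d(3, 8, 3, padding=1),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 2),
)

image = torch.rand(1, 3, 64, 64)        # input image in [0, 1]
mask = torch.zeros(1, 1, 64, 64)        # binary mask of the target region
mask[:, :, 16:48, 16:48] = 1.0          # (would come from semantic segmentation)
label = torch.tensor([1])
epsilon = 8 / 255                       # perturbation budget

def random_transform(x):
    """Cheap stand-ins for lighting and viewpoint variation."""
    x = x * (0.7 + 0.6 * torch.rand(1))              # random brightness
    shift = int(torch.randint(-4, 5, (1,)))          # small horizontal shift
    return torch.roll(x, shifts=shift, dims=-1).clamp(0, 1)

delta = torch.zeros_like(image, requires_grad=True)  # perturbation variable
grad_sum = torch.zeros_like(image)
for _ in range(10):                      # expectation over random transforms
    x = random_transform(image + delta * mask)
    loss = F.cross_entropy(model(x), label)
    grad_sum += torch.autograd.grad(loss, delta)[0]

# Apply the sign of the averaged gradient only inside the segmented region.
adv = (image + epsilon * grad_sum.sign() * mask).clamp(0, 1)
print("pixels changed outside mask:",
      ((adv != image).float() * (1 - mask)).sum().item())
```

In the paper's pipeline, the mask comes from a semantic segmentation model, the perturbation is additionally regularized by style transfer to keep it natural-looking, and the transformation set covers physical factors (angle, lighting, distance, background) rather than the toy transforms used here.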
