Abstract

In recent years, deep learning security has become an active research topic: attackers craft well-designed adversarial examples to mislead deep learning models, especially object detectors. The adversarial patch, which adds a contiguous region of noise to an image, has been widely used to attack object detectors. However, current research on adversarial patches for object detectors has a notable limitation: attack performance tends to degrade when parts of the patch are accidentally occluded. To address this issue, we propose a two-stage method based on a Generative Adversarial Network (TS-GAN). In the first stage, TS-GAN trains the generator with an occlusion rule that simulates the various ways patches can be occluded in physical scenes. In the second stage, the generator's parameters are fixed, and an optimization method selects latent variables that generate the most effective adversarial patches. Extensive experiments in both the digital world and physical environments show that our method achieves stable attack performance under varying occlusion conditions.
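The sketch below illustrates the two-stage idea described above, assuming a PyTorch-style generator and a placeholder detector loss; the generator architecture, the `random_occlusion` rule, and `detection_loss` are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (assumed names/shapes): stage 1 trains a generator on randomly
# occluded patches; stage 2 freezes the generator and optimizes the latent vector.
import torch
import torch.nn as nn

class PatchGenerator(nn.Module):
    """Hypothetical generator mapping a latent vector to an RGB patch."""
    def __init__(self, z_dim=100, patch_size=64):
        super().__init__()
        self.patch_size = patch_size
        self.net = nn.Sequential(
            nn.Linear(z_dim, 3 * patch_size * patch_size),
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, self.patch_size, self.patch_size)

def random_occlusion(patch, max_frac=0.3):
    """Illustrative occlusion rule: zero out a random rectangle of the patch."""
    _, _, h, w = patch.shape
    oh = int(h * torch.empty(1).uniform_(0.0, max_frac).item())
    ow = int(w * torch.empty(1).uniform_(0.0, max_frac).item())
    top = torch.randint(0, h - oh + 1, (1,)).item()
    left = torch.randint(0, w - ow + 1, (1,)).item()
    occluded = patch.clone()
    occluded[:, :, top:top + oh, left:left + ow] = 0.0
    return occluded

def detection_loss(patch):
    """Placeholder for the detector's score on patched images (stand-in only)."""
    return patch.mean()

# Stage 1: train the generator so occluded patches still minimize the detector loss.
G = PatchGenerator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
for _ in range(10):
    z = torch.randn(8, 100)
    loss = detection_loss(random_occlusion(G(z)))
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()

# Stage 2: freeze the generator and optimize the latent variable instead.
for p in G.parameters():
    p.requires_grad_(False)
z = torch.randn(1, 100, requires_grad=True)
opt_z = torch.optim.Adam([z], lr=1e-2)
for _ in range(10):
    loss = detection_loss(random_occlusion(G(z)))
    opt_z.zero_grad()
    loss.backward()
    opt_z.step()
```

In a real pipeline, `detection_loss` would apply the generated patch to training images, run the target detector, and penalize objectness or class confidence; the occlusion simulation is what encourages the resulting patch to remain effective when partially covered.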
