Abstract

Adversarial attacks have stimulated research interest in deep learning security. However, most existing adversarial attack methods are developed for classification. In this paper, we use Projected Gradient Descent (PGD), the strongest first-order attack on classification, to produce adversarial examples against the total loss of the Faster R-CNN object detector. Compared with the state-of-the-art Dense Adversary Generation (DAG) method, our attack is more efficient and more powerful in both white-box and black-box settings, and is applicable to a variety of neural network architectures. On Pascal VOC2007, under white-box attack, DAG reduces Faster R-CNN with a VGG16 backbone to 5.92% mAP using 41.42 iterations on average, while our method achieves 0.90% mAP using only 4 iterations. We also analyze the differences between attacks on classification and detection, and find that in addition to misclassification, adversarial examples on detection also lead to mis-localization. Furthermore, we validate the adversarial effectiveness of both the Region Proposal Network (RPN) loss and the Fast R-CNN loss, the two components of the total loss. Our research will provide inspiration for further efforts in adversarial attacks on other vision tasks.
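To make the attack concrete, below is a minimal sketch of PGD applied to a detector's total loss, assuming a torchvision Faster R-CNN (ResNet50-FPN backbone) as a stand-in for the paper's VGG16-backbone model; the step size, budget, and iteration count are illustrative, not the paper's settings.

```python
# Hedged sketch: PGD on the total (RPN + Fast R-CNN) loss of a Faster R-CNN detector.
# Assumptions: torchvision's fasterrcnn_resnet50_fpn stands in for the paper's
# VGG16-backbone Faster R-CNN; `image` is a 3xHxW float tensor in [0, 1];
# `target` is a dict with "boxes" (Nx4 float) and "labels" (N int64).
import torch
import torchvision

def pgd_attack_detector(model, image, target, eps=8 / 255, alpha=2 / 255, steps=4):
    model.train()  # torchvision detectors return the loss dict only in train mode
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss_dict = model([x_adv], [target])       # RPN + Fast R-CNN loss terms
        total_loss = sum(loss_dict.values())       # attack the summed total loss
        grad = torch.autograd.grad(total_loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                 # gradient ascent step
            x_adv = image + (x_adv - image).clamp(-eps, eps)    # project to L_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                       # keep a valid image
    return x_adv.detach()

# Illustrative usage (pretrained weights; not the paper's exact model):
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
```

The only gradient used is with respect to the input image; model parameters are left untouched, so a single backward pass per iteration suffices, which is consistent with the small iteration counts reported above.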
