Abstract

Deep neural networks have become the de facto choice in various industrial applications. However, they are also known to be vulnerable to adversarial examples crafted to make models predict erroneously. In this paper, we present a systematic evaluation of adversarial attacks on object detection, focusing on Faster R-CNN and YOLO models. These models are important milestones in the field of object detection and represent the state of the art (SOTA) for the two-stage and one-stage detection paradigms, respectively. First, we comprehensively analyze several popular adversarial attacks and discuss their principles within a common framework. We then comparatively evaluate attack performance using metrics such as mean average precision (mAP), time cost, the number of attack iterations, and perturbation distortion. Beyond the attacks themselves, we further investigate how well these adversarial examples resist common defense methods. Finally, we offer perspectives on the remaining challenges of adversarial attack and defense for object detectors.
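
To make the iterative-attack setting concrete, below is a minimal sketch (not drawn from the paper) of a generic untargeted, PGD-style perturbation that ascends a detector's loss under an L-infinity budget; the `detector_loss` function, image size, and step parameters are hypothetical placeholders, and in practice one would substitute the combined classification and localization loss of a Faster R-CNN or YOLO model.

```python
# Illustrative sketch only: a generic PGD-style attack on a detector's loss.
# `detector_loss` is a hypothetical stand-in; a real implementation would run
# the detector and sum its classification + localization losses.
import torch


def detector_loss(images: torch.Tensor) -> torch.Tensor:
    # Placeholder loss so the sketch runs standalone.
    return images.square().mean()


def pgd_attack(images, loss_fn, eps=8 / 255, alpha=2 / 255, steps=10):
    """Return adversarial images inside an L-infinity ball of radius eps."""
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(adv)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                   # ascend the loss
            adv = images + (adv - images).clamp(-eps, eps)    # project to eps-ball
            adv = adv.clamp(0.0, 1.0)                         # keep valid pixels
        adv = adv.detach()
    return adv


if __name__ == "__main__":
    clean = torch.rand(1, 3, 416, 416)  # dummy image batch
    adv = pgd_attack(clean, detector_loss)
    print("max distortion:", (adv - clean).abs().max().item())
```

The number of steps and the per-step size `alpha` correspond to the attack-iteration and distortion metrics mentioned above: more iterations typically raise attack strength and time cost, while `eps` bounds the visible distortion.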
