Abstract

Despite their tremendous success in various machine learning tasks, deep neural networks (DNNs) are inherently vulnerable to adversarial examples, maliciously crafted inputs designed to cause DNNs to misbehave. Intensive research has been conducted on this phenomenon for simple tasks (e.g., image classification). However, little is known about this adversarial vulnerability in object detection, a much more complicated task that often requires specialized DNNs and multiple additional components. In this paper, we present DetectSec, a unified platform for robustness analysis of object detection models. Currently, DetectSec implements 13 representative adversarial attacks with 7 utility metrics and 13 defenses on 18 standard object detection models. Leveraging DetectSec, we conduct the first rigorous evaluation of adversarial attacks on state-of-the-art object detection models. We analyze the impact of factors such as DNN architecture and capacity on model robustness. We show that many conclusions about adversarial attacks and defenses in image classification do not transfer to object detection; for example, targeted attacks are stronger than untargeted attacks against two-stage detectors. Our findings will aid future efforts in understanding and defending against adversarial attacks in complicated tasks. In addition, we compare the robustness of different detection models and discuss their relative strengths and weaknesses. The DetectSec platform will be released as open source as a facility for further research on adversarial attacks and defenses in object detection.
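To make the attack setting concrete, the sketch below shows one common way an untargeted adversarial attack can be mounted against a two-stage detector: projected gradient descent (PGD) that maximizes the detector's total training loss within a small L-infinity ball. This is a minimal illustration under assumptions of our own (torchvision's Faster R-CNN as a stand-in model, assumed epsilon and step sizes), not DetectSec's API or the paper's exact attack implementations.

```python
# Illustrative sketch only: untargeted PGD against a two-stage detector.
# The model, epsilon, and step sizes below are assumptions for demonstration,
# not the DetectSec implementation.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn


def pgd_untargeted(model, image, target, eps=8 / 255, alpha=2 / 255, steps=10):
    """Maximize the detector's training loss within an L-infinity ball of radius eps."""
    model.train()  # torchvision detectors return their loss dict only in train mode
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss_dict = model([x_adv], [target])      # RPN + ROI-head losses
        loss = sum(loss_dict.values())            # untargeted: push the total loss up
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                   # gradient ascent step
            x_adv = image + (x_adv - image).clamp(-eps, eps)      # project back to eps-ball
            x_adv = x_adv.clamp(0, 1)                             # keep a valid image
    return x_adv.detach()


if __name__ == "__main__":
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    image = torch.rand(3, 480, 640)  # placeholder image in [0, 1]
    target = {"boxes": torch.tensor([[50.0, 60.0, 200.0, 220.0]]),
              "labels": torch.tensor([1])}
    adv_image = pgd_untargeted(model, image, target)
```

A targeted variant would instead minimize the loss with respect to attacker-chosen boxes and labels; comparing the two regimes is one of the evaluations the abstract refers to.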
