Abstract

Deep learning plays a critical role in artificial intelligence applications, and the trend of processing images or videos as input data while pursuing execution efficiency in practical applications is unstoppable. However, the complex structure of deep networks makes them vulnerable to attacks. Object detection, a prominent application built on deep learning frameworks, inherits this weakness, which is compounded by its multi-task nature. Moreover, because applications involving object detection are deeply integrated into our daily lives, successful attacks could lead to severe losses. Adversarial example attacks, the mainstream attack method, provide an effective and comprehensible approach to generating perturbations. In this survey, we review existing adversarial example attacks on object detection tasks and inductively discuss the similarities and differences among these approaches. Finally, we discuss attacks in the object detection field and point out possible directions for adversarial defenses in future studies.
