Abstract

Object detection, a fundamental task in computer vision and artificial intelligence, has advanced considerably through the adoption of deep learning techniques. Yet, despite impressive gains in both accuracy and efficiency, object detection algorithms remain inherently vulnerable to adversarial attacks. These carefully crafted perturbations pose significant risks, especially given the broad deployment of object detection across safety-critical sectors such as autonomous transportation, medical imaging, and security systems. This paper offers a comprehensive review of adversarial attacks against object detection systems, dissecting the methods employed and examining the implications of their exploits. It analyzes the mechanics and consequences of both white-box and black-box attacks on prevalent object detection networks, including but not limited to Faster R-CNN, YOLO, and SSD. Furthermore, the paper surveys a range of defense strategies developed to mitigate adversarial attacks, including adversarial training, gradient masking, input transformations, and randomized defenses. While these strategies hold promise, they have known limitations and do not offer a universal defense against all adversarial attacks. As such, the paper underscores the urgent need for robust defense mechanisms and aims to stimulate further research into truly resilient object detection systems capable of withstanding the growing threat of adversarial attacks.
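To make the white-box setting concrete, the sketch below shows a minimal FGSM-style perturbation against a detector, assuming torchvision's Faster R-CNN implementation; the epsilon budget, input size, and single-box target are hypothetical placeholders for illustration, not values or methods from this paper.

```python
# Minimal white-box FGSM-style sketch against a detector (illustrative only).
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.train()  # training mode so the model returns its loss dict
for p in model.parameters():
    p.requires_grad_(False)  # we only need gradients w.r.t. the input image

image = torch.rand(3, 416, 416)  # stand-in input image in [0, 1]
target = {                        # hypothetical ground-truth annotation
    "boxes": torch.tensor([[100.0, 100.0, 200.0, 200.0]]),
    "labels": torch.tensor([1]),
}

image.requires_grad_(True)
loss_dict = model([image], [target])  # detection losses (classification, box regression, RPN)
loss = sum(loss_dict.values())
loss.backward()

epsilon = 8 / 255  # perturbation budget (assumed)
# Ascend the loss: one signed-gradient step, clipped back to the valid pixel range.
adv_image = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```

Black-box attacks follow the same objective but must estimate or transfer such gradients without direct access to the model, which is one reason the paper treats the two settings separately.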
