Abstract

This article demonstrates that nonmaximum suppression (NMS), which is commonly used in object detection (OD) tasks to filter redundant detection results, is no longer secure. Because NMS is an integral part of OD systems, thwarting its functionality can have unexpected or even lethal consequences for such systems. In this article, an adversarial example attack that triggers malfunctioning of NMS in OD models is proposed. The attack, named Daedalus, compresses the dimensions of detection boxes so that they evade NMS. As a result, the final detection output contains extremely dense false positives, which can be fatal for many OD applications, such as autonomous vehicles and surveillance systems. The attack generalizes across different OD models, allowing it to cripple a variety of OD applications. Furthermore, a method for crafting robust adversarial examples is developed that uses an ensemble of popular detection models as substitutes. Given the pervasive reuse of models in real-world OD scenarios, Daedalus examples crafted against such an ensemble can attack victim models without knowledge of their parameters. The experimental results demonstrate that the attack effectively stops NMS from filtering redundant bounding boxes: Daedalus raises the false positive rate in detection results to 99.9% and reduces the mean average precision score to 0, while keeping the distortion of the original inputs low. The results also demonstrate that the attack can be launched against real-world OD systems via printed posters.
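
To make the attacked mechanism concrete, below is a minimal sketch of greedy, IoU-based NMS in Python. This is not the authors' code; the box coordinates, scores, and threshold are illustrative assumptions. It shows why heavily overlapping redundant detections are normally suppressed, and why Daedalus-style shrunken boxes, whose pairwise IoU falls below the suppression threshold, all survive as dense false positives.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Retain only boxes whose overlap with the kept box is below the threshold.
        order = np.array([j for j in order[1:]
                          if iou(boxes[i], boxes[j]) <= iou_threshold])
    return keep

# Redundant detections of one object: high pairwise IoU, so NMS keeps one box.
boxes = np.array([[10, 10, 110, 110],
                  [12, 12, 112, 112],
                  [ 8,  9, 108, 109]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # -> [0]

# Daedalus-style compressed boxes (illustrative coordinates): the shrunken
# boxes barely overlap, so every pairwise IoU is below the threshold and
# all of them survive NMS as separate false-positive detections.
shrunk = np.array([[55, 55, 60, 60],
                   [62, 62, 67, 67],
                   [50, 50, 54, 54]], dtype=float)
print(nms(shrunk, scores))  # -> [0, 1, 2]
```

In this sketch, suppression depends entirely on pairwise IoU exceeding the threshold; compressing box dimensions drives IoU toward zero, which is the evasion the abstract describes.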
