Abstract

State-of-the-art deep neural network-based models are vulnerable to adversarial attacks. However, only a few attacks have been demonstrated on object detection, and these attacks require hyperparameter tuning, which is very time-consuming. To address this problem, we propose the Plug-n-Play Adversarial Attack (PPAA), a simple technique that is computationally efficient in terms of the average number of iterations, in which constrained uniform random noises are used to generate perturbations. The proposed method is tested on the Microsoft Common Objects in Context (MSCOCO) dataset using RetinaNet, a state-of-the-art object detection algorithm. The results show that PPAA reduces the average number of iterations to 8.64, one fifth of that required by DAG, and achieves a comparable convergence rate of 96.48% while keeping the perturbations quasi-imperceptible to human eyes, with a magnitude of 1.2 × 10⁻².
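The core operation described above, perturbing an image with uniform random noise under a fixed magnitude constraint, can be sketched as follows. This is a minimal illustration only: the function name, the L∞ form of the constraint, and the use of the reported 1.2 × 10⁻² magnitude as the bound are assumptions, since the abstract does not specify PPAA's exact constraint or iteration scheme.

```python
import numpy as np

def uniform_perturbation(image, epsilon=0.012, rng=None):
    """Add uniform random noise bounded by epsilon to an image in [0, 1].

    The L-infinity bound `epsilon` mirrors the ~1.2e-2 perturbation
    magnitude reported in the abstract; the exact constraint used by
    PPAA is an assumption here.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Sample noise uniformly in [-epsilon, epsilon] per pixel/channel.
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    # Clip so the perturbed image stays a valid image in [0, 1].
    return np.clip(image + noise, 0.0, 1.0)

# Example: perturb a random "image" normalized to [0, 1].
img = np.random.default_rng(0).random((32, 32, 3))
adv = uniform_perturbation(img, epsilon=0.012, rng=np.random.default_rng(1))
```

In an iterative attack, such a perturbation would be resampled and applied until the detector's predictions change, which is where the reported average of 8.64 iterations would be measured.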
