Abstract

In recent years, many deep learning methods have been developed for weapon detection, a technology that can aid the investigation of violent crimes. However, existing gun detection models lack verification against adversarial attacks for particular firearm types and image categories. This study investigates the effectiveness of the Fast Gradient Sign Method (FGSM) adversarial attack on weapon detection and the influence of weapon category on attack results. The dataset is scraped from IMDBF.com, and the attacked model is a MobileNetV2 classifier created by HeeebsInc in 2020. Using FGSM, adversarial samples generated from film and television images containing pistols and rifles effectively decrease the accuracy of this weapon detection model. In addition, different epsilon (perturbation magnitude) values are required to attack different types of gun images, such as film stills and collection photographs. These results confirm that some weapon detection models have weak robustness to interference, which may inform future attacks such as BIM or PGD.
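For context, the following is a minimal sketch of the one-step FGSM attack the abstract describes, written in PyTorch. It is illustrative only and is not the paper's actual code: an ImageNet-pretrained MobileNetV2 from torchvision stands in for the HeeebsInc weapon detector (an assumption), and the input image and label are placeholders.

import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative stand-in: an ImageNet-pretrained MobileNetV2 from
# torchvision, since the attacked HeeebsInc model's weights are
# not reproduced here (assumption for demonstration only).
model = models.mobilenet_v2(weights="IMAGENET1K_V1")
model.eval()

def fgsm_attack(image, label, eps):
    """One-step FGSM: nudge each pixel along the sign of the
    gradient of the loss, then clamp to the valid range [0, 1]."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Placeholder input batch and label for illustration.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([0])
x_adv = fgsm_attack(x, y, eps=0.03)  # eps controls perturbation size

The single hyperparameter eps is the perturbation magnitude referred to in the abstract: larger values degrade accuracy more strongly but make the perturbation more visible, which is why different image types (film stills vs. collection photographs) can require different eps values.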
