Abstract

Although Deep Neural Network (DNN)-based object detectors are widely used in many fields, especially for object detection in aerial imagery, it has been observed that a small, elaborately designed patch attached to an image can mislead such detectors into producing erroneous output. However, in previous works the target detectors under attack are quite simple and the attack efficiency is relatively low, making these attacks impractical in real scenarios. To address these limitations, a new adversarial patch attack algorithm is proposed in this paper. Firstly, we designed a novel loss function that optimizes adversarial patches using the intermediate outputs of the model rather than the final outputs interpreted by the detection head. Experiments conducted on the DOTA, RSOD, and NWPU VHR-10 datasets demonstrate that our method significantly degrades the performance of the detectors. Secondly, we conducted intensive experiments to investigate how different outputs of the detection model affect the generation of adversarial patches, demonstrating that the class score is not as effective as the objectness score. Thirdly, we comprehensively analyzed attack transferability across different aerial imagery datasets, verifying that patches generated on one dataset are also effective in attacking another. Moreover, we proposed ensemble training to boost the attack's transferability across models. Our work raises an alarm about the application of DNN-based object detectors in aerial imagery.
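For illustration, the core idea of optimizing a patch against the detector's raw (pre-NMS) outputs rather than its post-processed detections can be sketched as below. This is a minimal sketch only, not the paper's actual implementation: it assumes a YOLO-style detector whose raw head output has shape (batch, candidates, 5 + classes) with the objectness logit at index 4, and the model handle, the fixed top-left paste location, and all hyperparameters are hypothetical placeholders.

    import torch

    def apply_patch(images: torch.Tensor, patch: torch.Tensor) -> torch.Tensor:
        # Hypothetical placement: paste the patch (C, h, w) onto the
        # top-left corner of every image (B, C, H, W).
        patched = images.clone()
        ph, pw = patch.shape[-2:]
        patched[:, :, :ph, :pw] = patch
        return patched

    def optimize_patch(model, images, patch, steps=200, lr=0.03):
        # Optimize patch pixels to suppress detections. The loss is built
        # from the raw head outputs (pre-NMS), not the decoded detections.
        patch = patch.clone().requires_grad_(True)
        opt = torch.optim.Adam([patch], lr=lr)
        for _ in range(steps):
            preds = model(apply_patch(images, patch))   # assumed shape: (B, N, 5 + classes)
            objectness = torch.sigmoid(preds[..., 4])   # objectness score per candidate box
            loss = objectness.max(dim=1).values.mean()  # push down the strongest detection per image
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                patch.clamp_(0.0, 1.0)                  # keep patch pixels a valid image
        return patch.detach()

Using the objectness channel here (rather than a class-score channel) mirrors the abstract's finding that the objectness score is the more effective optimization target; swapping `preds[..., 4]` for a class-score slice would give the weaker variant the paper compares against.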
