Abstract

In the past few years, deep learning has attracted increasing attention for high-resolution range profile (HRRP)-based radar automatic target recognition (RATR) because of its powerful ability to learn features from training data automatically. However, recent studies show that deep learning models are vulnerable to adversarial examples. In this paper, we verify that adversarial examples also exist in deep-learning-based HRRP target recognition. A novel adversarial attack algorithm called Robust HRRP Attack (RHA) is proposed to generate adversarial perturbations that remain robust in the real world. Experimental results on measured HRRP data show that RHA significantly degrades HRRP recognition performance, indicating that our method is effective and robust.
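The abstract does not describe RHA's internals, so as a hedged illustration only, the sketch below shows the general idea of an adversarial perturbation on a 1-D HRRP-like range profile using the well-known fast gradient sign method (FGSM) against a toy linear classifier; this is an assumption-labeled stand-in, not the paper's RHA algorithm.

```python
import numpy as np

# Hedged sketch: a generic FGSM-style adversarial perturbation on a
# synthetic 1-D range profile. NOT the paper's RHA algorithm, whose
# details are not given in the abstract; classifier, sizes, and eps
# are illustrative assumptions.

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy linear classifier over a 64-bin range profile, 3 target classes.
W = rng.normal(size=(3, 64))
x = np.abs(rng.normal(size=64))   # synthetic HRRP magnitude profile
y = 0                             # assumed true class index

def cross_entropy_grad_x(W, x, y):
    """Gradient of the cross-entropy loss w.r.t. the input x."""
    p = softmax(W @ x)
    p[y] -= 1.0                   # dL/dlogits for the true class
    return W.T @ p                # chain rule back to the input

# FGSM step: move each range bin by eps in the sign of the gradient.
eps = 0.05
x_adv = x + eps * np.sign(cross_entropy_grad_x(W, x, y))
```

The key property is that the perturbation is bounded (`|x_adv - x| <= eps` per bin), so the adversarial profile stays close to the clean one while pushing the loss uphill.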
