Abstract

Target recognition based on high-resolution range profiles (HRRPs) has long been a research hotspot in radar signal interpretation, and deep learning has become an important tool for the task. However, recent research has shown that deep-learning-based optical image recognition is vulnerable to adversarial samples, and whether deep-learning-based HRRP target recognition can be attacked in the same way remains an open question. This paper proposes four methods for generating adversarial perturbations. Algorithm 1 generates a nontargeted fine-grained perturbation via binary search; Algorithm 2 generates a targeted fine-grained perturbation via multiple iterations; Algorithm 3 generates a nontargeted universal adversarial perturbation (UAP) by aggregating several fine-grained perturbations; and Algorithm 4 generates a targeted universal perturbation by scaling a single fine-grained perturbation. These perturbations are used to construct adversarial samples that attack deep-learning-based HRRP target recognition under both white-box and black-box settings. Experiments on measured radar data show that the HRRP adversarial samples exhibit a degree of attack effectiveness, indicating that deep-learning-based HRRP target recognition carries potential security risks.
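The abstract only names the four algorithms; as an illustration of the general family they belong to, the following is a minimal sketch of a multiple-iteration targeted perturbation in the spirit of Algorithm 2, assuming a trained PyTorch classifier (model) that maps 1-D HRRP vectors to class logits. The function name, step size, and iteration budget are illustrative assumptions, not the paper's actual procedure.

    import torch
    import torch.nn.functional as F

    def targeted_fine_grained_perturbation(model, x, target_class,
                                           step_size=1e-3, n_iters=50):
        # Hypothetical sketch of a multiple-iteration targeted attack;
        # the paper's Algorithm 2 may differ in detail.
        #   model        -- trained classifier: (1, L) HRRP tensor -> logits
        #   x            -- normalized HRRP input of shape (1, L)
        #   target_class -- label the adversary wants the model to predict
        x_adv = x.clone().detach()
        target = torch.tensor([target_class])
        for _ in range(n_iters):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), target)
            grad, = torch.autograd.grad(loss, x_adv)
            # Descend the loss w.r.t. the *target* label, i.e. move toward it.
            x_adv = (x_adv - step_size * grad.sign()).detach()
            if model(x_adv).argmax(dim=1).item() == target_class:
                break  # stop early to keep the perturbation fine-grained
        return x_adv - x  # the additive perturbation

A nontargeted universal perturbation in the spirit of Algorithm 3 could then be formed by aggregating such per-sample perturbations over a training set, though that, too, is an assumption about the paper's approach rather than a reproduction of it.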
