Abstract
Target recognition based on high-resolution range profiles (HRRPs) has long been a research focus in radar signal interpretation, and deep learning has become an important approach to it. However, recent research has shown that deep-learning-based optical image recognition methods are vulnerable to adversarial samples. Whether deep-learning-based HRRP target recognition methods can be attacked in the same way remains an open question. In this paper, four methods for generating adversarial perturbations are proposed. Algorithm 1 generates a nontargeted fine-grained perturbation by binary search. Algorithm 2 generates a targeted fine-grained perturbation by multiple iterations. Algorithm 3 generates a nontargeted universal adversarial perturbation (UAP) by aggregating several fine-grained perturbations. Algorithm 4 generates a targeted universal perturbation by scaling a single fine-grained perturbation. These perturbations are used to build adversarial samples that attack deep-learning-based HRRP target recognition methods in both white-box and black-box settings. Experiments on measured radar data show that the HRRP adversarial samples are effective attacks, and therefore deep-learning-based HRRP target recognition methods carry potential security risks.
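Although the abstract only names the four algorithms, the core idea behind Algorithm 1 can be illustrated with a short sketch: search for the smallest perturbation magnitude, along a gradient-based direction, that changes the classifier's decision. The code below is a minimal illustration under stated assumptions, not the paper's implementation; the PyTorch classifier `model`, the input shape, and the function name `fine_grained_nontargeted` are all hypothetical.

import torch
import torch.nn.functional as F

def fine_grained_nontargeted(model, x, y, eps_hi=1.0, tol=1e-3):
    """Binary-search (assumed interpretation of Algorithm 1) the smallest
    scaling of a gradient-sign direction that flips the prediction away
    from the true label y. x: HRRP batch of shape (batch, n_range_cells)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    direction = x.grad.sign()              # ascent direction on the loss

    eps_lo, best = 0.0, None
    while eps_hi - eps_lo > tol:
        eps = (eps_lo + eps_hi) / 2
        x_adv = (x + eps * direction).detach()
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        if (pred != y).all():              # attack succeeds: try a smaller eps
            best, eps_hi = x_adv, eps
        else:                              # attack fails: need a larger eps
            eps_lo = eps
    return best                            # None if even eps_hi never succeeded

A targeted variant (Algorithm 2 in the abstract) would instead iterate small descent steps on the loss toward a chosen target label, and a universal perturbation (Algorithms 3 and 4) would be built from such per-sample perturbations by aggregation or scaling.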