Abstract

The wide application of contour stellar images has allowed researchers to transform signal classification into image classification, enabling deep-learning-based signal recognition. However, deep neural networks (DNNs) are highly vulnerable to adversarial examples, so evaluating adversarial attack performance only on signal-sequence recognition models no longer meets current security requirements. From an attacker's perspective, this study converts individual signals into contour stellar images and then generates adversarial examples to evaluate the impact of adversarial attacks. The results show that, whether the input is a signal sequence or a converted image, the DNN remains vulnerable to adversarial examples. Among the selected methods, the momentum iterative method performs best across different perturbation budgets and signal-to-noise ratios (SNRs); at a perturbation of 0.01, its attack performance is more than 10% higher than that of the fast gradient sign method. In addition, to assess the imperceptibility of the adversarial examples, the contour stellar images before and after the attack were compared, maintaining a balance between attack success rate and attack concealment.
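The two attacks compared in the abstract, the fast gradient sign method (FGSM) and the momentum iterative method (MIM), can be sketched as follows. This is a minimal illustrative sketch, not the paper's code: the model, the [0, 1] pixel range, the perturbation budget, and the step count are assumptions for demonstration only.

```python
import torch
import torch.nn as nn


def fgsm_attack(model, x, y, eps):
    """Single-step FGSM: perturb along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    # One step of size eps in the gradient-sign direction, clipped to [0, 1]
    # (assumes images are normalized to that range).
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()


def mim_attack(model, x, y, eps, steps=10, decay=1.0):
    """Momentum iterative method: repeated FGSM steps with accumulated momentum."""
    alpha = eps / steps              # per-step perturbation budget
    g = torch.zeros_like(x)          # accumulated (momentum) gradient
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize the current gradient by its mean absolute value, then
        # accumulate it into the momentum term.
        g = decay * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the eps-ball around the clean image and the valid range.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv
```

Under this sketch, both attacks take a classifier trained on the converted contour stellar images and a batch of clean images with labels; the adversarial images they return can then be fed back to the classifier to measure the drop in recognition accuracy at a given perturbation budget.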
