Abstract

Despite unique advantages such as higher recognition accuracy and better generalization capability, Automatic Modulation Classification (AMC)-oriented Deep Neural Networks (ADNNs) remain vulnerable to adversarial examples (AEs). Recent results have revealed that an attacker can easily fool ADNNs by adding a small, imperceptible perturbation to the original signal. Among the different AE generation methods, Universal Adversarial Perturbation (UAP) has unique characteristics: it is input-agnostic and shift-invariant. However, applying UAP directly to RF signals faces three main challenges, namely perturbation neutralization, high perceptibility, and dependency on original signals. Against this backdrop, this paper proposes a novel Universal Adversarial Perturbation under Frequency and Data constraints (UAP-FD) attack to address these problems. First, individual perturbations are filtered based on a representation visualization algorithm to counter the neutralization problem in perturbation integration. Second, the high-frequency components in the integrated UAP are eliminated through signal decomposition and reconstruction to improve imperceptibility. Third, a proxy signal generation method is proposed to help UAP-FD adapt to data-free black-box settings. A series of experiments is conducted on a public dataset to evaluate the aggressiveness and imperceptibility of the UAP-FD attack in different settings. Results show that, compared with an existing proposal, UAP-FD achieves a 40% higher fooling rate and reduces the accuracy of the ADNN model from 83% to 9%, while maintaining good imperceptibility and shift-invariance. In addition, UAP-FD is applied to real-world signals captured over the transmission channel, where it reduces the model accuracy from 98.3% to 12.5%.
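The abstract does not specify which decomposition the frequency constraint uses, so the following is only a minimal sketch of the idea, assuming an FFT-based low-pass reconstruction; the function names, signal length, and `cutoff_ratio` parameter are illustrative and not taken from the paper.

```python
import numpy as np

def frequency_constrain(uap, cutoff_ratio=0.1):
    """Keep only the low-frequency content of a 1-D perturbation.

    Sketch of a 'signal decomposition and reconstruction' step:
    decompose the integrated UAP with an FFT, zero the bins above a
    cutoff, and reconstruct with the inverse FFT. The paper's actual
    decomposition method may differ; cutoff_ratio is illustrative.
    """
    spectrum = np.fft.rfft(uap)
    cutoff = int(len(spectrum) * cutoff_ratio)
    spectrum[cutoff:] = 0.0                 # discard high-frequency bins
    return np.fft.irfft(spectrum, n=len(uap))

def apply_uap(signal, uap, eps=0.05):
    """Add the norm-bounded universal perturbation to any input signal."""
    uap = eps * uap / (np.linalg.norm(uap) + 1e-12)
    return signal + uap

# Usage: the same low-frequency UAP is added to every victim signal,
# which is what makes the perturbation input-agnostic.
rng = np.random.default_rng(0)
raw_uap = rng.standard_normal(1024)         # stand-in for a learned UAP
smooth_uap = frequency_constrain(raw_uap)
perturbed = apply_uap(rng.standard_normal(1024), smooth_uap)
```

Removing the high-frequency bins smooths the perturbation waveform, which is the property the abstract ties to lower perceptibility of the attacked RF signal.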
