The modulation classification of radar, communication, and other radiation-source signals under complex electromagnetic conditions is an important task in military electronic countermeasures. Deep learning is an important tool for automatic modulation classification (AMC). However, AMC models based on deep neural networks are vulnerable to adversarial attacks such as gradient-based attacks: even small perturbations can cause significant changes in model predictions. To address adversarial attacks, we propose an Ulam-stability adversarial training method that improves both the robust accuracy and the natural accuracy of adversarially trained models. We introduce a class of Jensen-type Ulam stability theorems, which transform the Jensen condition into a special class of stability-preserving regularization terms for adversarial training models, enhancing the model's adversarial robustness. More importantly, our method provides a framework for improving the stability of adversarial training that can be combined with existing adversarial training methods to effectively enhance their adversarial defense capabilities. In the experimental section, we compared four adversarial training methods on three modulation recognition datasets; Ulam adversarial training improved robust accuracy by an average of 4.8% and natural accuracy by an average of 2.2% across the three datasets, and the gap between the models' best and final accuracies decreased by an average of 4.5%.
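To make the role of the Jensen condition concrete, the following is a minimal, hypothetical sketch (not the paper's actual formulation) of how a Jensen-type condition, f((x + x')/2) ≈ (f(x) + f(x'))/2, can be turned into a penalty between a clean input and its adversarial counterpart. The model names, perturbation, and penalty form here are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)

def jensen_stability_penalty(f, x, x_adv):
    """Mean-squared deviation from the Jensen condition
    f((x + x_adv)/2) ~ (f(x) + f(x_adv))/2, i.e. the model
    should behave near-affinely on the segment between a clean
    sample and its adversarial counterpart. A term of this kind
    could be added to an adversarial training loss as a
    stability-preserving regularizer (illustrative form only)."""
    mid = f((x + x_adv) / 2.0)
    avg = (f(x) + f(x_adv)) / 2.0
    return float(np.mean((mid - avg) ** 2))

# Hypothetical toy models, not the paper's architecture.
W = rng.normal(size=(4, 8))

def linear_model(x):
    return W @ x

def relu_model(x):
    return np.maximum(W @ x, 0.0)

x = rng.normal(size=8)
x_adv = x + 0.1 * rng.normal(size=8)  # stand-in for a gradient-based perturbation

# A linear map satisfies the Jensen condition exactly, so its penalty is ~0;
# a nonlinear model generally incurs a nonnegative penalty.
print(jensen_stability_penalty(linear_model, x, x_adv))
print(jensen_stability_penalty(relu_model, x, x_adv))
```

The penalty vanishes for affine maps, so minimizing it pushes the classifier toward locally stable behavior around each training sample without prescribing its global shape.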