Abstract

Recently, deep learning (DL) has been applied to wireless communication tasks such as modulation classification. However, due to the openness of the wireless channel and the lack of explainability of DL models, modulation classifiers are vulnerable to adversarial attacks. In this correspondence, we investigate a so-called hidden backdoor attack on modulation classification, in which the adversary injects elaborately designed poisoned samples, constructed from IQ sequences, into the training dataset. These poisoned samples are hidden because they cannot be identified by conventional classification methods, yet in feature space they coincide with trigger-patched samples. We show that the hidden backdoor attack significantly reduces the accuracy of modulation classification on patched samples. Finally, we propose an activation clustering method to detect abnormal samples in the training dataset.
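As a rough illustration of the activation clustering defence mentioned above, the sketch below clusters last-hidden-layer activations of training samples for one predicted class and flags the minority cluster as suspect. This follows the general activation clustering recipe rather than the authors' exact implementation; the function name and interface are assumptions for illustration.

```python
# A minimal sketch of activation-clustering detection; interfaces are
# illustrative, not the paper's implementation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def detect_poisoned(activations, n_components=10):
    """Flag likely poisoned training samples within one predicted class.

    activations: (n_samples, n_units) array of last-hidden-layer
    activations of the modulation classifier for that class.
    Returns a boolean mask marking the minority cluster.
    """
    # Reduce dimensionality before clustering the activations.
    reduced = PCA(n_components=n_components).fit_transform(activations)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(reduced)
    # Backdoored samples typically form the smaller cluster.
    minority = int(np.sum(labels == 1) < np.sum(labels == 0))
    return labels == minority
```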
