Abstract
Deep learning techniques can classify spectrum phenomena (e.g., waveform modulation) with accuracy levels that were once thought impossible. Although we have recently seen many advances in this field, extensive work in computer vision has demonstrated that adversarial machine learning (AML) can seriously decrease the accuracy of a classifier. This is done by designing inputs that are close to legitimate ones but are interpreted by the classifier as belonging to a completely different class. On the other hand, it is unclear if, when, and how AML is concretely possible in practical wireless scenarios, where (i) the highly time-varying nature of the channel could compromise adversarial attempts; and (ii) the received waveforms still need to be decodable and thus cannot be extensively modified. This paper advances the state of the art by proposing the first comprehensive analysis and experimental evaluation of adversarial learning attacks on wireless deep learning systems. We postulate a series of adversarial attacks and formulate a Generalized Wireless Adversarial Machine Learning Problem (GWAP), in which we analyze the combined effect of the wireless channel and the adversarial waveform on the efficacy of the attacks. We propose a new neural network architecture called FIRNet, which can be trained to "hack" a classifier based only on its output. We extensively evaluate the performance on (i) a 1000-device radio fingerprinting dataset, and (ii) a 24-class modulation dataset. Results obtained under several channel conditions show that our algorithms can decrease the classifier accuracy by up to 3x. We also experimentally evaluate FIRNet on a radio testbed, and show that our data-driven blackbox approach can confuse the classifier up to 97% of the time while keeping the waveform distortion to a minimum.
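To make the black-box idea concrete, the sketch below illustrates one way an adversarial FIR filter could be tuned from classifier outputs alone: complex taps are convolved with the transmitted I/Q waveform, and a gradient-free (zeroth-order) update lowers the classifier's confidence in the true class while penalizing waveform distortion. This is a minimal illustration, not the paper's FIRNet implementation; the classifier interface `predict_proba`, the tap length, and all hyperparameters are assumptions made for the example.

```python
# Minimal sketch (assumptions only, not the FIRNet architecture): a complex-tap
# FIR filter perturbs an I/Q waveform, and the taps are tuned with zeroth-order
# estimates that require nothing but the classifier's output probabilities.
import numpy as np


def fir_perturb(iq, taps):
    """Convolve a complex I/Q waveform with short complex FIR taps."""
    return np.convolve(iq, taps, mode="same")


def attack_loss(iq, taps, true_label, predict_proba, lam=0.1):
    """Classifier confidence in the true class plus a distortion penalty."""
    adv = fir_perturb(iq, taps)
    conf = predict_proba(adv)[true_label]          # black-box query only
    distortion = np.mean(np.abs(adv - iq) ** 2)    # keep the waveform decodable
    return conf + lam * distortion


def zeroth_order_step(iq, taps, true_label, predict_proba,
                      sigma=1e-3, lr=1e-2, n_samples=20):
    """One gradient-free update of the FIR taps using classifier outputs alone."""
    base = attack_loss(iq, taps, true_label, predict_proba)
    grad = np.zeros_like(taps)
    for _ in range(n_samples):
        u = np.random.randn(*taps.shape) + 1j * np.random.randn(*taps.shape)
        loss_plus = attack_loss(iq, taps + sigma * u, true_label, predict_proba)
        grad += (loss_plus - base) / sigma * u
    return taps - lr * grad / n_samples


# Usage (hypothetical inputs): start from an identity filter and iterate.
# taps = np.zeros(11, dtype=complex); taps[5] = 1.0
# for _ in range(100):
#     taps = zeroth_order_step(iq_example, taps, true_label, clf_predict_proba)
```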