Abstract

Given the vulnerability of deep neural networks to adversarial attacks, the application of deep learning in the wireless physical layer raises serious security concerns. In this paper, we consider an autoencoder-based communication system with a full-duplex (FD) legitimate receiver and an external eavesdropper. The system is trained end-to-end following the autoencoder concept. The FD legitimate receiver transmits a well-designed adversarial perturbation signal to jam the eavesdropper while simultaneously receiving information. To defend against the self-perturbation arriving over the loop-back channel, the legitimate receiver is re-trained using adversarial training. Simulation results show that, with the proposed scheme, the block-error rate (BLER) of the legitimate receiver remains almost unaffected while the BLER of the eavesdropper increases by orders of magnitude, ensuring reliable and secure transmission between the transmitter and the legitimate receiver.

Highlights

  • Communication systems are usually described by various theories and mathematical models from information theory

  • In deep learning (DL) based communication systems, such as the autoencoder-based wireless communication system considered in this paper, a malicious node that transmits a well-designed perturbation signal sought in the feature space can cause erroneous predictions by the classification models, since deep neural networks (DNNs) are highly vulnerable to adversarial attacks [5–8]

  • Simulation results show that the BLER of the eavesdropper increases by orders of magnitude, while the BLER of the legitimate receiver is almost unchanged. These results indicate the potential of the proposed anti-attack, anti-eavesdropping autoencoder communication system for both reliable and secure transmission


Introduction

Communication systems are usually described by various theories and mathematical models from information theory. In DL-based communication systems, such as the autoencoder-based wireless communication system considered in this paper, a malicious node that transmits a well-designed perturbation signal sought in the feature space can cause erroneous predictions by the classification models, since DNNs are highly vulnerable to adversarial attacks [5–8]. This raises security and robustness concerns about the application of deep learning in the physical layer.
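As a concrete illustration of such a feature-space perturbation, the sketch below applies a fast-gradient-sign (FGSM-style) step to a toy linear softmax classifier standing in for a receiver's decoder. The model, dimensions, and step size `eps` are illustrative assumptions for this sketch, not the network actually used in the paper; for a linear model the input gradient of the cross-entropy loss has the closed form `W.T @ (p - y)`, which lets the idea be shown without a DL framework.

```python
import numpy as np

# Toy stand-in for the receiver's classifier (hypothetical, for illustration):
# a linear softmax model mapping a received signal x to message-class scores.
rng = np.random.default_rng(0)
n_classes, dim = 4, 8
W = rng.normal(size=(n_classes, dim))
b = rng.normal(size=n_classes)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x):
    return int(np.argmax(W @ x + b))

def fgsm_perturbation(x, true_label, eps):
    """FGSM-style perturbation sought in the input (feature) space."""
    p = softmax(W @ x + b)
    y = np.eye(n_classes)[true_label]
    grad_x = W.T @ (p - y)        # d(cross-entropy)/dx for a linear model
    return eps * np.sign(grad_x)  # small step that maximally increases the loss

x = rng.normal(size=dim)
label = predict(x)                # treat the clean prediction as ground truth
x_adv = x + fgsm_perturbation(x, label, eps=0.5)
print(predict(x), predict(x_adv))
```

A perturbation of this kind is small in each signal dimension (bounded by `eps`) yet is aligned with the direction that most increases the classifier's loss, which is why it can flip predictions far more effectively than random noise of the same power.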

Related Work
System Model
Adversarial Attack
Adversarial Training
Results
Conclusions
