Abstract

The design of a security scheme for beamforming prediction is critical for next-generation wireless networks (5G, 6G, and beyond). However, there is no consensus on how to protect deep learning-based beamforming prediction in these networks. This paper examines the security vulnerabilities of deep neural network (DNN) models for beamforming prediction in 6G wireless networks, treating beamforming prediction as a multi-output regression problem. The results indicate that the initial DNN model is vulnerable to adversarial attacks such as the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), Projected Gradient Descent (PGD), and the Momentum Iterative Method (MIM), because it is sensitive to perturbations of adversarial samples of the training data. This study offers two mitigation methods, adversarial training and defensive distillation, against adversarial attacks on artificial intelligence-based models used in millimeter-wave (mmWave) beamforming prediction. Furthermore, the proposed scheme can be used in situations where the data are corrupted by adversarial examples in the training data. Experimental results show that the proposed methods defend the DNN models against adversarial attacks in next-generation wireless networks.
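To make the attack setting concrete, the sketch below illustrates FGSM, the simplest of the attacks listed above, in a regression setting. The paper's DNN beamforming predictor is not reproduced here; a toy linear model with an analytic input gradient stands in for it, and all names and dimensions are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for the beamforming predictor: y_hat = W @ x with MSE loss,
#   L(x) = ||W x - y||^2,  so the input gradient is dL/dx = 2 W^T (W x - y).
# (W, x, y, and eps are hypothetical values chosen for illustration only.)
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))              # stand-in model weights
x = rng.normal(size=8)                   # clean input sample
y = W @ x + 0.1 * rng.normal(size=4)     # toy regression target

def mse_loss(x_in):
    r = W @ x_in - y
    return float(r @ r)

def fgsm(x_in, eps=0.1):
    """One-step FGSM: x_adv = x + eps * sign(grad_x L)."""
    grad = 2.0 * W.T @ (W @ x_in - y)    # analytic input gradient
    return x_in + eps * np.sign(grad)

x_adv = fgsm(x, eps=0.1)
print(mse_loss(x), mse_loss(x_adv))      # the perturbed input raises the loss
```

BIM, PGD, and MIM extend this idea by taking several small FGSM steps (with projection back into an epsilon-ball, and with gradient momentum in MIM's case), which generally yields stronger perturbations than the single step shown here.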
