Abstract

Federated learning (FL) is an emerging paradigm for distributed machine learning that leverages the data and computational power of user devices while preserving user privacy (e.g., position and motion trajectories). It has proved to be a promising way to help learning-based millimeter wave (mmWave) systems achieve efficient link configuration. However, FL systems are inherently vulnerable to backdoor attacks during training, a threat that has not received attention in current FL-based beam selection research. The goal of a backdoor attacker is to implant a backdoor in the model such that, at test time, the model mispredicts on a specific family of inputs, corrupting the trained model's performance on particular sub-tasks. We study backdoor attacks on an FL-based beam selection system built on a deep neural network that takes user location information as input. Specifically, we propose a backdoor attack scheme that can be mounted in the real world: the attacker's trigger is an obstacle placed at certain locations, and when the model encounters an input containing these obstacles, the backdoor is triggered and the model outputs the beam specified by the attacker. Through experiments, we show that the proposed attack achieves a high attack success rate against a system without a defense mechanism, and that the traditional norm-clipping defense cannot effectively defend against it. We then propose a new backdoor defense method and verify its effectiveness experimentally. In addition, we propose a backdoor detection method, federated noise titration, which can diagnose whether a model contains a backdoor. Overall, our work explores backdoor attacks, defenses, and detection for FL-based mmWave beam selection systems.
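To make the threat model concrete, the sketch below illustrates, under stated assumptions, the two generic mechanisms the abstract refers to: a malicious client poisoning its local location-labeled data with a fixed trigger, and the server-side norm-clipping aggregation that the paper finds insufficient. This is not the paper's implementation; the names `poison_batch`, `norm_clipped_aggregate`, `trigger_loc`, `target_beam`, and `clip` are illustrative assumptions.

```python
import numpy as np

def poison_batch(locations, labels, trigger_loc, target_beam, frac=0.2, rng=None):
    """Hypothetical backdoor poisoning: a malicious client overwrites a fraction
    of its location inputs with the trigger position (modeling an obstacle placed
    at fixed coordinates) and relabels them to the attacker's target beam index."""
    rng = rng or np.random.default_rng(0)
    locations, labels = locations.copy(), labels.copy()
    idx = rng.choice(len(labels), size=int(frac * len(labels)), replace=False)
    locations[idx] = trigger_loc   # inject the fixed trigger coordinates
    labels[idx] = target_beam      # relabel to the attacker-chosen beam
    return locations, labels

def norm_clipped_aggregate(global_w, client_weights, clip=1.0):
    """Standard norm-clipping defense: each client's model delta is scaled so
    its L2 norm is at most `clip` before being averaged into the global model."""
    clipped = []
    for w in client_weights:
        delta = w - global_w
        norm = np.linalg.norm(delta)
        clipped.append(delta * min(1.0, clip / max(norm, 1e-12)))
    return global_w + np.mean(clipped, axis=0)
```

Norm-clipping bounds each client's influence per round but does not remove a malicious update whose norm already fits within the clipping bound, which is consistent with the abstract's finding that this defense fails against the proposed attack.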
