Abstract

Network intrusion detection methods based on federated learning (FL) and edge computing have great potential for protecting the cybersecurity of the Internet of Things (IoT). They overcome the disadvantages of traditional centralized methods, such as high latency, network overload, and privacy leakage. At the same time, they can combine private data from multiple participants to train models, and this richer data yields more effective models. However, the inherent security vulnerabilities of the FL framework do not ensure the robustness of collaboratively trained global models. In FL, each participant has access to model parameters and training data, and malicious participants can corrupt the global model by tampering with data or weights. This paper studies label-flipping attacks in FL-based IoT intrusion detection. We propose a lightweight detection mechanism to mitigate the impact of poisoning attacks on FL-based intrusion detection methods in IoT networks. The detection mechanism, running on a central server, filters anomalous participants and excludes their uploaded models from global model aggregation. Specifically, we propose a scoring mechanism that evaluates participants based on the loss of the local model and the size of the training dataset. The Manhattan similarity between participants is then computed from these scores. Finally, a clustering algorithm applied to the similarities identifies the anomalous participants. The experimental results show that our proposed detection method can defend against label-flipping attacks in FL. On the CIC-IDS-2017 dataset, our method improves the accuracy of the FL-trained intrusion detection model from 84.3% to 97.1%, thereby enhancing the protection of IoT network security.
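The server-side filtering described above can be illustrated with a minimal sketch. The paper's exact scoring formula and clustering algorithm are not specified in the abstract, so this example makes assumptions: each participant is represented by a score vector (here, hypothetical per-round loss values weighted by dataset size), pairwise Manhattan distances stand in for the Manhattan similarity, and a simple distance-based rule stands in for the paper's cluster analysis.

```python
import numpy as np


def detect_anomalous(scores, factor=1.5):
    """Flag participants whose score vectors are far from the rest.

    scores: (n_participants, n_features) array of per-participant score
            vectors (assumed here to encode local loss and dataset size).
    Returns the indices of participants to exclude from aggregation.
    """
    # Pairwise Manhattan (L1) distance matrix between participants.
    dist = np.abs(scores[:, None, :] - scores[None, :, :]).sum(axis=-1)
    # Mean distance of each participant to all others.
    mean_dist = dist.sum(axis=1) / (len(scores) - 1)
    # Simple stand-in for the paper's clustering step: participants whose
    # mean distance exceeds a multiple of the median are treated as anomalous.
    cutoff = factor * np.median(mean_dist)
    return np.where(mean_dist > cutoff)[0]


# Three benign participants with similar scores and one poisoned outlier.
scores = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [5.0, 5.0]])
flagged = detect_anomalous(scores)
print(flagged)  # the outlier participant (index 3) is excluded
```

In the full system, the server would drop the flagged participants' model updates before running federated averaging on the remaining ones.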
