ABSTRACT Federated learning (FL) is a promising approach for distributed training of deep neural networks in Internet of Things (IoT) environments, where the data generated by IoT devices stays local and only model updates are communicated to a central server. This methodology is particularly relevant for intrusion detection systems in IoT networks, where security is paramount. However, the decentralized nature of FL introduces vulnerabilities, such as the risk of data poisoning by malicious participants. In this paper, we propose a hierarchical federated learning framework that reduces communication overhead and improves privacy by limiting data spread. We further explore the impact of label-flipping attacks on hierarchical FL systems used for IoT-based intrusion detection, focusing on scenarios where a subset of malicious participants attempts to degrade the global model’s performance by submitting corrupted model updates trained on intentionally mislabeled data. Our findings reveal a significant decrease in classification accuracy (by 10.53%) and in recall, even with a minimal number of compromised participants, primarily affecting the specific classes targeted by the attackers. We also examine how the availability of these malicious nodes influences the attack’s success. To counteract these threats, we introduce a defense mechanism that identifies all malicious clients and mitigates their impact. As a result, the global model’s accuracy is maintained at the 95% level achieved when training without malicious clients, thereby enhancing the resilience of federated learning models in IoT security applications.
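For concreteness, the following is a minimal sketch of the label-flipping attack described above: a malicious client relabels samples of one traffic class as another before local training, so that the model update it submits is derived from poisoned data. The function name, class indices, and NumPy-based representation are illustrative assumptions; the paper does not prescribe a particular implementation.

```python
import numpy as np

def flip_labels(y, source_class, target_class):
    """Label-flipping attack (illustrative): relabel every local sample
    of `source_class` as `target_class` before local training begins."""
    y_poisoned = np.array(y, copy=True)
    y_poisoned[y_poisoned == source_class] = target_class
    return y_poisoned

# Hypothetical example: a compromised client relabels intrusion
# traffic (class 1) as benign traffic (class 0) in its local dataset.
y_local = np.array([0, 1, 1, 0, 1])
print(flip_labels(y_local, source_class=1, target_class=0))  # [0 0 0 0 0]
```

In a hierarchical FL setting, the resulting corrupted update propagates upward through the edge aggregator to the global model, which is why even a small number of such clients can measurably degrade accuracy and recall on the targeted classes.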