Federated Learning (FL) is a privacy-preserving approach for training deep neural networks across decentralized devices without sharing raw data, and it has therefore been widely applied in domains such as anomaly detection in the Internet of Things (IoT). However, IoT networks and devices have limited protection capabilities, leaving FL vulnerable to data poisoning attacks. To address this challenge, we propose a new robust FL system designed to counter data poisoning attacks. Our approach, named Federated Learning with Attention Aggregation (FedAA), leverages AutoEncoder (AE) models for local anomaly detection in IoT networks. In FedAA, the global model is aggregated from the local models using a novel aggregation method called Attention Aggregation (AA). This method is specifically designed to mitigate the impact of data poisoning attacks, which typically inflate the loss values of the affected local models. More precisely, local models with high loss values are assigned lower attention weights when contributing to the global model aggregation, and vice versa. As a result, the proposed AA method enhances the robustness of FedAA against data poisoning attacks. We have conducted extensive experiments on three IoT anomaly detection datasets, i.e., N-BaIoT, NSL-KDD, and UNSW. The results show that FedAA is more robust than other FL systems in mitigating data poisoning attacks.
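The loss-weighted aggregation idea can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name `attention_aggregate`, the use of a softmax over negative losses, and the `temperature` parameter are all assumptions made for the example.

```python
import numpy as np

def attention_aggregate(local_params, local_losses, temperature=1.0):
    """Aggregate local model parameters with loss-based attention.

    Hypothetical sketch of the AA idea: clients whose local loss is high
    (a possible sign of data poisoning) receive lower attention weights,
    computed here via a softmax over negative losses.
    """
    losses = np.asarray(local_losses, dtype=float)
    scores = -losses / temperature
    scores -= scores.max()          # subtract max for numerical stability
    attn = np.exp(scores)
    attn /= attn.sum()              # weights sum to 1; low loss -> high weight
    # Weighted average of each parameter tensor across clients.
    return [
        sum(w * layer for w, layer in zip(attn, layers))
        for layers in zip(*local_params)
    ]

# Toy example: three clients, the third sends a poisoned update
# and reports a high local loss.
clients = [
    [np.array([1.0, 1.0])],   # benign
    [np.array([1.2, 0.9])],   # benign
    [np.array([9.0, -9.0])],  # poisoned
]
losses = [0.10, 0.12, 5.0]
global_params = attention_aggregate(clients, losses)
```

In this toy run the poisoned client's attention weight is driven close to zero, so the aggregated parameters stay near the average of the two benign updates.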