Federated learning (FL) is an innovative distributed learning paradigm that enables multiple parties to train models collaboratively while preserving individual privacy. However, FL faces security challenges that leave it vulnerable to a range of adversarial attacks and can compromise model performance. Existing research on FL poisoning attacks and defense techniques tends to be application-specific and primarily emphasizes attack capabilities, overlooking FL's inherent vulnerabilities and the impact of attack intensity. To our knowledge, no existing work has examined these issues in a multi-domain FL environment. This paper addresses these concerns by investigating the consequences of targeted label-flipping attacks on FL systems, comprehensively examining their effects in single-label, double-label, and triple-label scenarios under varying poisoning intensities. Additionally, we study a temporal label-flipping attack, in which adversaries are available only during specific federated training rounds. Moreover, we propose SecDefender, a novel server-based defense mechanism that detects low-quality models in both IID and non-IID settings of multi-domain environments. Our approach is rigorously evaluated against state-of-the-art alternatives on six benchmark datasets: CIC-Darknet2020, Fashion-MNIST, FEDMNIST, GTSR, HAR, and MNIST. Extensive experiments demonstrate that SecDefender improves source-class recall by over 65% while maintaining a low attack success rate, yielding a 1 to 2% gain in global model accuracy over existing approaches.
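To make the attack setting concrete, below is a minimal sketch of the targeted label-flipping step a malicious client could apply to its local data before training, with the poisoning rate controlling attack intensity. This is an illustrative NumPy example under assumed conventions; the function name flip_labels and the parameters source_class, target_class, and poison_rate are hypothetical and not taken from the paper.

```python
import numpy as np

def flip_labels(y, source_class, target_class, poison_rate, rng):
    """Flip a fraction of source-class labels to the target class.

    poison_rate is the attack intensity: the fraction of source-class
    samples whose labels are maliciously rewritten (hypothetical API).
    """
    y = y.copy()
    source_idx = np.flatnonzero(y == source_class)          # samples of the victim class
    n_poison = int(poison_rate * len(source_idx))           # how many to corrupt
    chosen = rng.choice(source_idx, size=n_poison, replace=False)
    y[chosen] = target_class                                # targeted single-label flip
    return y

# Example: a malicious client relabels 50% of class-7 samples as class 1
# before running its local training round.
rng = np.random.default_rng(0)
y_local = rng.integers(0, 10, size=1000)   # stand-in for a client's labels
y_poisoned = flip_labels(y_local, source_class=7, target_class=1,
                         poison_rate=0.5, rng=rng)
```

The double- and triple-label scenarios studied in the paper would apply the same step to two or three source/target pairs; a temporal variant would invoke it only in the rounds where the adversary participates.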