Federated learning (FL) enables decentralized training across multiple devices while keeping raw data local, reducing the need for centralized data storage and transmission and thereby mitigating the privacy risks of traditional data aggregation. However, FL is susceptible to Byzantine poisoning attacks, in which rogue participants tamper with model updates and threaten the consistency and security of the aggregated model. Our approach addresses this vulnerability through robust aggregation methods, sophisticated pre-processing techniques, and a novel Byzantine grade-level detection mechanism. We introduce a federated aggregation operator designed to mitigate the impact of malicious clients. Our pre-processing comprises data loading and transformation, data augmentation, and feature extraction using SIFT and wavelet transforms. In addition, we employ differential privacy and model compression to improve the robustness and performance of the federated learning framework. We evaluate the approach with a tailored neural network model on the MNIST dataset, achieving 97% accuracy in detecting Byzantine attacks. Our results demonstrate that robust aggregation significantly improves the resilience and performance of the global model. This comprehensive approach preserves the integrity of the federated learning process, effectively filtering out adversarial influences and sustaining high accuracy even in the presence of Byzantine clients.
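The abstract does not specify the form of the proposed federated aggregation operator. As a point of reference, the sketch below shows one standard Byzantine-robust choice, a coordinate-wise trimmed mean, under the assumption that client updates arrive as flattened parameter vectors; the function and variable names (trimmed_mean_aggregate, client_updates, trim_ratio) are illustrative, not the paper's.

# Minimal sketch of a Byzantine-robust aggregation step (assumed example:
# coordinate-wise trimmed mean; the paper's actual operator may differ).
import numpy as np

def trimmed_mean_aggregate(client_updates, trim_ratio=0.2):
    """Aggregate client model updates, discarding the most extreme values
    in each coordinate before averaging.

    client_updates: list of 1-D numpy arrays (flattened updates), one per client.
    trim_ratio:     fraction of clients trimmed from each tail per coordinate.
    """
    updates = np.stack(client_updates)          # shape: (n_clients, n_params)
    n_clients = updates.shape[0]
    k = int(trim_ratio * n_clients)             # clients trimmed per tail
    sorted_updates = np.sort(updates, axis=0)   # sort each coordinate independently
    # Drop the k smallest and k largest values per coordinate, then average.
    return sorted_updates[k:n_clients - k].mean(axis=0)

# Usage: 8 honest clients near the true update, 2 Byzantine clients sending
# large poisoned values; the trimmed mean stays close to the honest mean.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=10) for _ in range(8)]
byzantine = [np.full(10, 100.0) for _ in range(2)]
print(trimmed_mean_aggregate(honest + byzantine, trim_ratio=0.2))

With trim_ratio=0.2 and ten clients, the two extreme values per coordinate are discarded, so the poisoned updates are removed before averaging; this is the general intuition behind robust aggregation operators of this kind.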