Abstract

The distributed nature of Federated Learning (FL) introduces security vulnerabilities as well as challenges arising from the heterogeneous distribution of data across participants. Traditional FL aggregation algorithms often mitigate security risks by excluding outliers, which compromises the diversity of shared information. In this paper, we introduce a novel filtering-and-voting framework that addresses the challenges posed by non-IID training data and malicious attacks on FL. The proposed framework integrates a filtering layer that defends against the intrusion of malicious models and a voting layer that harnesses valuable contributions from diverse participants. Moreover, by employing Deep Reinforcement Learning (DRL) to dynamically adjust aggregation weights, we ensure optimized aggregation of participant data, enhancing the diversity of information used for aggregation and improving the performance of the global model. Experimental results demonstrate that the proposed framework achieves superior accuracy over traditional and contemporary FL aggregation methods when diverse participant models are utilized, and it shows robust resistance against malicious poisoning attacks.
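
The abstract does not spell out the concrete filtering rule, voting scheme, or the DRL agent's state and action design, so the following Python sketch is only an illustration of the general filter-then-weighted-vote aggregation shape it describes. The cosine-similarity filter, the softmax voting weights standing in for the DRL-adjusted weights, and the names filter_updates, vote_weights, and aggregate are all assumptions, not the paper's actual method.

```python
import numpy as np

def filter_updates(updates, sim_threshold=0.0):
    """Filtering layer (illustrative assumption): keep only updates whose
    cosine similarity to the coordinate-wise median update exceeds a
    threshold; the paper's actual filtering rule is not given in the abstract."""
    median = np.median(np.stack(updates), axis=0)
    kept_indices = []
    for i, u in enumerate(updates):
        denom = np.linalg.norm(u) * np.linalg.norm(median) + 1e-12
        if float(np.dot(u, median)) / denom >= sim_threshold:
            kept_indices.append(i)
    return kept_indices

def vote_weights(scores):
    """Voting layer (illustrative assumption): turn per-client scores
    (e.g., validation accuracy) into normalized aggregation weights via a
    softmax; in the paper these weights are adjusted dynamically by a DRL
    agent, for which the softmax is only a stand-in."""
    s = np.asarray(scores, dtype=float)
    w = np.exp(s - s.max())
    return w / w.sum()

def aggregate(global_params, updates, scores):
    """One aggregation round: filter suspicious updates, weight the
    survivors by their votes, and apply the combined update."""
    kept = filter_updates(updates)
    if not kept:
        return global_params  # no trusted updates this round
    weights = vote_weights([scores[i] for i in kept])
    combined = sum(w * updates[i] for w, i in zip(weights, kept))
    return global_params + combined

# Toy round: three benign clients and one sign-flipped (poisoned) update.
rng = np.random.default_rng(0)
global_params = np.zeros(5)
benign = [rng.normal(0.1, 0.01, size=5) for _ in range(3)]
poisoned = [-10.0 * benign[0]]
print(aggregate(global_params, benign + poisoned, scores=[0.90, 0.85, 0.88, 0.10]))
```

In this toy run the sign-flipped update is removed by the filtering step, while the three benign updates are combined with weights reflecting their scores, which mirrors the filter-then-vote aggregation flow the abstract outlines.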
