Abstract

Cloud-edge architecture is an emerging paradigm that aims to meet the growing demands of intelligent applications. In such settings, federated learning has been widely applied to address machine-learning privacy leakage and to benefit from imbalanced data distributions. Nevertheless, federated learning systems have inherent weaknesses that leave them vulnerable to poisoning attacks. Existing defense techniques are largely attack-rigid: they inspect client properties or model updates directly and target specific attack scenarios or rules, but they may fail against critical feature patterns or flexible attack methods, mainly because redundant features and model performance can undermine the defense. Few flexible defense methods have been developed. In this paper, we propose FlexibleFL, a flexible defense against poisoning attacks in cloud-edge federated learning (CEFL) systems. The key idea of FlexibleFL is to evaluate the quality of uploaded model parameters and to determine each participant's contribution through an optimal threshold-selection strategy. Based on these differences, FlexibleFL penalizes potential attackers through the weights assigned when forming the updated federated model. Extensive results demonstrate that our method offers significant advantages in countering poisoning attacks under both IID and Non-IID scenarios and can effectively protect CEFL systems.
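The abstract does not specify how parameter quality is scored or how the threshold is applied. The following is a minimal illustrative sketch, not the paper's actual algorithm: it assumes quality is proxied by cosine similarity of each client's update to the coordinate-wise median update, and that clients scoring below a given threshold are excluded before a score-weighted average. All function names and the scoring metric are hypothetical.

```python
import numpy as np

def quality_scores(updates):
    """Hypothetical quality proxy: cosine similarity of each client's update
    to the coordinate-wise median update. Updates that deviate sharply from
    the consensus direction (e.g., poisoned ones) score low or negative."""
    ref = np.median(updates, axis=0)
    scores = []
    for u in updates:
        denom = np.linalg.norm(u) * np.linalg.norm(ref)
        scores.append(float(u @ ref / denom) if denom > 0 else 0.0)
    return np.array(scores)

def aggregate(updates, scores, threshold):
    """Penalize clients whose score falls below the threshold (here: zero
    weight), then average the remaining updates weighted by their scores."""
    weights = np.where(scores >= threshold, scores, 0.0)
    if weights.sum() == 0:
        return np.mean(updates, axis=0)  # fallback: plain averaging
    weights = weights / weights.sum()
    return np.tensordot(weights, np.asarray(updates), axes=1)
```

For example, with three benign updates near [1, 1] and one poisoned update at [-5, -5], the poisoned client receives a negative score and is excluded, so the aggregate stays close to [1, 1]. The paper's actual quality measure and threshold-selection strategy would replace these placeholders.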
