Abstract

Federated learning has drawn widespread attention as a privacy-preserving solution that protects data security and privacy. It relies on a distinctive distributed machine learning mechanism: sharing models instead of sharing data. However, this mechanism also means that malicious clients can easily train local models on poisoned data and upload them to the server to contaminate the global model, which severely hampers the development of federated learning. In this paper, we build a federated learning system and simulate heterogeneous data on each client for training. Although malicious clients cannot be distinguished directly from their uploaded models in a heterogeneous data environment, our experiments reveal features that distinguish malicious clients from benign ones during training. Based on these observations, we propose a poisoning attack detection method for federated learning that detects malicious clients and safeguards aggregation quality. The method filters out anomalous models by comparing the similarity of clients' historical changes and gradually identifies attacker clients through a reputation mechanism. We experimentally demonstrate that the method significantly improves the performance of the global model even when the proportion of malicious clients is as high as one-third.
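
The abstract only sketches the detection pipeline. The following is a minimal illustrative implementation, assuming cosine similarity as the measure of a client's historical change and a simple additive reputation score; the class name ReputationFilter, the thresholds, and the update schedule are hypothetical choices for illustration, not the paper's exact formulation.

import numpy as np

class ReputationFilter:
    # Hypothetical sketch: thresholds and penalty/reward values are illustrative assumptions.
    def __init__(self, num_clients, sim_threshold=0.0, penalty=0.2, reward=0.05):
        self.reputation = np.ones(num_clients)   # start every client as fully trusted
        self.history = [None] * num_clients      # last accepted update per client
        self.sim_threshold = sim_threshold
        self.penalty = penalty
        self.reward = reward

    @staticmethod
    def _cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def filter_and_aggregate(self, updates):
        """updates: dict {client_id: flat np.ndarray of model deltas for this round}."""
        # Compare each client's current update with its own previous update; a sudden
        # drop in similarity is treated as a possible sign of poisoning.
        for cid, upd in updates.items():
            prev = self.history[cid]
            if prev is not None and self._cosine(upd, prev) < self.sim_threshold:
                self.reputation[cid] = max(0.0, self.reputation[cid] - self.penalty)
            else:
                self.reputation[cid] = min(1.0, self.reputation[cid] + self.reward)
            self.history[cid] = upd

        # Aggregate only updates from clients whose reputation stays above 0.5,
        # weighting each accepted update by its reputation score.
        accepted = {c: u for c, u in updates.items() if self.reputation[c] > 0.5}
        if not accepted:
            return None
        weights = np.array([self.reputation[c] for c in accepted])
        stacked = np.stack(list(accepted.values()))
        return (weights[:, None] * stacked).sum(axis=0) / weights.sum()

A round of server-side aggregation would then look like: rf = ReputationFilter(num_clients=10); aggregated_delta = rf.filter_and_aggregate(round_updates), where round_updates maps client ids to flattened model deltas. Weighting accepted updates by reputation (rather than filtering alone) lets clients with a short history of suspicious behavior contribute less without being excluded outright.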
