Abstract

Federated learning is a privacy-preserving machine learning paradigm that can train a model on decentralized data. Classical federated learning systems are vulnerable to attacks from malicious clients. Although a number of efforts have been made to improve the robustness of federated learning, existing solutions are either too costly or attack-specific. In this paper, we propose the Robust Ternary Gradients Aggregation (RTGA) algorithm, which can efficiently handle different attacks using two novel mechanisms. On the client side, the ternary quantization mechanism compresses gradients using only two bits to store each coordinate of the gradient vector. We adopt an error-feedback mechanism to compensate for the difference between the compressed and actual gradients. On the server side, the robust aggregation mechanism detects malicious clients according to a risk score computed for each client. The risk score is calculated by solving an optimization problem over the divergence among clients. Furthermore, we provide an analysis of the compression and robustness properties of the proposed algorithm. Extensive experiments show that, compared with existing algorithms, RTGA can mitigate different types of attacks while incurring much lower communication costs.
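The abstract describes two client-side ideas: ternary quantization (each gradient coordinate mapped to one of three levels, storable in two bits) and error feedback (carrying the compression error into the next round). The full text is not available here, so the following is a minimal sketch of those generic techniques, not RTGA's actual implementation; the scaling rule and zeroing threshold are illustrative assumptions.

```python
import numpy as np

def ternary_compress(v):
    """Ternarize each coordinate to {-s, 0, +s}: two bits per coordinate
    plus one shared scale s. The scale (mean absolute value) and the
    zeroing threshold (s / 2) are illustrative choices, not necessarily
    the rule used in the paper."""
    s = np.mean(np.abs(v))
    mask = np.abs(v) >= s / 2          # small coordinates become 0
    return s * np.sign(v) * mask       # dequantized ternary gradient

class ErrorFeedbackClient:
    """Client that compensates the compression error on the next round."""
    def __init__(self, dim):
        self.residual = np.zeros(dim)  # accumulated compression error

    def step(self, grad):
        corrected = grad + self.residual        # add carried-over error
        compressed = ternary_compress(corrected)
        self.residual = corrected - compressed  # store new error
        return compressed                        # sent to the server
```

By construction, `compressed + residual` always equals the error-corrected gradient, so information lost to quantization in one round re-enters the update in later rounds rather than being discarded.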
