Abstract

This paper addresses a distributed learning problem under Byzantine attacks. In the underlying master-worker architecture, an unknown subset of the workers is malicious and can send arbitrary messages to the master to bias the learning process; such workers are called Byzantine workers. In the literature, a total variation (TV) norm-penalized approximation formulation has been investigated to alleviate the effect of Byzantine attacks. Specifically, the TV norm penalty not only forces the local variables at the regular workers to be close to each other, but is also robust to the outliers sent by the Byzantine workers. To handle this separable TV norm-penalized approximation formulation, we propose a Byzantine-robust stochastic alternating direction method of multipliers (ADMM). Theoretically, we prove that under mild assumptions the proposed method converges to a bounded neighborhood of the optimal solution at a rate of O(1/k), where k is the number of iterations and the size of the neighborhood is determined by the number of Byzantine workers. Numerical experiments on the MNIST and COVERTYPE datasets further demonstrate the effectiveness of the proposed method against various Byzantine attacks.
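For illustration only, since the abstract does not spell out the notation: the symbols below are assumptions consistent with this line of work, with x the master's model, w_r the local variable at regular worker r, F(w_r; \xi_r) the stochastic local loss, f_0 a regularizer kept at the master, \mathcal{R} the set of regular workers, and \lambda > 0 the penalty weight. The TV norm-penalized approximation formulation can then be sketched as

  \min_{x, \{w_r\}} \; \sum_{r \in \mathcal{R}} \mathbb{E}\big[ F(w_r; \xi_r) \big] + f_0(x) + \lambda \sum_{r \in \mathcal{R}} \| w_r - x \|_1 .

The l1-type coupling term keeps each w_r close to x when workers behave, while its bounded (sub)gradient limits how far any single Byzantine message can pull the solution; the per-worker separability of this penalty is what makes the formulation amenable to a stochastic ADMM with local updates at the workers and a simple aggregation step at the master.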
