Abstract

In large-scale distributed learning, directly applying traditional inference is often infeasible because of several concerns, such as communication costs, privacy issues, and Byzantine failures. Nowadays, networked systems are vulnerable to attacks, and Byzantine failures occur frequently. To cope with Byzantine failures, this paper develops two Byzantine-robust distributed learning algorithms within a communication-efficient surrogate likelihood framework. In our algorithms, we adopt δ-approximate compressors, including the sign-based operator and top-k sparsification, to improve communication efficiency, and a simple thresholding of local gradient norms to guard against Byzantine failures. To accelerate convergence and achieve an optimal statistical error rate, the second algorithm exploits error feedback. Both algorithms are robust to arbitrary adversaries, even though Byzantine workers need not adhere to the mandated compression mechanism. We explicitly establish statistical error rates, which imply that our algorithms do not sacrifice the quality of learning and attain order-optimal rates in some settings. In addition, we characterize a trade-off between compression and adversarial robustness in the presence of Byzantine worker machines. Extensive numerical experiments validate our theoretical results and demonstrate the good performance of our algorithms.
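To make the components named above concrete, the sketch below illustrates, in Python, top-k sparsification and a scaled sign operator as examples of δ-approximate compressors, a simple gradient-norm thresholding aggregator, and a worker-side error-feedback step. The function names and the threshold parameter tau are hypothetical and chosen only for illustration; this is a minimal sketch of the general techniques under these assumptions, not a reproduction of the paper's exact algorithms.

```python
import numpy as np

def topk_compress(v, k):
    """Top-k sparsification: keep the k largest-magnitude coordinates, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def sign_compress(v):
    """Scaled sign compression: transmit only the signs, scaled by the mean magnitude."""
    return np.sign(v) * np.mean(np.abs(v))

def threshold_aggregate(gradients, tau):
    """Norm thresholding: discard compressed gradients whose norm exceeds tau, average the rest."""
    kept = [g for g in gradients if np.linalg.norm(g) <= tau]
    return np.mean(kept, axis=0) if kept else np.zeros_like(gradients[0])

def worker_step_with_error_feedback(grad, error, k):
    """Error feedback: add the previous compression residual to the current gradient,
    compress the corrected vector, and carry the new residual to the next round."""
    corrected = grad + error
    compressed = topk_compress(corrected, k)
    new_error = corrected - compressed
    return compressed, new_error

# Illustrative usage with synthetic gradients (hypothetical dimensions and threshold).
rng = np.random.default_rng(0)
errors = [np.zeros(10) for _ in range(4)]
grads = [rng.normal(size=10) for _ in range(4)]
msgs = []
for i in range(4):
    msg, errors[i] = worker_step_with_error_feedback(grads[i], errors[i], k=3)
    msgs.append(msg)
update = threshold_aggregate(msgs, tau=5.0)
```

Under this kind of scheme, honest workers send only k coordinates (or signs) per round, the residual carried by error feedback compensates for the information lost to compression, and the norm threshold limits how much any single (possibly Byzantine) worker can perturb the aggregate.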
