Abstract
Communication between the workers and the master node to collect local stochastic gradients is a key bottleneck in large-scale distributed learning systems. Various recent works have proposed compressing the local stochastic gradients to mitigate this communication overhead. However, robustness to malicious attacks is rarely considered in such a setting. In this work, we investigate the problem of Byzantine-robust compressed distributed learning, where attacks from Byzantine workers can be arbitrarily malicious. We show theoretically that, unlike attack-free compressed stochastic gradient descent (SGD), its naive combination with geometric-median-based robust aggregation suffers severely from compression noise in the presence of Byzantine attacks. In light of this observation, we propose to reduce the compression noise with gradient difference compression so as to improve Byzantine-robustness. We also observe the impact of the intrinsic stochastic noise caused by random sampling, and adopt the stochastic average gradient algorithm (SAGA) to gradually eliminate the inner variations of the regular workers. We theoretically prove that the proposed algorithm reaches a neighborhood of the optimal solution at a linear convergence rate, with an asymptotic learning error of the same order as that of the state-of-the-art uncompressed method. Finally, numerical experiments demonstrate the effectiveness of the proposed method.
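To make the high-level recipe concrete, the following is a minimal sketch of geometric-median aggregation over compressed gradient differences. All names here (top_k_compress, geometric_median, the toy attack, the SAGA-style per-worker reference states) are illustrative assumptions, not the paper's actual implementation: a biased top-k compressor and Weiszfeld iterations stand in for whichever compressor and median solver the paper uses.

```python
import numpy as np

def top_k_compress(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest
    (a common biased compressor; the paper's may differ)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def geometric_median(points, iters=100, eps=1e-8):
    """Weiszfeld iterations for the geometric median, which is robust
    to a minority of arbitrarily corrupted (Byzantine) points."""
    m = np.mean(points, axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - m, axis=1), eps)
        w = 1.0 / d
        m_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < eps:
            break
        m = m_new
    return m

# Each regular worker sends compress(local_grad - reference), so the
# compression noise shrinks as local gradients stabilize; the master
# mirrors each worker's reference and aggregates with the geometric median.
rng = np.random.default_rng(0)
dim, n_workers, n_byzantine, k = 50, 10, 2, 5
references = np.zeros((n_workers, dim))   # per-worker running estimates

true_grad = rng.normal(size=dim)
estimates = []
for i in range(n_workers):
    if i < n_byzantine:
        msg = rng.normal(scale=100.0, size=dim)          # arbitrary attack
    else:
        local_grad = true_grad + 0.1 * rng.normal(size=dim)  # stochastic noise
        msg = top_k_compress(local_grad - references[i], k)
    references[i] += msg
    estimates.append(references[i].copy())

agg = geometric_median(np.array(estimates))
print("aggregation error:", np.linalg.norm(agg - true_grad))
```

In this toy run the geometric median stays close to the true gradient even though two workers send wildly corrupted messages, whereas a plain mean would be pulled far off; the difference compression means regular workers transmit only k nonzero entries per round.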