Abstract

Decentralized optimization (DO) has recently received widespread attention. Because DO problems are usually large-scale, acceleration has become a research hotspot. However, the error caused by each agent's lack of global information is the key obstacle preventing existing, elaborately designed accelerated methods from being applied to DO directly. On the other hand, recent studies show that a family of accelerated methods can be unified from the viewpoint of momentum. In this paper, we follow this methodology to design accelerated algorithms and adapt them to DO over time-varying directed networks. The main benefit is that, because the proposed algorithms are derived from momentum, they not only avoid elaborate iterative structures but also inherit the physical interpretability of momentum. Furthermore, we show that our algorithms achieve sharper convergence rates than their competitors under the same conditions. Finally, experiments on several benchmark datasets validate the competitiveness of our algorithms.
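To make the momentum viewpoint mentioned above concrete, the following is a minimal sketch of classical heavy-ball (momentum) gradient descent on a simple quadratic. This is only a generic illustration of momentum-based acceleration, not the paper's decentralized algorithm; the objective, step size `alpha`, and momentum coefficient `beta` are illustrative assumptions.

```python
import numpy as np

# Heavy-ball (momentum) gradient descent on f(x) = 0.5 x^T A x - b^T x.
# Generic illustration of momentum acceleration (not the paper's method);
# alpha (step size) and beta (momentum) are illustrative choices.

A = np.array([[3.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])

def grad(x):
    return A @ x - b

alpha, beta = 0.2, 0.5
x_prev = np.zeros(2)
x = np.zeros(2)
for _ in range(200):
    # x_{k+1} = x_k - alpha * grad(x_k) + beta * (x_k - x_{k-1})
    x_next = x - alpha * grad(x) + beta * (x - x_prev)
    x_prev, x = x, x_next

x_star = np.linalg.solve(A, b)  # exact minimizer for comparison
print(np.allclose(x, x_star, atol=1e-6))
```

The momentum term `beta * (x - x_prev)` is what distinguishes this update from plain gradient descent; accelerated methods such as Nesterov's scheme can be viewed as variants of this momentum structure.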
