Abstract

In this paper, we introduce an asynchronous decentralized accelerated stochastic gradient descent type algorithm for decentralized stochastic optimization. Since communication and synchronization costs are the major bottlenecks in decentralized optimization, we attempt to reduce these costs from the algorithmic design aspect; in particular, we reduce the number of agents involved in each round of updates via randomization. Our major contribution is to develop a class of accelerated randomized decentralized algorithms for solving general convex composite problems. We establish O(1/ϵ) (resp., O(1/√ϵ)) communication complexity and O(1/ϵ²) (resp., O(1/ϵ)) sampling complexity for solving general convex (resp., strongly convex) problems. It is worth mentioning that the complexity of the proposed algorithm depends only sublinearly on the Lipschitz constant when a smooth component is present in the objective function. Moreover, we conduct preliminary numerical experiments to demonstrate the advantages of the proposed algorithms over state-of-the-art synchronous decentralized algorithms.
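To make the randomization idea concrete, the sketch below illustrates a generic decentralized stochastic gradient scheme in which only a random subset of agents communicates and updates in each round. This is a minimal illustration of the activation mechanism, not the accelerated composite algorithm analyzed in the paper; the network size, ring mixing matrix, local least-squares objectives, subset size, and step size are all assumptions chosen for the example.

```python
# Minimal sketch of randomized-activation decentralized SGD (illustration only;
# a generic skeleton, not the paper's accelerated method). All problem data and
# parameters below are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)

n_agents, dim = 8, 5
# Symmetric, doubly stochastic mixing matrix for a ring topology (assumption).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

# Each agent i holds a local least-squares objective f_i(x) = 0.5*||A_i x - b_i||^2.
A = rng.standard_normal((n_agents, 20, dim))
b = rng.standard_normal((n_agents, 20))
x = np.zeros((n_agents, dim))  # one local iterate per agent

def stochastic_grad(i, xi):
    """Stochastic gradient of f_i at xi using a single random local sample."""
    j = rng.integers(A.shape[1])
    return A[i, j] * (A[i, j] @ xi - b[i, j])

step = 0.05
for t in range(500):
    # Randomization: only a small random subset of agents is active per round,
    # so fewer agents need to communicate and synchronize in each iteration.
    active = rng.choice(n_agents, size=2, replace=False)
    for i in active:
        # Gossip-average with neighbors, then take a local stochastic gradient step.
        x[i] = W[i] @ x - step * stochastic_grad(i, x[i])

print("consensus spread:", np.linalg.norm(x - x.mean(axis=0)))
```

Activating only a random subset of agents per round is what removes the global synchronization barrier: inactive agents neither compute nor communicate, which is the cost-reduction mechanism the abstract refers to.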
