Abstract

This paper considers the distributed optimization problem over a network, where the objective is to optimize a global function formed by an average of local functions, using only local computation and communication. We develop an Accelerated Distributed Nesterov Gradient Descent (Acc-DNGD) method for convex and smooth objective functions. We show that it achieves an O(1/t^{1.4-ε}) convergence rate (for all ε ∈ (0, 1.4)) when a vanishing step size is used. The convergence rate can be improved to O(1/t^2) when we use a fixed step size and the objective functions satisfy a special property. To the best of our knowledge, Acc-DNGD is the fastest among all distributed gradient-based algorithms that have been proposed so far.
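To make the problem setup concrete, the sketch below implements a distributed Nesterov-style gradient method with gradient tracking on a small ring network of agents, each holding a local quadratic objective. The specific update rules, step size eta, momentum parameter theta, mixing matrix W, and local objectives are illustrative assumptions for this sketch, not the exact Acc-DNGD recursion analyzed in the paper.

```python
import numpy as np

# Illustrative sketch: distributed Nesterov-style gradient descent with
# gradient tracking over a ring network. Update rules, parameters, and
# objectives below are assumptions, not the paper's exact Acc-DNGD method.

n, d = 5, 2                      # number of agents, decision-variable dimension
rng = np.random.default_rng(0)

# Local quadratic objectives f_i(x) = 0.5 * ||A_i x - b_i||^2 (assumed for illustration)
A = [rng.standard_normal((10, d)) for _ in range(n)]
b = [rng.standard_normal(10) for _ in range(n)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring network (assumption)
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

eta, theta = 0.01, 0.5           # step size and momentum parameter (assumed values)
x = np.zeros((n, d))             # local estimates of the global minimizer
v = np.zeros((n, d))             # momentum variables
y = np.zeros((n, d))             # extrapolated (Nesterov) points
s = np.array([grad(i, y[i]) for i in range(n)])   # gradient trackers

for t in range(500):
    g_old = np.array([grad(i, y[i]) for i in range(n)])
    x_new = W @ y - eta * s                       # consensus step plus tracked-gradient step
    v_new = W @ v - (eta / theta) * s             # momentum update
    y = theta * v_new + (1 - theta) * x_new       # Nesterov-style extrapolation
    x, v = x_new, v_new
    g_new = np.array([grad(i, y[i]) for i in range(n)])
    s = W @ s + g_new - g_old                     # track the network-average gradient

# Compare against the centralized least-squares solution
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("consensus error:", np.linalg.norm(x - x.mean(axis=0)))
print("distance to optimum:", np.linalg.norm(x.mean(axis=0) - x_star))
```

The gradient-tracking variable s keeps each agent's search direction close to the average gradient of all local functions, which is what lets purely local communication (multiplication by W) mimic a centralized accelerated gradient step.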
