Abstract

In this paper, we consider distributed optimization problems where the goal is to minimize a sum of objective functions over a multi-agent network. We focus on the case when the inter-agent communication is described by a strongly-connected, \emph{directed} graph. The proposed algorithm, ADD-OPT (Accelerated Distributed Directed Optimization), achieves the best known convergence rate for this class of problems,~$O(\mu^{k})$, $0<\mu<1$, given strongly-convex objective functions with globally Lipschitz-continuous gradients, where~$k$ is the number of iterations. Moreover, ADD-OPT supports a wider and more realistic range of step-sizes than existing work. In particular, we show that ADD-OPT converges for arbitrarily small (positive) step-sizes. Simulations further illustrate our results.
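For context, the problem class described above can be stated formally as follows. This is a standard formulation consistent with the abstract; the symbols $n$ (number of agents) and $f_i$ (the private objective of agent~$i$) are notational assumptions, not taken verbatim from the abstract:
$$
\min_{x \in \mathbb{R}^{p}} \; f(x) = \sum_{i=1}^{n} f_i(x),
$$
where each $f_i$ is known only to agent~$i$, is strongly convex, and has a globally Lipschitz-continuous gradient, and agents may exchange information only along the edges of a strongly-connected directed graph.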
