Abstract

In this article, the problem of distributed convex optimization is investigated, where the goal is to collectively minimize a sum of local convex functions over an unbalanced directed multiagent network. Each agent in the network possesses only its private local objective function, and the sum of all local objective functions constitutes the global objective function. We particularly consider the scenario where the underlying interaction network is strongly connected and the associated weight matrix is row stochastic. To solve the optimization problem collectively, a distributed algorithm with accelerated convergence, in which agents employ uncoordinated step-sizes, is presented by incorporating the consensus of multiagent networks into a distributed inexact gradient-tracking technique. Most existing methods require every agent to know the out-degrees of its in-neighbors, which is impractical and often unattainable, as explained in this article. By utilizing the small-gain theorem, we prove that if the maximum step-size is positive and sufficiently small (constrained by a specific upper bound), the proposed algorithm, termed SGT-FROST, converges geometrically to the optimal solution, provided that the objective functions are smooth and strongly convex. A convergence rate is also established. Simulations confirm the theoretical findings.
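To make the setting concrete, the following is a minimal NumPy sketch of a FROST-style gradient-tracking iteration over a row-stochastic digraph with uncoordinated step-sizes. The ring network, the quadratic objectives f_i(x) = 0.5*||Q_i x - b_i||^2, the step-size values, and the eigenvector-correction update shown here are illustrative assumptions, not the exact SGT-FROST recursion from the article.

```python
import numpy as np

# Illustrative sketch only: a FROST-style row-stochastic gradient-tracking
# iteration. The network, objectives, and step-sizes are assumptions;
# this is not the exact SGT-FROST update from the article.

np.random.seed(0)
n, d = 4, 2                        # 4 agents, 2-dimensional decision variable

# Strongly connected directed ring with self-loops; A is row stochastic.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 0.5
    A[i, (i - 1) % n] = 0.5        # agent i receives from in-neighbor i-1

# Smooth, strongly convex local objectives f_i(x) = 0.5*||Q_i x - b_i||^2.
Q = [np.eye(d) + 0.1 * np.random.randn(d, d) for _ in range(n)]
b = [np.random.randn(d) for _ in range(n)]
grad = lambda i, x: Q[i].T @ (Q[i] @ x - b[i])

alpha = 0.02 + 0.01 * np.random.rand(n)   # uncoordinated local step-sizes

x = np.zeros((n, d))               # local estimates of the optimizer
v = np.eye(n)                      # Perron eigenvector estimates, v_0^i = e_i
z = np.array([grad(i, x[i]) for i in range(n)])  # gradient trackers

for k in range(2000):
    x_new = A @ x - alpha[:, None] * z    # consensus step plus local descent
    v_new = A @ v                         # eigenvector estimation step
    # Gradient tracking with eigenvector correction, which compensates for
    # the imbalance induced by row-stochastic (non-doubly-stochastic) weights.
    z = A @ z + np.array([
        grad(i, x_new[i]) / v_new[i, i] - grad(i, x[i]) / v[i, i]
        for i in range(n)
    ])
    x, v = x_new, v_new

# Compare with the centralized optimizer of sum_i f_i.
Qsum = sum(q.T @ q for q in Q)
bsum = sum(Q[i].T @ b[i] for i in range(n))
x_star = np.linalg.solve(Qsum, bsum)
print("max agent error:", np.abs(x - x_star).max())
```

With sufficiently small step-sizes, all local estimates contract geometrically toward the global optimizer, mirroring the convergence behavior the abstract claims for SGT-FROST; the eigenvector estimates v let each agent rescale its gradient without knowing any out-degree information.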
