Abstract

This paper investigates the distributed optimization problem over multi-agent networks, in which the agents aim to collaboratively optimize the sum of all local objective functions, each of which is known only to a single agent. We focus on the setting where communication among agents is modeled by directed graphs. Building on the exact first-order method, we propose a fully distributed optimization algorithm for this problem. The proposed algorithm uses row-stochastic weight matrices and uncoordinated step-sizes, and drives all agents exactly to the global optimal solution. Under the assumptions that the global objective function is strongly convex and the local objective functions have Lipschitz continuous gradients, we show that the proposed algorithm converges linearly to the global optimal solution as long as the largest step-size among the agents does not exceed an explicitly characterized upper bound. Finally, numerical experiments are presented to demonstrate the correctness of the theoretical analysis.
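
To make the setting concrete, the following is a minimal numerical sketch of a gradient-tracking style update over a directed graph that combines a row-stochastic weight matrix, uncoordinated step-sizes, and Perron-eigenvector estimation, in the spirit of row-stochastic methods such as FROST. The quadratic local objectives, the small ring graph, and the step-size range are illustrative assumptions; the paper's actual update rule, built on the exact first-order method, may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                          # number of agents
b = rng.normal(size=n)                         # hypothetical local data
# Local objectives f_i(x) = 0.5 * (x - b_i)^2; their sum is minimized at b.mean().
grad = lambda x: x - b                         # stacked local gradients

# Directed, strongly connected graph: ring 0 -> 1 -> ... -> 4 -> 0 plus edge 2 -> 0.
adj = np.eye(n)
for i in range(n):
    adj[i, (i - 1) % n] = 1.0
adj[0, 2] = 1.0
A = adj / adj.sum(axis=1, keepdims=True)       # row-stochastic weight matrix

alpha = rng.uniform(0.02, 0.05, size=n)        # uncoordinated step-sizes

x = rng.normal(size=n)                         # local estimates
z = grad(x)                                    # gradient-tracking variables
V = np.eye(n)                                  # Perron-eigenvector estimation states
v = np.diag(V).copy()                          # agent i uses the i-th entry of its own state

for _ in range(2000):
    V = A @ V                                  # eigenvector-estimation consensus
    v_new = np.diag(V).copy()
    x_new = A @ x - alpha * z                  # consensus step plus local descent
    z = A @ z + grad(x_new) / v_new - grad(x) / v   # track the rescaled gradients
    x, v = x_new, v_new

print("local estimates :", np.round(x, 6))
print("global optimum  :", round(b.mean(), 6))
```

Under these assumptions every local estimate approaches the minimizer of the sum of the quadratics (the mean of the b_i), illustrating how row-stochastic weights alone can yield exact convergence once the Perron eigenvector is estimated and the step-sizes are kept small enough.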
