Abstract

This paper studies a class of distributed optimization algorithms executed by a set of agents, where each agent has access only to its own local convex objective function and the goal is to jointly minimize the sum of all the local functions. Communication among the agents is described by a sequence of time-varying directed graphs assumed to be uniformly strongly connected. The algorithm employs column-stochastic mixing matrices, which steer all the agents to asymptotically converge to a global, consensual optimal solution even when the step-sizes are uncoordinated. Two fairly standard conditions for achieving a geometric convergence rate are established under the assumption that the objective functions are strongly convex and have Lipschitz continuous gradients. The theoretical analysis shows that the distributed algorithm drives the whole network to converge geometrically to an optimal solution of the convex optimization problem as long as the uncoordinated step-sizes do not exceed some upper bound. We also give an explicit analysis of the convergence rate of our algorithm through a different approach. Finally, simulation results illustrate the feasibility of the proposed algorithm and the effectiveness of the theoretical analysis.
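For concreteness, the cooperative problem described above can be stated as follows (the notation here is illustrative rather than taken from the paper: $N$ denotes the number of agents, $f_i$ agent $i$'s local convex objective, and $x \in \mathbb{R}^p$ the common decision variable):

\[
\min_{x \in \mathbb{R}^p} \; f(x) \;=\; \sum_{i=1}^{N} f_i(x),
\]

where each agent $i$ can evaluate only its own $f_i$ and must reach a minimizer of $f$ by exchanging information with its neighbors over the time-varying directed graphs.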
