Abstract

In this paper, we study distributed multiagent optimisation over undirected graphs. The problem is to minimise a global objective function formed as the sum of a set of local objective functions. Recent research on this problem has made significant progress using primal-dual methods, but the connections among the different algorithms remain unclear. Through an augmented Lagrangian analysis, this paper shows that several state-of-the-art algorithms differ only in slightly different last dual gradient terms. We then propose a distributed Nesterov accelerated optimisation algorithm that allows the use of a doubly stochastic weight matrix and employs nonidentical local step-sizes. We analyse the convergence of the proposed algorithm using the generalised small gain theorem, under the assumption that each local objective function is strongly convex with a Lipschitz continuous gradient. We prove that the sequence generated by the proposed algorithm converges linearly to an optimal solution provided that the largest step-size is positive and below an explicitly estimated upper bound, and that the largest momentum parameter is nonnegative and below an upper bound determined by the largest step-size. Simulation results further illustrate the efficacy of the proposed algorithm.
