Abstract

This paper considers the distributed optimization problem of minimizing a global cost function, formed as the sum of local smooth cost functions, using only local information exchange. A standard assumption for proving exponential/linear convergence of existing distributed first-order methods is strong convexity of the cost functions, an assumption that fails in many practical applications. In this paper, we propose a continuous-time distributed primal-dual gradient descent algorithm and show that it converges exponentially to a global minimizer under the assumption that the global cost function satisfies the restricted secant inequality condition. This condition is weaker than strong convexity, and the global minimizer is not necessarily unique. Moreover, a discrete-time distributed primal-dual algorithm is developed from the continuous-time algorithm via Euler's approximation method; it, too, converges linearly to a global minimizer under the same condition. The theoretical results are illustrated by numerical simulations.
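
For context, the restricted secant inequality (RSI) is commonly stated in the following standard form; the paper's exact notation and constants may differ:

\[
\langle \nabla f(x),\; x - \mathcal{P}_{X^\star}(x) \rangle \;\ge\; \mu \,\lVert x - \mathcal{P}_{X^\star}(x) \rVert^2 \quad \text{for all } x \text{ and some } \mu > 0,
\]

where X* is the (possibly non-singleton) set of global minimizers and P_{X*}(x) is the Euclidean projection of x onto it. Strong convexity with modulus mu implies RSI with the same mu, but not conversely: an underdetermined least-squares cost, for instance, satisfies RSI while having infinitely many minimizers.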

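To make the discrete-time version concrete, here is a minimal runnable sketch of an Euler-discretized distributed primal-dual update of the kind described above, applied to a rank-deficient least-squares problem (which satisfies the restricted secant inequality without being strongly convex). The ring topology, step size eta, gains alpha and beta, and the local costs are all hypothetical choices for illustration, not the paper's exact algorithm or parameters.

import numpy as np

# Hypothetical toy problem: n agents jointly minimize f(x) = sum_i 0.5*(a_i @ x - b_i)^2.
# The third column of A is zeroed so the global cost is NOT strongly convex
# (the minimizer set is a line), yet it still satisfies the restricted secant inequality.
rng = np.random.default_rng(0)
n, d = 4, 3
A = rng.standard_normal((n, d))
A[:, 2] = 0.0
b = rng.standard_normal(n)

# Laplacian of a 4-cycle communication graph (each agent talks to two neighbors).
L = 2 * np.eye(n) - np.roll(np.eye(n), 1, axis=0) - np.roll(np.eye(n), -1, axis=0)

def local_grad(i, xi):
    # Gradient of agent i's local cost f_i(x) = 0.5*(a_i @ x - b_i)^2.
    return (A[i] @ xi - b[i]) * A[i]

x = np.zeros((n, d))   # row i: agent i's local copy of the decision variable
v = np.zeros((n, d))   # row i: agent i's dual variable for the consensus constraint
eta, alpha, beta = 0.05, 1.0, 1.0   # hypothetical step size and gains

for k in range(2000):
    Lx = L @ x                                       # neighbor disagreement, sum_j L_ij x_j
    g = np.stack([local_grad(i, x[i]) for i in range(n)])
    x_next = x - eta * (alpha * g + beta * Lx + v)   # primal Euler step
    v = v + eta * beta * Lx                          # dual ascent on the consensus violation
    x = x_next

print("consensus error :", np.linalg.norm(L @ x))
print("global grad norm:", np.linalg.norm(A.T @ (A @ x.mean(axis=0) - b)))

In this toy run both the consensus error and the global gradient norm should decay geometrically, mirroring the linear convergence claimed above, even though the minimizer is non-unique.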