Abstract

We revisit the recent Gradient Tracking algorithm for distributed consensus optimization from a control-theoretic viewpoint. We show that the algorithm can be constructed by solving a servomechanism control problem stemming from the distributed implementation of a centralized gradient method. Moreover, we show that, when expressed in suitable coordinates, the Gradient Tracking algorithm embeds an integral action fed by a signal related to the consensus error. Finally, we provide an alternative convergence analysis based on Lyapunov arguments that also proves exponential asymptotic stability of the optimal equilibrium.
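To make the object of study concrete, the following is a minimal sketch of the standard Gradient Tracking updates on a toy problem. The weight matrix, local costs, step size, and iteration count are illustrative assumptions, not taken from the paper: each agent runs a consensus step plus a descent step along a tracker `s`, which performs dynamic average consensus on the local gradients (this is the signal the paper interprets as an integral action on the consensus error).

```python
import numpy as np

# Illustrative sketch (assumed problem data, not from the paper):
# agent i minimizes f_i(x) = 0.5*(x - b_i)^2, so the minimizer of
# sum_i f_i is the mean of the b_i.
n = 4
b = np.array([1.0, 2.0, 3.0, 4.0])   # local targets
x_star = b.mean()                    # centralized optimum

# Doubly stochastic weights for a 4-agent ring graph.
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

def grad(x):
    return x - b                     # stacked local gradients

alpha = 0.1                          # step size (assumed)
x = np.zeros(n)                      # local estimates
s = grad(x)                          # gradient trackers

for _ in range(500):
    x_new = W @ x - alpha * s                 # consensus + descent along tracker
    s = W @ s + grad(x_new) - grad(x)         # dynamic average consensus on gradients
    x = x_new

print(np.allclose(x, x_star, atol=1e-6))      # all agents agree on the optimum
```

Note that, unlike plain distributed gradient descent with a constant step size, the tracker `s` converges to the average gradient, so every local estimate reaches the exact optimum rather than a neighborhood of it.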
