Abstract

This paper develops the application of the alternating direction method of multipliers (ADMM) to optimize a dynamic objective function in a decentralized multi-agent system. At each time slot, agents in the network observe local functions and cooperate to track the optimal time-varying argument of the sum objective. This cooperation is based on maintaining local primal variables that estimate the value of the optimal argument and auxiliary dual variables that encourage proximity with neighboring estimates. Primal and dual variables are updated by an ADMM iteration that can be implemented in a distributed manner, whereby local updates require access to local variables and the most recent primal variables from adjacent agents. For objective functions that are strongly convex and have Lipschitz continuous gradients, the distances from the primal and dual iterates to their corresponding time-varying optimal values are shown to converge to a steady-state gap. This gap is explicitly characterized in terms of the condition number of the objective function, the condition number of the network, defined as the ratio between the largest and smallest nonzero Laplacian eigenvalues, and a bound on the drifts of the optimal primal variables and the optimal gradients. Numerical experiments corroborate the theoretical findings and show that the results also hold for non-differentiable and non-strongly convex primal objectives.
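The tracking scheme described above can be illustrated with a minimal sketch. The snippet below is not the paper's algorithm verbatim; it instantiates a standard decentralized consensus ADMM recursion on an assumed toy problem, where each agent holds a scalar quadratic local objective f_{i,t}(x) = 0.5(x − a_i(t))², so the network-wide optimum at slot t is the average of the drifting targets a_i(t). All function names, the ring topology, and the penalty parameter rho are illustrative choices, not taken from the paper.

```python
import numpy as np

def decentralized_admm_step(x, alpha, a, adj, deg, rho):
    """One synchronous ADMM iteration over all agents.

    x, alpha : current primal / dual variables (one scalar per agent)
    a        : current local targets a_i(t) of the quadratic objectives
    adj      : symmetric 0/1 adjacency matrix, no self-loops
    deg      : node degrees
    """
    neigh_sum = adj @ x  # sum of neighbors' most recent primal variables
    # Closed-form primal update for the quadratic local objective
    x_new = (a - alpha + rho * (deg * x + neigh_sum)) / (1.0 + 2.0 * rho * deg)
    # Dual update penalizes disagreement with neighboring estimates
    alpha_new = alpha + rho * (deg * x_new - adj @ x_new)
    return x_new, alpha_new

def track(T, drift, rho=1.0, n=4, seed=0):
    """Run T time slots; each local target moves by `drift` per slot.
    Returns the worst-case distance of the primal iterates to the
    time-varying optimum (the mean of the current targets)."""
    rng = np.random.default_rng(seed)
    adj = np.zeros((n, n))
    for i in range(n):  # ring topology (illustrative choice)
        adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
    deg = adj.sum(axis=1)
    a0 = rng.standard_normal(n)
    x = np.zeros(n)
    alpha = np.zeros(n)
    for t in range(T):
        a = a0 + drift * t  # time-varying local targets
        x, alpha = decentralized_admm_step(x, alpha, a, adj, deg, rho)
    return np.max(np.abs(x - a.mean()))

# With static objectives the iterates converge to the exact optimum;
# with drifting objectives they track it up to a bounded steady-state gap.
static_err = track(T=300, drift=0.0)
dynamic_err = track(T=300, drift=0.01)
```

Running the static case (`drift=0.0`) recovers ordinary decentralized ADMM and the error vanishes, while the drifting case settles to a bounded tracking gap, mirroring the steady-state behavior characterized in the paper.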
