Abstract

This paper develops the application of the alternating direction method of multipliers (ADMM) to the optimization of a dynamic objective function in a decentralized multiagent system. At each time slot, each agent observes a new local objective function, and all agents cooperate to minimize the sum of these local objectives over a common optimization variable. Specifically, each agent updates its own primal and dual variables and requires only the most recent primal variables of its neighbors. We prove that if each local objective function is strongly convex and has a Lipschitz continuous gradient, and the primal optimal solutions drift slowly enough with time, then the primal and dual variables remain close to their optimal values; this closeness is explicitly characterized by the spectral gap of the network, the condition number of the objective function, and the ADMM parameter.
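
To fix ideas, the following is a minimal numerical sketch of a decentralized consensus-ADMM recursion of the kind described above. It is not the paper's algorithm verbatim: it assumes simple time-varying quadratic local costs f_i^k(x) = (1/2)||x - theta_i^k||^2 on a ring network, and the names (theta, c, neighbors, drift size) are hypothetical choices made for illustration. Each agent updates its primal variable in closed form and its dual variable using only the most recent primal variables of its neighbors.

    import numpy as np

    # Illustrative sketch only: decentralized dynamic consensus ADMM with
    # quadratic local costs f_i^k(x) = 0.5*||x - theta_i^k||^2 on a ring graph.
    # The parameter names and the drift model are assumptions for this sketch.

    rng = np.random.default_rng(0)
    n, dim, c, T = 6, 3, 1.0, 50          # agents, variable dimension, ADMM parameter, time slots

    # Ring topology: each agent communicates with its two neighbors.
    neighbors = [((i - 1) % n, (i + 1) % n) for i in range(n)]

    x = np.zeros((n, dim))                 # primal variable kept by each agent
    p = np.zeros((n, dim))                 # dual variable kept by each agent
    theta = rng.normal(size=(n, dim))      # minimizers of the local costs

    for k in range(T):
        # The local objectives drift slowly over time.
        theta += 0.01 * rng.normal(size=(n, dim))

        # Primal update: closed-form minimizer of
        # f_i^k(x) + p_i^T x + c * sum_{j in N_i} ||x - (x_i + x_j)/2||^2.
        x_new = np.empty_like(x)
        for i in range(n):
            d_i = len(neighbors[i])
            nbr_sum = sum(x[i] + x[j] for j in neighbors[i])
            x_new[i] = (theta[i] - p[i] + c * nbr_sum) / (1.0 + 2.0 * c * d_i)
        x = x_new

        # Dual update: driven by disagreement with neighbors' latest primals.
        for i in range(n):
            p[i] += c * sum(x[i] - x[j] for j in neighbors[i])

    # With these quadratic costs the network-wide optimum is the average of the theta_i.
    print("consensus error:", np.max(np.abs(x - theta.mean(axis=0))))

Under this cost model, the primal variables track the drifting network-wide minimizer (the average of the theta_i^k), with a tracking error governed by the drift rate, the penalty parameter c, and the connectivity of the graph, consistent with the characterization stated above.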
