Abstract

Solving optimization problems in multi-agent networks, where each agent has only partial knowledge of the problem, has become increasingly important. In this paper, we consider the problem of minimizing the sum of $n$ convex functions, each of which is known by only one agent. We show that the generalized distributed alternating direction method of multipliers (ADMM) converges Q-linearly to the solution of this optimization problem if the overall objective function is strongly convex, while the functions known by the individual agents are allowed to be merely convex. Establishing Q-linear convergence allows for tracking statements that cannot be made if only R-linear convergence is guaranteed; in scenarios where the objective functions vary on the same time scale as the algorithm updates, R-linear convergence is typically insufficient. Further, we establish the equivalence between generalized distributed ADMM and the proximal exact first-order algorithm (P-EXTRA) for a subset of mixing matrices. This equivalence yields insight into the convergence of P-EXTRA when overshooting to accelerate convergence.
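
For concreteness, the problem and the two convergence notions the abstract contrasts can be written as follows. These are the standard definitions; the symbols $f_i$, $x^\star$, $x^k$, $\rho$, and $C$ are introduced here for illustration and are not taken from the paper:

```latex
% Problem: n agents cooperatively minimize the sum of their private costs
\min_{x \in \mathbb{R}^p} \; f(x) = \sum_{i=1}^{n} f_i(x),
\qquad f_i \text{ convex, known only to agent } i, \quad f \text{ strongly convex.}

% Q-linear convergence: the error contracts by a fixed factor at every
% iteration, which is what supports per-iteration tracking statements:
\|x^{k+1} - x^\star\| \le \rho \, \|x^{k} - x^\star\|, \qquad \rho \in (0,1).

% R-linear convergence: the error is only dominated by a geometric sequence,
% so individual iterations need not contract:
\|x^{k} - x^\star\| \le C \rho^{k}, \qquad C > 0, \ \rho \in (0,1).
```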
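To make the distributed setting concrete, here is a minimal sketch of the classical global-consensus ADMM of Boyd et al. (2011) on a toy instance. This is not the paper's generalized distributed ADMM; the scalar quadratic local costs $f_i(x) = \tfrac{1}{2} a_i (x - b_i)^2$ and all variable names are illustrative assumptions, chosen so that each local update has a closed form:

```python
# Sketch of global-consensus ADMM (Boyd et al., 2011) for min_x sum_i f_i(x),
# where each agent i privately holds f_i. Hypothetical quadratic local costs;
# each a_i > 0, so the sum is strongly convex even though no single f_i need be.
import numpy as np

n = 5                           # number of agents
rho = 1.0                       # ADMM penalty parameter
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, n)    # local curvatures
b = rng.uniform(-1.0, 1.0, n)   # local minimizers

x = np.zeros(n)                 # local primal variables, one per agent
z = 0.0                         # global consensus variable
u = np.zeros(n)                 # scaled dual variables

for k in range(100):
    # Local x-update: argmin_x f_i(x) + (rho/2)(x - z + u_i)^2, in closed form
    x = (a * b + rho * (z - u)) / (a + rho)
    # Consensus z-update: average of the agents' proposals
    z = np.mean(x + u)
    # Dual ascent on the consensus constraints x_i = z
    u = u + x - z

x_star = np.sum(a * b) / np.sum(a)   # analytic minimizer of sum_i f_i
print(f"ADMM consensus value: {z:.6f}, true minimizer: {x_star:.6f}")
```

In this global-consensus form the $z$-update plays the role that the mixing matrix plays in the fully decentralized algorithms (generalized distributed ADMM, P-EXTRA) studied in the paper, where averaging happens only between network neighbors.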

