Abstract

The augmented Lagrangian method (ALM) is a classical optimization tool that solves a given “difficult” (constrained) problem by finding solutions of a sequence of “easier” (often unconstrained) subproblems with respect to the original (primal) variable, wherein constraint satisfaction is controlled via the so-called dual variables. ALM is highly flexible with respect to how the primal subproblems are solved, giving rise to a plethora of different primal–dual methods. The powerful ALM mechanism has recently proved very successful in various large-scale and distributed applications. In addition, several significant advances have appeared, primarily precise complexity results with respect to computational and communication costs in the presence of inexact updates, and the design and analysis of novel optimal methods for distributed consensus optimization. We provide a tutorial-style introduction to ALM and its variants for solving convex optimization problems in large-scale and distributed settings. We describe control-theoretic tools for the algorithms’ analysis and design, survey recent results, and provide novel insights in the context of two emerging applications: federated learning and distributed energy trading.
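As a toy illustration of the mechanism the abstract describes (not an algorithm from the paper itself), the ALM loop for an equality-constrained quadratic program can be sketched as follows. The problem instance, penalty parameter `rho`, and iteration count are arbitrary choices for the sketch; the primal subproblem here happens to have a closed-form solution because the objective is quadratic.

```python
import numpy as np

def alm_equality_qp(A, b, rho=10.0, iters=50):
    """Sketch of ALM for: min_x 0.5*||x||^2  subject to  A x = b."""
    m, n = A.shape
    lam = np.zeros(m)   # dual variables: control constraint satisfaction
    x = np.zeros(n)
    for _ in range(iters):
        # Primal subproblem (the "easier", unconstrained problem):
        #   argmin_x 0.5*||x||^2 + lam^T (A x - b) + (rho/2)*||A x - b||^2
        # For this quadratic objective it reduces to a linear solve.
        x = np.linalg.solve(np.eye(n) + rho * A.T @ A,
                            A.T @ (rho * b - lam))
        # Dual ascent step on the constraint residual.
        lam = lam + rho * (A @ x - b)
    return x, lam

# Arbitrary example instance (hypothetical data, for illustration only).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])
x, lam = alm_equality_qp(A, b)

# For this objective the analytic solution is the minimum-norm one,
# x* = A^T (A A^T)^{-1} b, which the iterates approach.
x_star = A.T @ np.linalg.solve(A @ A.T, b)
```

The same primal-subproblem-plus-dual-update template underlies the primal–dual variants the paper surveys; what changes is how (and how inexactly) the primal subproblem is solved.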
