Abstract

We consider the Accelerated Distributed Augmented Lagrangians (ADAL) algorithm, a distributed optimization algorithm recently developed by the authors for problems in which multiple agents optimize a separable convex objective function subject to convex local constraints and linear coupling constraints. Optimization using augmented Lagrangians (ALs) combines low computational complexity with fast convergence, owing to the regularization terms included in the AL. However, decentralized methods that employ ALs are few, as decomposition of ALs is a particularly challenging task. ADAL is a primal-dual iterative scheme in which, at every iteration, the agents locally optimize a novel separable approximation of the AL and then appropriately update their primal and dual variables, in a way that ensures convergence to their respective optimal sets. In this paper, we prove that ADAL has a worst-case O(1/k) convergence rate, where k denotes the number of iterations. The convergence rate is established in an ergodic sense, i.e., it refers to the ergodic average of the generated sequences of primal variables up to iteration k.
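
For concreteness, the following is an illustrative sketch of the problem class that ADAL targets; the symbols f_i, A_i, X_i, b, and \rho are representative notation introduced here for illustration, not necessarily the notation of the paper:

\min_{x_1,\dots,x_N} \; \sum_{i=1}^{N} f_i(x_i) \quad \text{s.t.} \quad \sum_{i=1}^{N} A_i x_i = b, \qquad x_i \in X_i,

where each f_i is convex, each X_i is a local convex constraint set, and the linear equality couples the agents. The associated augmented Lagrangian is

L_\rho(x,\lambda) \;=\; \sum_{i} f_i(x_i) \;+\; \lambda^\top \Big( \sum_{i} A_i x_i - b \Big) \;+\; \frac{\rho}{2} \Big\| \sum_{i} A_i x_i - b \Big\|^2,

whose quadratic penalty term is not separable across agents; this non-separability is what makes decomposition of ALs challenging and what the separable approximation in ADAL is designed to address. In this notation, the ergodic O(1/k) rate refers to quantities evaluated at the running averages \bar{x}_i^{\,k} = \frac{1}{k} \sum_{t=1}^{k} x_i^{t} of the primal iterates.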
