Abstract

We focus on the problem of minimizing a finite sum f(x) = Σᵢ₌₁ⁿ fᵢ(x) of n functions fᵢ, where each fᵢ is convex and available only locally to agent i. The n agents are connected in a directed network G(V, E). In this article, we present the Directed-Distributed Alternating Direction Method of Multipliers (D-DistADMM) algorithm, an Alternating Direction Method of Multipliers (ADMM) based scheme that utilizes a finite-time "approximate" consensus method to solve the above optimization problem in a distributed manner. At each iteration of the proposed scheme, the agents solve their local optimization problems and utilize an approximate consensus protocol to update a local estimate of the global optimization variable. We show that for convex and not-necessarily differentiable objective functions the proposed D-DistADMM method converges at a rate O(1/k), where k is the iteration counter, in terms of the difference between the Lagrangian function evaluated at iteration k of the D-DistADMM algorithm and its value at the optimal solution. We further demonstrate the features of our algorithm by solving a distributed least squares problem.
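As a concrete instance of the finite-sum structure described above, the distributed least squares problem takes fᵢ(x) = ‖Aᵢx − bᵢ‖². The sketch below only illustrates this problem decomposition, not the D-DistADMM algorithm itself; all names, agent counts, and dimensions are illustrative assumptions.

```python
import numpy as np

# Illustrative setup: agent i privately holds (A_i, b_i) and its local
# objective f_i(x) = ||A_i x - b_i||^2. Dimensions are arbitrary.
rng = np.random.default_rng(0)
n_agents, rows, dim = 4, 5, 3
A = [rng.standard_normal((rows, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(rows) for _ in range(n_agents)]

def f_local(i, x):
    """Local least-squares objective of agent i."""
    r = A[i] @ x - b[i]
    return r @ r

def f_global(x):
    """Global finite-sum objective f(x) = sum_i f_i(x)."""
    return sum(f_local(i, x) for i in range(n_agents))

# Sanity check: the finite sum of local objectives equals the
# stacked (centralized) least-squares objective.
x = rng.standard_normal(dim)
r = np.vstack(A) @ x - np.concatenate(b)
assert np.isclose(f_global(x), r @ r)
```

In a distributed ADMM scheme of this kind, each agent minimizes only its own f_local while a consensus step reconciles the agents' local copies of x.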
