Abstract

This article reports an algorithm, with convergence rate guarantees, for multi-agent distributed optimization problems with a common decision variable, local linear equality and inequality constraints, and set constraints. The algorithm accrues all the benefits of the Alternating Direction Method of Multipliers (ADMM) approach. It also overcomes the limitations of existing methods for convex optimization problems with linear inequality, equality, and set constraints by allowing directed communication topologies. Moreover, the algorithm can be synthesized distributively. The developed algorithm has: (i) an $O(1/k)$ rate of convergence, where $k$ is the iteration counter, when the individual functions are convex but not necessarily differentiable, and (ii) a geometric rate of convergence to any arbitrarily small neighborhood of the optimal solution when the objective functions are smooth and restricted strongly convex at the optimal solution. The efficacy of the algorithm is evaluated by comparison with state-of-the-art constrained optimization algorithms on a constrained distributed $\ell_1$-regularized logistic regression problem, and with unconstrained optimization algorithms on an $\ell_1$-regularized Huber loss minimization problem. Additionally, a comparison of the algorithm's performance with other algorithms in the literature that utilize multiple communication steps is provided.
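For concreteness, the problem class described above can be sketched as follows; the notation here ($f_i$, $A_i$, $b_i$, $C_i$, $d_i$, $\mathcal{X}_i$, and the number of agents $N$) is illustrative and not taken from the paper itself:
\[
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & \sum_{i=1}^{N} f_i(x) \\
\text{s.t.} \quad & A_i x = b_i, \quad C_i x \le d_i, \quad x \in \mathcal{X}_i, \qquad i = 1, \dots, N,
\end{aligned}
\]
where $x$ is the common decision variable and each agent $i$ privately holds its objective $f_i$, its linear equality and inequality constraint data $(A_i, b_i, C_i, d_i)$, and its local set constraint $\mathcal{X}_i$, exchanging information only with its neighbors over a directed communication topology.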
