Abstract

In this paper, we study the problem of minimizing a sum of convex objective functions that are locally available to agents in a network. Distributed optimization algorithms enable the agents to solve this problem cooperatively through local computations and communication with neighbors. Lagrangian-based distributed optimization algorithms have received significant attention in recent years due to their exact convergence guarantees. However, many of these algorithms converge slowly or are expensive to execute. In this paper, we propose a flexible framework of first-order primal-dual algorithms (FlexPD), which allows an arbitrary number of primal steps per iteration. This framework includes three algorithms, FlexPD-F, FlexPD-G, and FlexPD-C, that can be customized for applications with different computation and communication requirements. For strongly convex objective functions with Lipschitz continuous gradients, we establish linear convergence of the proposed framework to the optimal solution by carefully controlling the stepsize choices. Simulation results confirm the superior performance of our framework compared to existing methods.
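To make the setting concrete, the sketch below implements one generic primal-dual pattern with multiple gradient steps on the primal variable per dual update, applied to a toy consensus problem. This is an illustration under stated assumptions, not the paper's exact FlexPD-F, FlexPD-G, or FlexPD-C updates: the ring network, the quadratic local objectives f_i(x) = (1/2)(x - b_i)^2, the Laplacian-based consensus constraint, and the stepsize `alpha` and primal-step count `T` are all hypothetical choices made for this example.

```python
import numpy as np

# Hypothetical toy setup: n agents, a scalar decision variable, and local
# quadratic objectives f_i(x) = 0.5 * (x - b_i)^2, so the minimizer of
# sum_i f_i is the mean of the b_i.
rng = np.random.default_rng(0)
n = 5
b = rng.normal(size=n)

# Graph Laplacian of a ring network (each agent communicates with 2 neighbors).
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L_graph = np.diag(A.sum(axis=1)) - A

def grad_f(x):
    # Stacked gradients of the local quadratic objectives.
    return x - b

alpha = 0.1   # stepsize (would need tuning in general)
T = 3         # number of primal steps per iteration: the "flex" knob

x = np.zeros(n)    # local copies of the decision variable, one per agent
lam = np.zeros(n)  # dual variables for the consensus constraint L x = 0

for k in range(500):
    # T gradient-descent steps on the augmented Lagrangian
    # f(x) + lam^T L x + 0.5 * x^T L x in the primal variable.
    for _ in range(T):
        x = x - alpha * (grad_f(x) + L_graph @ lam + L_graph @ x)
    # One gradient-ascent step in the dual variable.
    lam = lam + alpha * (L_graph @ x)

print("consensus estimate:", x)
print("true optimum:     ", b.mean())
```

In this pattern, increasing `T` trades extra local computation per iteration for fewer dual updates, i.e., fewer communication rounds, which is the kind of computation/communication trade-off the framework is designed to expose.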
