Abstract

In this article, we consider distributed nonconvex optimization over an undirected connected network. Each agent can access only its own local nonconvex cost function, and all agents collaborate to minimize the sum of these functions through local information exchange. We first propose a modified alternating direction method of multipliers (ADMM) algorithm. We show that the proposed algorithm converges to a stationary point at the sublinear rate <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$\mathcal {O}(1/T)$</tex-math></inline-formula> if each local cost function is smooth and the algorithm parameters are chosen appropriately. We also show that the proposed algorithm converges linearly to a global optimum under the additional condition that the global cost function satisfies the Polyak–Łojasiewicz condition, which is weaker than the conditions commonly used to establish linear convergence rates, including strong convexity. We then propose a distributed linearized ADMM (L-ADMM) algorithm, derived from the modified ADMM algorithm by linearizing the local cost function at each iteration. We show that the L-ADMM algorithm has the same convergence properties as the modified ADMM algorithm under the same conditions. Numerical simulations are included to verify the correctness and efficiency of the proposed algorithms.

