Abstract

In this paper, we develop a regularized Fenchel dual gradient method (RFDGM) that allows the nodes in a time-varying undirected network to find a common decision, in a fully distributed fashion, which minimizes the sum of their local objective functions subject to their local constraints. Unlike most existing distributed optimization algorithms that also cope with time-varying networks, RFDGM can handle problems with general convex objective functions and distinct local constraints, and still admits non-asymptotic convergence guarantees. Specifically, under a standard network connectivity condition, we show that RFDGM is guaranteed to reach ϵ-accuracy in both optimality and feasibility within an explicitly bounded number of iterations; this iteration complexity improves further when the local objective functions are strongly convex but not necessarily differentiable. Finally, simulation results demonstrate the effectiveness of RFDGM in practice.
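For concreteness, the problem described above can be formalized as follows. This is a minimal sketch assuming n nodes, where node i holds a local convex objective f_i and a local constraint set X_i; these symbols are introduced here for illustration and may differ from the paper's own notation:

\[
\min_{x \in \mathbb{R}^{d}} \;\; \sum_{i=1}^{n} f_i(x)
\quad \text{subject to} \quad
x \in \bigcap_{i=1}^{n} X_i ,
\]

where each node i knows only its own f_i and X_i, and all nodes must agree on a common decision x by exchanging information solely with their current neighbors in the time-varying undirected network.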
