Abstract

Many existing distributed optimization algorithms apply to time-varying networks, but their convergence results are established under the standard $B$-connectivity condition. In this letter, we establish the convergence of the Fenchel dual gradient methods, proposed in our prior work, under a less restrictive and indeed minimal connectivity condition on undirected networks, referred to as joint connectivity, which requires only that the agent interactions occurring infinitely often form a connected graph. Compared with existing distributed optimization algorithms that are guaranteed to converge under joint connectivity, the Fenchel dual gradient methods can handle nonlinear local cost functions and nonidentical local constraints. We also demonstrate the effectiveness of the Fenchel dual gradient methods over time-varying networks satisfying joint connectivity via simulations.
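To illustrate the setting, the following is a minimal sketch (not the paper's exact algorithm) of a dual-gradient consensus optimization scheme over a time-varying undirected network satisfying joint connectivity: each agent holds a local quadratic cost, only one edge is active per iteration, yet every edge recurs infinitely often, so the recurring interactions form a connected graph. All numerical values and the step size are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: dual gradient ascent for min sum_i f_i(x_i) subject to
# consensus x_i = x_j over an undirected, time-varying network. Here
# f_i(x) = 0.5*(x - a_i)^2, so the primal minimizer given the aggregated
# dual "prices" s_i is available in closed form: x_i = a_i - s_i.
a = np.array([1.0, 3.0, 5.0, 7.0])        # local targets; optimum is mean(a)
n = len(a)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # union graph (a cycle) is connected
lam = {e: 0.0 for e in edges}             # one dual variable per edge
alpha = 0.3                               # dual step size (assumed)

for t in range(2000):
    # Joint connectivity: only one edge is active at each time, but each
    # edge is activated infinitely often as t grows.
    active = [edges[t % len(edges)]]
    s = np.zeros(n)
    for (i, j) in edges:                  # aggregate dual prices per agent
        s[i] += lam[(i, j)]
        s[j] -= lam[(i, j)]
    x = a - s                             # local primal minimization
    for (i, j) in active:                 # dual ascent on active edges only
        lam[(i, j)] += alpha * (x[i] - x[j])

print(x)  # all entries approach mean(a) = 4.0
```

In this sketch the dual updates occur only on the currently active edge, mirroring how convergence must be driven solely by interactions that recur over time rather than by a graph that is connected at every instant.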
