Abstract

We study optimal distributed first-order optimization algorithms when the network (i.e., communication constraints between the agents) changes with time. This problem is motivated by scenarios where agents experience network malfunctions. We provide a sufficient condition that guarantees a convergence rate with optimal (up to logarithmic terms) dependencies on the network and function parameters if the network changes are constrained to a small percentage $\alpha$ of the total number of iterations. We call such networks slowly time-varying networks. Moreover, we show that Nesterov's method has an iteration complexity of $\Omega \big( \big(\sqrt{\kappa_\Phi \cdot \bar{\chi}} + \alpha \log(\kappa_\Phi \cdot \bar{\chi})\big) \log(1 / \varepsilon)\big)$ for decentralized algorithms, where $\kappa_\Phi$ is the condition number of the objective function and $\bar\chi$ is a worst-case bound on the condition number of the sequence of communication graphs. Additionally, we provide an explicit upper bound on $\alpha$ in terms of the condition number of the objective function and the network topologies.
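For intuition only (this is not the paper's algorithm), the minimal Python sketch below illustrates the setting: agents run consensus-plus-gradient steps, and the communication graph is allowed to switch in at most an $\alpha$-fraction of the $T$ iterations, matching the notion of a slowly time-varying network above. All names (`mixing_matrix`, `decentralized_gd`) and the choice of Metropolis weights are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mixing_matrix(adj):
    """Metropolis weights: a doubly stochastic mixing matrix for an
    undirected graph given by a 0/1 adjacency matrix `adj`."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

def decentralized_gd(grads, x0, graphs, change_iters, T, lr):
    """Plain consensus + local gradient steps. The graph may switch only
    at iterations in `change_iters`; if len(change_iters) <= alpha * T,
    the network is slowly time-varying in the sense of the abstract."""
    x = x0.copy()                        # row i = local iterate of agent i
    W, g = mixing_matrix(graphs[0]), 0
    for t in range(T):
        if t in change_iters:            # rare topology change
            g = (g + 1) % len(graphs)
            W = mixing_matrix(graphs[g])
        grad = np.stack([grads[i](x[i]) for i in range(x.shape[0])])
        x = W @ x - lr * grad            # gossip mixing, then gradient step
    return x.mean(axis=0)
```

Metropolis weights are one standard way to obtain a doubly stochastic mixing matrix for any undirected graph, which is what makes the gossip step contract toward consensus; the quantity $\bar\chi$ in the abstract can be read as a worst-case measure of how slowly that contraction happens over the graph sequence.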
