A weight-balanced network plays an important role in the exact convergence of distributed optimization algorithms, but this condition is not always satisfied in practice. Unlike most existing works, which focus on designing distributed algorithms, we analyze the convergence of a well-known distributed projected subgradient algorithm over time-varying general graph sequences, i.e., the weight matrices of the network are only required to be row stochastic rather than doubly stochastic. We first show that there may exist a graph sequence under which the algorithm fails to converge when the network switches freely within finitely many graphs. Then, to guarantee its convergence under any uniformly jointly strongly connected graph sequence, we provide a necessary and sufficient condition on the cost functions, namely that the intersection of the optimal solution sets of all local optimization problems is nonempty. Furthermore, we find, perhaps surprisingly, that the algorithm converges for any periodically switching graph sequence, but the limit point minimizes a weighted sum of the local cost functions, where the weights depend on the Perron vectors of certain product matrices of the underlying switching graphs. Finally, we consider a slightly broader class of quasi-periodically switching graph sequences, and show that the algorithm converges for any quasi-periodic graph sequence if and only if the network switches between only two graphs. This work helps to clarify the impact of communication networks on the convergence of distributed algorithms, and complements existing results from a different viewpoint.
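For readers unfamiliar with the algorithm discussed above, the following is a minimal illustrative sketch (not the paper's code) of a distributed projected subgradient iteration with row-stochastic weights and periodic switching between two graphs. The local costs, constraint set, step size, and weight matrices are placeholder choices made here for illustration only.

```python
# Sketch of a distributed projected subgradient iteration with row-stochastic
# (not doubly stochastic) weights that switch periodically between two graphs.
# All specific quantities below (costs f_i, box constraint X, step size, A1, A2)
# are assumptions for illustration, not taken from the paper.
import numpy as np

def project_box(x, lo=-5.0, hi=5.0):
    """Euclidean projection onto the box [lo, hi]^n (assumed constraint set X)."""
    return np.clip(x, lo, hi)

def subgradient(x, c):
    """Subgradient of an example local cost f_i(x) = ||x - c_i||_1."""
    return np.sign(x - c)

# Example setup: 3 agents, 2-dimensional decision variable.
rng = np.random.default_rng(0)
centers = np.array([[1.0, 0.0], [0.0, 2.0], [-1.0, 1.0]])  # local targets c_i
x = rng.normal(size=(3, 2))                                  # initial states x_i(0)

# Two row-stochastic weight matrices (rows sum to 1, columns need not).
A1 = np.array([[0.6, 0.4, 0.0],
               [0.0, 0.5, 0.5],
               [0.3, 0.0, 0.7]])
A2 = np.array([[0.5, 0.0, 0.5],
               [0.4, 0.6, 0.0],
               [0.0, 0.2, 0.8]])

for k in range(2000):
    A = A1 if k % 2 == 0 else A2          # periodic switching with period 2
    alpha = 1.0 / (k + 1)                 # diminishing step size
    v = A @ x                             # consensus step: v_i = sum_j a_ij x_j
    g = np.array([subgradient(v[i], centers[i]) for i in range(3)])
    x = project_box(v - alpha * g)        # local subgradient step + projection

print(x)  # agents' states after 2000 iterations
```

As the abstract indicates, with row-stochastic weights such an iteration generally converges to a minimizer of a weighted sum of the local costs rather than of their unweighted sum, with weights tied to Perron vectors of products of the switching weight matrices.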