Abstract

In this paper, we study the distributed optimization problem for a system of agents embedded in time-varying directed communication networks. Each agent has its own cost function, and the agents cooperate to determine the global decision that minimizes the sum of all individual cost functions. We consider the so-called push-pull gradient-based algorithm (termed AB/Push-Pull), which employs both row- and column-stochastic weights simultaneously to track the optimal decision and the gradient of the global cost while ensuring consensus and optimality. We show that the algorithm converges linearly to the optimal solution over a time-varying directed network for a constant stepsize when each agent's cost function is smooth and strongly convex. The linear convergence of the method was shown in [F. Saadatniaki, R. Xin, and U.A. Khan, Decentralized optimization over time-varying directed graphs with row and column-stochastic matrices, IEEE Trans. Autom. Control 65(11) (2020), pp. 4769–4780], where the multi-step consensus contraction parameters for row- and column-stochastic mixing matrices are not directly related to the underlying graph structure, and no explicit range for the stepsize value is provided. With respect to that work, the novelty of this paper is twofold: (1) we establish one-step consensus contraction for both row- and column-stochastic mixing matrices, with the contraction parameters given explicitly in terms of the graph diameter and other graph properties; and (2) we provide explicit upper bounds on the stepsize value in terms of the properties of the cost functions, the mixing matrices, and the graph connectivity structure.
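To make the iteration described above concrete, the following is a minimal sketch of the AB/Push-Pull update on a static strongly connected directed graph (the paper treats time-varying graphs; a fixed graph keeps the sketch short). The graph, the quadratic cost functions, and the stepsize below are illustrative assumptions, not taken from the paper: agent i holds f_i(x) = 0.5*(x - b[i])^2, so the global minimizer of the sum is mean(b).

```python
import numpy as np

# Illustrative problem data (assumed, not from the paper):
# agent i minimizes f_i(x) = 0.5 * (x - b[i])**2, so the
# global optimum of sum_i f_i is mean(b) = 2.5.
n = 4
b = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda x: x - b                    # stacked local gradients

# adj[i, j] = 1 iff agent i receives from agent j (self-loops included);
# edges 0->1, 1->2, 2->3, 3->0, 0->2 give a strongly connected digraph.
adj = np.array([[1, 0, 0, 1],
                [1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 0, 1, 1]], dtype=float)
A = adj / adj.sum(axis=1, keepdims=True)  # row-stochastic ("pull" for decisions)
B = adj / adj.sum(axis=0, keepdims=True)  # column-stochastic ("push" for gradients)

alpha = 0.1                               # constant stepsize, chosen small here;
                                          # the paper derives explicit upper bounds
x = np.zeros(n)                           # local decision estimates
y = grad(x)                               # gradient trackers, y_0 = local gradients

for _ in range(2000):
    x_new = A @ x - alpha * y             # pull step: mix decisions, then descend
    y = B @ y + grad(x_new) - grad(x)     # push step: track the average gradient
    x = x_new

print(x)  # every agent approaches the optimum mean(b) = 2.5
```

The column-stochastic matrix B preserves the sum of the trackers, so sum(y_k) always equals the sum of the current local gradients; this conserved quantity is what forces the common limit of the x-iterates to be the true minimizer rather than an arbitrary consensus point.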
