Abstract
In this paper, we consider a distributed convex optimization problem for a multi-agent system, in which the global objective function is the sum of the agents’ individual objective functions. To solve this problem, we propose a distributed stochastic sub-gradient algorithm with a random sleep scheme, in which each agent independently and randomly decides whether to query the sub-gradient of its local objective function at each iteration. The algorithm not only generalizes distributed algorithms with variable working nodes and multi-step consensus-based algorithms, but also extends some existing randomized convex set intersection results. We investigate the convergence properties of the algorithm under two types of stepsize: a randomized diminishing stepsize that is heterogeneous and computed by each agent individually, and a fixed homogeneous stepsize. Under the randomized stepsize, we prove that the agents’ estimates reach consensus almost surely and in mean, and that the consensus point is the optimal solution with probability 1. Moreover, under the fixed homogeneous stepsize, we derive an error bound for the algorithm and show how the error depends on the stepsize and the update rates.
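For concreteness, a minimal sketch of one plausible form of the random-sleep update is the following; the notation (mixing weights a_{ij}(k), wake indicator chi_i(k), update rate gamma_i, stepsize alpha_i(k), and sub-gradient g_i(k)) is illustrative and not necessarily the paper's exact recursion.

% A minimal sketch, assuming a_{ij}(k) are consensus mixing weights,
% chi_i(k) is agent i's independent Bernoulli wake indicator with
% update rate gamma_i, alpha_i(k) is the (randomized diminishing or
% fixed) stepsize, and g_i(k) is a local sub-gradient queried only
% when the agent is awake.
\[
  v_i(k) = \sum_{j=1}^{n} a_{ij}(k)\, x_j(k), \qquad
  x_i(k+1) = v_i(k) - \chi_i(k)\,\alpha_i(k)\, g_i(k),
\]
\[
  \chi_i(k) \sim \operatorname{Bernoulli}(\gamma_i), \qquad
  g_i(k) \in \partial f_i\bigl(v_i(k)\bigr).
\]

Under this reading, an agent that sleeps at iteration k (chi_i(k) = 0) still averages with its neighbors but skips the sub-gradient query, which is what allows the scheme to model variable working nodes.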