Abstract
In this paper, we consider a distributed convex optimization problem for a multi-agent system, in which the global objective function is the sum of the agents' individual objective functions. To solve this problem, we propose a distributed stochastic subgradient algorithm with a random sleep scheme, in which each agent independently and randomly decides whether to query the subgradient of its local objective function at each iteration. The algorithm not only generalizes distributed algorithms with variable working nodes and multi-step consensus-based algorithms, but also extends some existing randomized convex set intersection results. We investigate the convergence properties of the algorithm under two types of stepsizes: a randomized diminishing stepsize that is heterogeneous and computed by each agent individually, and a fixed stepsize that is homogeneous across agents. Under the randomized stepsize, we prove that the agents' estimates reach consensus almost surely and in mean, and that the consensus point is an optimal solution with probability 1. Moreover, under the fixed homogeneous stepsize, we derive an error bound for the algorithm and show how the error depends on the stepsize and the update rates.
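To make the random sleep scheme concrete, the following is a minimal sketch (not taken from the paper, and simplified) of a consensus-based subgradient iteration in which each agent wakes up and queries its local subgradient only with some probability. The mixing matrix `W`, wake probabilities `p`, and the diminishing stepsize schedule used here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def random_sleep_subgradient(subgrads, W, p, x0, num_iters=1000, seed=0):
    """Illustrative sketch of a distributed subgradient method with a
    random sleep scheme (details differ from the paper).

    subgrads : list of callables; subgrads[i](x) returns a subgradient
               of agent i's local objective f_i at the point x.
    W        : (n, n) doubly stochastic mixing matrix of the network.
    p        : length-n array; p[i] is agent i's update (wake) rate.
    x0       : (n, d) array of initial estimates, one row per agent.
    """
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    n = x.shape[0]
    for k in range(1, num_iters + 1):
        # Consensus step: each agent averages its neighbors' estimates.
        x = W @ x
        # Diminishing stepsize (homogeneous here, for simplicity;
        # the paper also studies a heterogeneous randomized variant).
        alpha = 1.0 / k
        for i in range(n):
            # Random sleep: agent i queries its local subgradient
            # only with probability p[i]; otherwise it stays idle.
            if rng.random() < p[i]:
                x[i] -= alpha * subgrads[i](x[i])
    return x

# Example: n agents minimizing sum_i |x - c_i| (optimum: median of c).
n = 5
c = np.array([0.0, 1.0, 2.0, 3.0, 10.0])
subgrads = [lambda x, ci=ci: np.sign(x - ci) for ci in c]
W = np.full((n, n), 1.0 / n)   # complete-graph uniform averaging
p = np.full(n, 0.5)            # each agent wakes with probability 0.5
x0 = np.zeros((n, 1))
print(random_sleep_subgradient(subgrads, W, p, x0))  # rows approach 2.0
```

In this toy run, all agents' estimates should approach the median of the `c` values, even though each agent skips roughly half of its subgradient queries, which is the intuition behind the random sleep scheme.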