Abstract

This paper investigates the problem of distributed stochastic approximation in multi-agent systems. The algorithm under study consists of two steps: a local stochastic approximation step and a gossip step that drives the network toward a consensus. The gossip step uses row-stochastic matrices to weight the network exchanges. We first prove the convergence of a distributed optimization algorithm when the function to optimize may be non-convex and the communication protocol is independent of the observations. In that case, we prove that the average estimate converges to a consensus; we also show that the set of limit points is not necessarily the set of critical points of the function to optimize and is affected by the Perron eigenvector of the mean matrix describing the communication protocol. We also discuss when convergence to the minimizers of the function to optimize succeeds and when it fails. In the second part of the paper, we extend the convergence results to the more general setting of distributed stochastic approximation.
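The two-step structure described above can be illustrated with a minimal sketch. This is not the paper's algorithm in full generality: it is a hypothetical toy instance in which each agent runs a noisy gradient step on its own quadratic objective (the local stochastic approximation step) and then averages its iterate with its neighbors' using a fixed row-stochastic matrix `W` (the gossip step). The objectives, step sizes, and noise model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 4, 2

# Row-stochastic mixing matrix: each row is nonnegative and sums to 1.
# Because W is only row-stochastic (not doubly stochastic), the consensus
# value is weighted by the Perron eigenvector of W, as the abstract notes.
W = rng.random((n_agents, n_agents))
W /= W.sum(axis=1, keepdims=True)

# Illustrative local objectives: agent i minimizes ||x - targets[i]||^2.
targets = rng.normal(size=(n_agents, dim))
x = np.zeros((n_agents, dim))  # one row per agent

for t in range(1, 2001):
    gamma = 1.0 / t                            # decreasing step size
    noise = 0.1 * rng.normal(size=x.shape)     # stochastic gradient noise
    grads = 2.0 * (x - targets) + noise
    x = x - gamma * grads                      # local SA step
    x = W @ x                                  # gossip step (weighted averaging)

# After many iterations the agents' iterates are nearly identical,
# i.e. the network has reached an approximate consensus.
spread = float(np.max(np.abs(x - x.mean(axis=0))))
```

With a decreasing step size, the gossip averaging contracts disagreement faster than the local steps reintroduce it, so `spread` shrinks toward zero while the common limit point depends on the Perron weights rather than on a uniform average of the local objectives.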

