Abstract

This paper describes a method for controlling the density distribution of a large number of autonomous agents. Our approach leverages the fact that, with a large number of agents in the system, the time evolution of the agents' probability density distribution can be described as a Markov chain. Once this description is obtained, a Markov chain matrix is synthesized to drive the multi-agent system, in a probabilistic sense, to a desired steady-state density distribution while satisfying motion and conflict-avoidance constraints. We then introduce an adaptive density control method that uses real-time density feedback to synthesize a time-varying Markov matrix, which improves convergence to the desired density distribution. The paper also introduces a decentralized density computation method, which guarantees that all agents obtain a best, and common, density estimate in a finite number of communication updates, with an explicit bound on that number.
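The core idea of propagating a swarm density with a synthesized Markov matrix can be sketched numerically. The snippet below is an illustrative construction, not the paper's synthesis procedure: it uses a standard Metropolis-Hastings construction on a hypothetical 4-bin line graph (the bins, graph, and target density are all assumptions for illustration) to build a column-stochastic matrix `M` whose stationary distribution equals the desired density `v`, with transitions restricted to graph edges as a stand-in for motion constraints. Iterating the density propagation x[k+1] = M x[k] then drives any initial density toward `v`.

```python
import numpy as np

def metropolis_matrix(v, adjacency):
    """Column-stochastic Markov matrix M with stationary distribution v
    (M @ v = v), allowing transitions only along edges of `adjacency`."""
    n = len(v)
    deg = adjacency.sum(axis=0) - 1            # neighbor counts, excluding self-loops
    M = np.zeros((n, n))
    for j in range(n):                          # j = current bin
        for i in np.flatnonzero(adjacency[:, j]):
            if i == j:
                continue
            # uniform proposal over neighbors, Metropolis-Hastings acceptance;
            # detailed balance v_j*M[i,j] = v_i*M[j,i] gives stationarity of v
            M[i, j] = (1.0 / deg[j]) * min(1.0, (v[i] / deg[i]) / (v[j] / deg[j]))
        M[j, j] = 1.0 - M[:, j].sum()           # remaining probability stays in bin j
    return M

# 4 bins on a line graph (self-loops on the diagonal), desired steady-state density v
v = np.array([0.1, 0.2, 0.3, 0.4])
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]])
M = metropolis_matrix(v, A)

x = np.full(4, 0.25)                            # initial uniform density
for _ in range(1000):
    x = M @ x                                   # density propagation x[k+1] = M x[k]
# x converges to v = [0.1, 0.2, 0.3, 0.4]
```

This sketch captures only the density-propagation and steady-state aspect of the abstract; the paper's actual synthesis additionally enforces the stated motion and conflict-avoidance constraints, and its adaptive variant recomputes the matrix from real-time density feedback.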

