Abstract

Enabling a team of unmanned aerial vehicles to gather up-to-date situational awareness in dynamic environments is a challenging problem. To address it, this paper presents a novel algorithm for the multiagent information gathering problem. First, the physical environment is modeled as an undirected graph in which the information at each vertex evolves according to a multistate Markov chain. Each agent is allocated to a designated area, and the objective of the team is to gather as much valuable information as possible. Second, the problem is formulated as a factored multiagent partially observable Markov decision process. A scalable centralized online planning algorithm is then proposed that iteratively computes a patrolling route for each agent in a greedy fashion; the resulting patrols are boundedly optimal under specific conditions. Finally, the algorithm is evaluated empirically on multiagent information gathering scenarios by benchmarking it against state-of-the-art online planning solvers, namely partially observable Monte Carlo planning and factored-value partially observable Monte Carlo planning. Experimental results show that the algorithm typically performs at least 9.04% better than these solvers on the four-agent patrolling problem with a coupling degree of three, and scales up to 100 agents with complex coupling relationships.
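The greedy, per-agent route computation described above can be illustrated with a minimal sketch. Here the expected information gain under the factored POMDP is replaced by static vertex values, and all function names (`enumerate_routes`, `greedy_patrols`) and the fixed route length are illustrative assumptions, not the paper's actual implementation: each agent in turn is assigned the route with the highest marginal value given the vertices already covered by earlier agents.

```python
def enumerate_routes(adj, start, length):
    """All walks of a fixed length from `start` in the patrol graph
    (illustrative: the paper's planner searches routes differently)."""
    routes = [[start]]
    for _ in range(length):
        routes = [r + [n] for r in routes for n in adj[r[-1]]]
    return routes

def greedy_patrols(adj, value, starts, length):
    """Sequentially assign each agent the route with the highest
    marginal value, given vertices covered by earlier agents.
    Static `value` stands in for expected information gain under
    the multistate Markov chain at each vertex."""
    covered = set()
    patrols = []
    for s in starts:
        best = max(enumerate_routes(adj, s, length),
                   key=lambda r: sum(value[v] for v in set(r) - covered))
        covered |= set(best)
        patrols.append(best)
    return patrols

# Toy 4-cycle graph with two agents starting at opposite vertices.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
value = {0: 1, 1: 5, 2: 2, 3: 4}
print(greedy_patrols(adj, value, [0, 2], 2))  # → [[0, 1, 2], [2, 3, 2]]
```

The sequential structure is what makes the approach scale: each agent's route is computed against the routes already committed, so the cost grows linearly in the number of agents rather than exponentially in the joint action space.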
