Abstract

This paper introduces a novel hierarchical decomposition approach for solving Multiagent Markov Decision Processes (MMDPs) by exploiting coupling relationships in the reward function. The MMDP is a natural framework for stochastic multi-stage multiagent decision-making problems, such as optimizing the mission performance of Unmanned Aerial Vehicles (UAVs) with stochastic health dynamics. However, computing optimal solutions is often intractable because the state-action space scales exponentially with the number of agents. Approximate solution techniques exist, but they typically rely on extensive domain knowledge. This paper presents the Hierarchically Decomposed MMDP (HD-MMDP) algorithm, which autonomously identifies different degrees of coupling in the reward function and decomposes the MMDP into a hierarchy of smaller MDPs that can be solved separately. Solutions to the smaller MDPs are embedded in an autonomously constructed tree structure to generate an approximate solution to the original problem. Simulation results show that HD-MMDP obtains more cumulative reward than an existing algorithm on a ten-agent Persistent Search and Track (PST) mission, a cooperative multi-UAV mission with more than 10^19 states, a stochastic fuel consumption model, and a health progression model.
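To illustrate why decomposing by reward coupling avoids the exponential blow-up, here is a minimal sketch assuming the simplest case: a fully decoupled reward R(s, a) = sum_i R_i(s_i, a_i) with independent per-agent dynamics. Each agent's small MDP is then solved separately by standard value iteration, and the joint value is the sum of the per-agent values. The toy dynamics and all names below are illustrative assumptions, not the paper's actual HD-MMDP algorithm, which additionally handles partially coupled reward terms via a tree of subproblems.

```python
def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-8):
    """Standard value iteration for a small finite MDP.

    P[s][a] -> list of (prob, next_state); R[s][a] -> immediate reward.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy two-state, two-action MDP shared by every agent (hypothetical dynamics):
# action 1 moves toward / stays in the rewarding state 1, action 0 resets to 0.
states, actions = [0, 1], [0, 1]
P = {0: {0: [(1.0, 0)], 1: [(0.8, 1), (0.2, 0)]},
     1: {0: [(1.0, 0)], 1: [(1.0, 1)]}}
R = {0: {0: 0.0, 1: 0.0}, 1: {0: 0.0, 1: 1.0}}

# Solve one small per-agent MDP instead of the exponential joint MDP.
V_i = value_iteration(states, actions, P, R)

def joint_value(joint_state):
    """With a fully decoupled reward, the joint optimal value is the sum
    of per-agent values -- no joint state enumeration is needed."""
    return sum(V_i[s] for s in joint_state)

print(joint_value((1, 1, 1)))  # each agent contributes 1/(1-gamma) = 20
```

For three agents the sketch solves three two-state problems rather than one eight-state joint problem; with ten agents and richer per-agent state (as in the PST mission) the same separation is what keeps the subproblems tractable.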
