Abstract
Multi-agent reinforcement learning (MARL) under partial observability is notoriously challenging because each agent has only an asymmetric, partial view of the system. In this paper, we study MARL in decentralized partially observable Markov decision processes (Dec-POMDPs) with partial history sharing. In search of decentralized and tractable MARL solutions, we identify conditions under which the common information approach naturally extends existing single-agent policy learners to Dec-POMDPs. In particular, under the conditions of bounded local memories and an efficient representation of the common information, we present a MARL algorithm that learns a near-optimal finite-memory policy in Dec-POMDPs. We establish the iteration complexity of the algorithm, which scales only linearly with the number of agents. Simulations on classic Dec-POMDP tasks show that our approach significantly outperforms existing decentralized solutions and nearly matches centralized ones that require stronger informational assumptions.
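To make the common-information idea concrete, the following Python sketch illustrates the two structural assumptions named above: each agent keeps a bounded local memory, and a coordinator conditions on the shared (common) information to select, for every agent, a prescription mapping local memories to actions. This is an illustrative sketch only, not the paper's algorithm; the class names, the `policy_table`, and the dummy policy in the demo are all hypothetical.

```python
from collections import deque, defaultdict


class FiniteMemoryAgent:
    """Agent with a bounded local memory: a sliding window of size m."""

    def __init__(self, memory_size):
        # deque with maxlen discards the oldest entry once full,
        # enforcing the bounded-local-memory assumption.
        self.memory = deque(maxlen=memory_size)

    def observe(self, obs, action):
        self.memory.append((obs, action))

    def local_state(self):
        # Hashable summary of the private history window.
        return tuple(self.memory)


class CommonInformationCoordinator:
    """Fictitious coordinator in the common information approach.

    Under partial history sharing, the information available to all
    agents is treated as the state of an equivalent single-agent
    problem. The coordinator's policy maps that common information to
    one prescription per agent: a function from the agent's bounded
    local memory to an action.
    """

    def __init__(self, policy_table):
        # policy_table: common_info -> list of per-agent prescriptions,
        # e.g., produced by a single-agent policy learner (hypothetical).
        self.policy_table = policy_table

    def act(self, common_info, agents):
        prescriptions = self.policy_table[common_info]
        return [
            prescriptions[i][agent.local_state()]
            for i, agent in enumerate(agents)
        ]


if __name__ == "__main__":
    # Demo with 3 agents, memory window of 2, and a dummy policy that
    # prescribes action 0 regardless of local memory.
    agents = [FiniteMemoryAgent(memory_size=2) for _ in range(3)]
    policy = defaultdict(lambda: [defaultdict(lambda: 0) for _ in range(3)])
    coordinator = CommonInformationCoordinator(policy)

    for agent in agents:
        agent.observe(obs=1, action=0)

    print(coordinator.act(common_info=(), agents=agents))  # [0, 0, 0]
```

The key design point the sketch tries to convey is that the coordinator is a single decision-maker over the common information, so off-the-shelf single-agent learners can, in principle, be run on this reformulation; the bounded memory window keeps the space of prescriptions finite.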