Abstract

This paper presents a novel multiagent reinforcement learning algorithm, State Elimination in Accelerated Multiagent Reinforcement Learning (SEA-MRL), which accelerates learning without incorporating internal knowledge or human intervention, such as reward shaping, transfer learning, parameter tuning, or heuristics, into the learning system. Since learning speed is determined, among other factors, by the size of the state space, with larger state spaces generally yielding slower learning, reducing the state space can lead to faster convergence. SEA-MRL distinguishes insignificant states from significant ones and eliminates the former during early learning episodes, aggressively reducing the scale of the state space for the remaining episodes. Applied to gridworld multi-robot navigation, SEA-MRL reaches learning convergence 1.62 times faster. The algorithm is generally applicable to other multiagent tasks, and to multiagent learning with large state spaces in general; it is also applicable without any adjustment to the single-agent setting.
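The abstract does not detail SEA-MRL's significance test, but the core idea, pruning insignificant states after the early episodes so that later episodes search a smaller state space, can be sketched on top of tabular Q-learning. In the sketch below, the gridworld, the threshold `q_threshold`, the episode at which elimination occurs, and the use of a near-zero best Q-value as the insignificance criterion are all illustrative assumptions, not the paper's actual method.

```python
import random
from collections import defaultdict

# Hypothetical sketch: tabular Q-learning on a gridworld with a one-shot
# state-elimination step. The pruning criterion below (best Q-value still
# near zero after the early episodes) is an assumption for illustration;
# the paper's abstract does not specify SEA-MRL's significance test.

SIZE, GOAL = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Deterministic grid move; reward 1.0 only on reaching the goal."""
    r, c = state
    dr, dc = action
    nxt = (min(max(r + dr, 0), SIZE - 1), min(max(c + dc, 0), SIZE - 1))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=300, elimination_episode=50,
          alpha=0.1, gamma=0.95, epsilon=0.2, q_threshold=1e-3):
    Q = defaultdict(lambda: [0.0] * len(ACTIONS))
    eliminated = set()  # states removed from the effective state space

    for ep in range(episodes):
        state = (0, 0)
        for _ in range(200):  # episode step cap
            # Epsilon-greedy over actions whose successor states were not
            # pruned; fall back to all actions if everything is pruned.
            candidates = [i for i, a in enumerate(ACTIONS)
                          if step(state, a)[0] not in eliminated]
            candidates = candidates or list(range(len(ACTIONS)))
            if random.random() < epsilon:
                action = random.choice(candidates)
            else:
                action = max(candidates, key=lambda i: Q[state][i])
            nxt, reward, done = step(state, ACTIONS[action])
            Q[state][action] += alpha * (reward + gamma * max(Q[nxt])
                                         - Q[state][action])
            state = nxt
            if done:
                break

        # After the early episodes, prune "insignificant" states: those whose
        # best learned value is still negligible. All later episodes then
        # search a smaller state space, which is the source of the speed-up.
        if ep == elimination_episode:
            eliminated = {s for s, qs in Q.items()
                          if max(qs) < q_threshold and s != GOAL}
    return Q, eliminated

if __name__ == "__main__":
    Q, eliminated = train()
    print(f"pruned {len(eliminated)} of {SIZE * SIZE} states")
```

In this sketch the elimination happens once, at a fixed episode; a staged or repeated elimination schedule would fit the same loop structure, but how SEA-MRL actually schedules it is not stated in the abstract.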
