Abstract

Sparse interaction in multiagent tasks is an important approach to reducing the exponential computational cost of multiagent reinforcement learning (MARL) systems. Selecting proper equilibrium solutions is key to finding the optimal policy and improving learning performance when collisions occur. We propose a new MARL algorithm, Efficient Coordination based MARL with Sparse Interactions (ECoSI), which combines the sparse interaction framework with an efficient coordination mechanism in which equilibrium solutions are selected via the Nash equilibrium and the Chicken game. ECoSI not only separates the Q-value update rule for joint states from that for non-joint states under sparse interactions, achieving lower computational and storage complexity, but also exploits efficient coordination through equilibrium solutions to find the optimal policy. Experimental results demonstrate the effectiveness and robustness of ECoSI compared with other state-of-the-art MARL algorithms.

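The core idea summarized above, keeping a joint Q-table only for the sparse set of interaction (joint) states and falling back to ordinary single-agent updates elsewhere, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the names JOINT_STATES, q_single, and q_joint, the state/action sizes, and the placeholder max-based value used in place of ECoSI's Nash/Chicken-game equilibrium selection are all assumptions.

import numpy as np

N_STATES, N_ACTIONS = 25, 4   # e.g. a small grid world per agent (assumed)
ALPHA, GAMMA = 0.1, 0.9       # learning rate and discount factor (assumed)

# Non-joint states: each agent keeps an ordinary single-agent Q-table.
q_single = np.zeros((N_STATES, N_ACTIONS))

# Joint states (where agents may collide): a joint Q-table over both agents'
# actions is kept only for this small set of states, which is what keeps
# computation and storage from growing exponentially with the number of agents.
JOINT_STATES = {12, 13, 17}  # assumed sparse set of interaction states
q_joint = {s: np.zeros((N_ACTIONS, N_ACTIONS)) for s in JOINT_STATES}


def update(state, action, other_action, reward, next_state):
    """Apply the appropriate Q-learning update depending on whether the
    current state is a joint (interaction) state or not."""
    if state in JOINT_STATES:
        # Joint update: bootstrap from the value of the next state's stage game.
        # ECoSI selects this value via Nash-equilibrium / Chicken-game solutions;
        # a simple max over the joint table is used here as a placeholder.
        next_value = (q_joint[next_state].max()
                      if next_state in JOINT_STATES
                      else q_single[next_state].max())
        td_target = reward + GAMMA * next_value
        q_joint[state][action, other_action] += ALPHA * (
            td_target - q_joint[state][action, other_action])
    else:
        # Ordinary independent Q-learning update in non-joint states.
        td_target = reward + GAMMA * q_single[next_state].max()
        q_single[state][action] += ALPHA * (td_target - q_single[state][action])

In this sketch only the handful of states in JOINT_STATES pay the quadratic cost of a joint action table; all other states use the single-agent update, which mirrors the separation of update rules described in the abstract.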