Abstract

The exploration–exploitation dilemma remains an unresolved issue within the framework of multi-agent reinforcement learning. An agent must either explore, seeking states that may yield higher rewards in the future, or exploit the state that yields the highest reward according to its existing knowledge. Pure exploration degrades the agent's learning but increases its flexibility to adapt in a dynamic environment. On the other hand, pure exploitation drives the agent's learning process towards locally optimal solutions. Various learning policies have been studied to address this issue. This paper presents critical experimental results on a number of learning policies reported in the open literature. The learning policies, namely greedy, ε-greedy, Boltzmann Distribution (BD), Simulated Annealing (SA), Probability Matching (PM) and Optimistic Initial Values (OIV), are implemented to study their performance on a modelled multi-agent foraging task. Based on the numerical results obtained, the performances of the learning policies are discussed.
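To illustrate the kind of action-selection rules being compared, the following is a minimal sketch of two of them, ε-greedy and Boltzmann (softmax) selection, written in Python. The function names, parameters (epsilon, temperature) and Q-values are illustrative assumptions only and do not reflect the paper's actual implementation or experimental settings.

```python
import numpy as np

def epsilon_greedy(q_values, epsilon=0.1, rng=None):
    """With probability epsilon take a random (exploratory) action,
    otherwise exploit the action with the highest estimated value."""
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))   # explore
    return int(np.argmax(q_values))               # exploit

def boltzmann(q_values, temperature=1.0, rng=None):
    """Sample an action with probability proportional to exp(Q/T);
    a higher temperature spreads probability mass and so encourages exploration."""
    rng = rng or np.random.default_rng()
    prefs = np.asarray(q_values, dtype=float) / temperature
    prefs -= prefs.max()                          # subtract max for numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return int(rng.choice(len(q_values), p=probs))

# Example: illustrative estimated action values for a single state
q = [0.2, 0.5, 0.1]
print(epsilon_greedy(q, epsilon=0.2))
print(boltzmann(q, temperature=0.5))
```

In both rules, a tunable parameter (epsilon or the temperature) controls how much probability is given to non-greedy actions, which is the trade-off the abstract describes.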
