Abstract

In the family of Learning Classifier Systems, the classifier system XCS is the most widely used and investigated. However, the standard XCS has difficulty solving large multi-step problems, where long action chains are needed to reach delayed rewards. To date, the reinforcement learning technique in XCS has been based on Q-learning, which optimizes the discounted total reward received by an agent but tends to limit the length of action chains. Undiscounted reinforcement learning methods exist, however, such as R-learning and average-reward reinforcement learning in general, which optimize the average reward per time step. In this paper, R-learning replaces Q-learning as the reinforcement learning technique employed by XCS. The modification results in a classifier system that learns rapidly and is able to solve large maze problems. In addition, it produces uniformly spaced payoff levels, which support long action chains and thus effectively prevent overgeneralization.
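For concreteness, the textbook forms of the two update rules contrasted in the abstract are sketched below; how R-learning is embedded in XCS's classifier-prediction updates is specific to the paper and is not reproduced here. The symbols follow the standard formulations of Watkins' Q-learning and Schwartz's R-learning: $\alpha$ and $\beta$ are learning rates, $\gamma$ is the discount factor, and $\rho$ is the running estimate of the average reward per time step.

Q-learning (discounted total reward):
\[
Q(s,a) \leftarrow Q(s,a) + \alpha \left[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right]
\]

R-learning (average reward, undiscounted):
\[
R(s,a) \leftarrow R(s,a) + \alpha \left[ r - \rho + \max_{a'} R(s',a') - R(s,a) \right]
\]
\[
\rho \leftarrow \rho + \beta \left[ r - \rho + \max_{a'} R(s',a') - \max_{a} R(s,a) \right]
\]
where the $\rho$ update is applied only on steps where the greedy action was taken. Because R-learning subtracts a constant $\rho$ rather than discounting by $\gamma^t$, the relative values of states far from the reward do not decay geometrically, which is consistent with the abstract's claim of uniformly spaced payoff levels along long action chains.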
