Abstract
One of the conditions for the convergence of Q-Learning is that each state-action pair is visited infinitely (or at least sufficiently) often. This requirement raises problems for large or continuous state spaces. In particular, in continuous state spaces, a discretization fine enough to capture all relevant information usually results in an extremely large state space. To speed up and improve learning, it is highly beneficial to add generalization to Q-Learning and thus be able to exploit experience gained earlier. To achieve this, we compute a state space abstraction using a combination of growing neural gas and Q-Learning. This abstraction respects similarity in the state and action space and is constructed from information obtained through interaction with the environment during learning. We evaluate the proposed algorithm on a continuous-state reinforcement learning problem and show that the abstracted state space and the added generalization speed up learning.
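The abstract gives no implementation details, so the following is only a minimal sketch of the general idea it describes: a simplified growing-neural-gas quantizer whose units serve as discrete states for tabular Q-Learning, with new units inheriting the Q-values of their neighbours so that earlier experience generalizes. The class name, the environment-free interface, and all hyperparameters and thresholds are illustrative assumptions, not the authors' method.

```python
# Sketch only: GNG units discretize a continuous state space; tabular Q-Learning
# runs over the index of the nearest unit. All parameters are illustrative.
import numpy as np

class GNGQ:
    def __init__(self, n_actions, dim, max_units=50,
                 eps_b=0.05, eps_n=0.005, alpha=0.1, gamma=0.95):
        self.units = [np.zeros(dim), np.ones(dim)]     # start with two units
        self.error = [0.0, 0.0]                        # accumulated quantization error
        self.Q = [np.zeros(n_actions), np.zeros(n_actions)]
        self.n_actions, self.max_units = n_actions, max_units
        self.eps_b, self.eps_n = eps_b, eps_n          # winner / runner-up step sizes
        self.alpha, self.gamma = alpha, gamma          # Q-Learning step size / discount

    def nearest(self, s):
        d = [np.linalg.norm(s - u) for u in self.units]
        order = np.argsort(d)
        return order[0], order[1], d[order[0]]

    def discretize(self, s):
        """Map a continuous state to the index of its nearest GNG unit."""
        best, second, dist = self.nearest(s)
        # Move winner and runner-up toward the sample (simplified GNG adaptation).
        self.units[best] += self.eps_b * (s - self.units[best])
        self.units[second] += self.eps_n * (s - self.units[second])
        self.error[best] += dist
        # Insert a new unit once a unit has accumulated enough error (assumed threshold).
        if len(self.units) < self.max_units and self.error[best] > 5.0:
            self.units.append(0.5 * (self.units[best] + self.units[second]))
            self.error[best] = 0.0
            self.error.append(0.0)
            # The new unit inherits the winner's Q-values, so experience generalizes.
            self.Q.append(self.Q[best].copy())
        return best

    def update(self, s, a, r, s_next, done):
        """Standard Q-Learning update over the abstracted (unit-index) states."""
        i, j = self.discretize(s), self.discretize(s_next)
        target = r if done else r + self.gamma * np.max(self.Q[j])
        self.Q[i][a] += self.alpha * (target - self.Q[i][a])

    def act(self, s, epsilon=0.1):
        """Epsilon-greedy action selection on the nearest unit's Q-values."""
        i, _, _ = self.nearest(s)
        if np.random.rand() < epsilon:
            return np.random.randint(self.n_actions)
        return int(np.argmax(self.Q[i]))
```

In a training loop, each observed continuous state would be passed to `act` and `update`; the abstraction grows where quantization error accumulates, so resolution concentrates in the regions of the state space that the agent actually visits.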