Abstract

Reinforcement learning has great potential for solving practical problems, but when it is combined with neural networks on small-scale discrete-space problems, it can easily become trapped in a local optimum. Traditional reinforcement learning also relies on the continual updates of a single agent to learn a policy, which often leads to slow convergence. To address these problems, we combine asynchronous methods with existing tabular reinforcement learning algorithms, propose a parallel architecture for the discrete-space path planning problem, and present several new variants of asynchronous reinforcement learning algorithms. We apply these algorithms to standard reinforcement learning benchmark environments, and the experimental results show that they solve discrete-space path planning problems efficiently. One of them, Asynchronous Phased Dyna-Q, outperforms existing asynchronous reinforcement learning algorithms and balances exploration and exploitation well.
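To make the general idea concrete, the sketch below shows plain asynchronous tabular Q-learning: several worker threads act in copies of a discrete environment while updating one shared Q-table. This is only an illustration of the asynchronous-tabular combination the abstract describes, not the authors' Asynchronous Phased Dyna-Q; the GridWorld environment, the hyperparameters, and all names here are illustrative assumptions.

```python
import threading
import numpy as np

# Hypothetical deterministic gridworld: states are cells of an n x n grid,
# actions are up/down/left/right, reward 1 on reaching the goal, 0 otherwise.
class GridWorld:
    def __init__(self, n=5):
        self.n = n
        self.goal = n * n - 1  # bottom-right corner

    def reset(self):
        return 0  # start in the top-left corner

    def step(self, state, action):
        r, c = divmod(state, self.n)
        if action == 0:   r = max(r - 1, 0)           # up
        elif action == 1: r = min(r + 1, self.n - 1)  # down
        elif action == 2: c = max(c - 1, 0)           # left
        else:             c = min(c + 1, self.n - 1)  # right
        nxt = r * self.n + c
        done = nxt == self.goal
        return nxt, (1.0 if done else 0.0), done

def worker(Q, env, lock, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    """One asynchronous learner: epsilon-greedy rollouts with
    Q-learning updates applied to the shared table."""
    rng = np.random.default_rng()
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s]))
            s2, r, done = env.step(s, a)
            with lock:  # serialize updates to the shared Q-table
                Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
            s = s2

env = GridWorld()
Q = np.zeros((env.n * env.n, 4))
lock = threading.Lock()
threads = [threading.Thread(target=worker, args=(Q, env, lock)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("greedy action per state:", np.argmax(Q, axis=1))
```

A Dyna-style variant would additionally store observed transitions in a model and replay simulated updates between real steps; the phased scheme the abstract names presumably adjusts how exploration and such planning updates are scheduled, which the abstract does not detail.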
