Abstract

Reinforcement learning has great potential for solving practical problems, but when it is combined with neural networks to solve small-scale discrete-space problems, it can easily become trapped in a local minimum. Traditional reinforcement learning relies on a single agent updating its policy sequentially, which often leads to slow convergence. To address these problems, we combine asynchronous methods with existing tabular reinforcement learning algorithms, propose a parallel architecture for solving discrete-space path planning problems, and present several new variants of asynchronous reinforcement learning algorithms. We apply these algorithms to standard reinforcement learning benchmark problems, and the experimental results show that they solve discrete-space path planning problems efficiently. One of these algorithms, Asynchronous Phased Dyna-Q, surpasses existing asynchronous reinforcement learning algorithms and balances exploration and exploitation well.
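The paper's own algorithms, including Asynchronous Phased Dyna-Q, are not reproduced here, but the following minimal sketch illustrates the general idea of asynchronous tabular reinforcement learning that the abstract describes: several worker threads share a single Q-table and update it concurrently while exploring a small path planning task. The gridworld, reward scheme, and hyperparameters (SIZE, ALPHA, GAMMA, EPSILON) are illustrative assumptions, not values taken from the paper.

```python
import threading
import random

# Minimal sketch of asynchronous tabular Q-learning on a hypothetical
# 5x5 gridworld (start (0,0), goal (4,4)). All constants are assumptions.
SIZE = 5
GOAL = (SIZE - 1, SIZE - 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Shared Q-table: one list of action values per grid cell.
Q = {(r, c): [0.0] * len(ACTIONS) for r in range(SIZE) for c in range(SIZE)}
lock = threading.Lock()  # guards concurrent updates to the shared table

def step(state, a):
    """Apply action a; moves off the grid leave the agent in place."""
    r = max(0, min(SIZE - 1, state[0] + ACTIONS[a][0]))
    c = max(0, min(SIZE - 1, state[1] + ACTIONS[a][1]))
    nxt = (r, c)
    return nxt, (1.0 if nxt == GOAL else -0.01), nxt == GOAL

def worker(episodes):
    for _ in range(episodes):
        s, done = (0, 0), False
        while not done:
            # epsilon-greedy action selection against the shared table
            if random.random() < EPSILON:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
            s2, rwd, done = step(s, a)
            with lock:  # asynchronous but thread-safe Q-learning update
                target = rwd + (0.0 if done else GAMMA * max(Q[s2]))
                Q[s][a] += ALPHA * (target - Q[s][a])
            s = s2

threads = [threading.Thread(target=worker, args=(500,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("Q(start):", [round(q, 3) for q in Q[(0, 0)]])
```

In CPython the interpreter lock serializes bytecode execution, so the threads interleave rather than run truly in parallel; even so, the workers' independently unfolding episodes feed diverse experience into one shared table, which is the intuition behind the convergence-speed benefit that asynchronous methods are claimed to offer over a single sequentially updating agent.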
