Abstract

Deep reinforcement learning (DRL) has achieved remarkable results in artificial intelligence. However, it relies on stochastic exploration, which is inefficient, especially in the early stages of learning, where the time complexity can grow nearly exponentially. To address this problem, an algorithm referred to as Generative Action Selection through Probability (GRASP) is proposed to improve exploration in reinforcement learning. The primary insight is to reshape the exploration space, limiting the exploratory behaviors the agent can choose. More specifically, GRASP trains a generator with a generative adversarial network (GAN) to produce exploration spaces from demonstrations. The agent then selects actions from these new exploration spaces via a modified ϵ-greedy algorithm, which allows GRASP to be combined with existing standard deep reinforcement learning algorithms. Experimental results show that deep reinforcement learning equipped with GRASP achieves significant improvements in simulated environments.
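The modified ϵ-greedy selection described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `generator` argument stands in for the GAN trained on demonstrations, and the Q-value table and toy action names are hypothetical.

```python
import random

def grasp_epsilon_greedy(state, q_values, generator, epsilon=0.1):
    """Modified epsilon-greedy: exploration is restricted to a generated subspace.

    q_values:  dict mapping action -> estimated Q(state, action)
    generator: hypothetical stand-in for the GAN that maps a state to a
               reduced set of candidate exploratory actions
    """
    if random.random() < epsilon:
        # Explore, but only within the generator's proposed exploration space
        candidates = generator(state)
        return random.choice(candidates)
    # Exploit: greedy action over the full action set, as in standard DRL
    return max(q_values, key=q_values.get)

# Toy usage (all names below are illustrative assumptions)
actions = ["left", "right", "up", "down"]
q = {a: i * 0.1 for i, a in enumerate(actions)}  # "down" has the highest Q
toy_generator = lambda state: ["left", "right"]  # pretend demonstrations favor these
action = grasp_epsilon_greedy(state=None, q_values=q, generator=toy_generator)
```

Because exploration draws only from the generator's candidates, random actions are concentrated on demonstration-like behavior, while exploitation remains the usual greedy choice, so the scheme drops into any standard ϵ-greedy DRL loop.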
