Abstract

Policy gradient reinforcement learning techniques enable an agent to learn an optimal action policy directly through interaction with the environment. Despite this advantage, they often suffer from sample inefficiency. Inspired by human decision making, we work toward improving convergence speed by augmenting the agent to memorize and reuse recently learned policies when selecting actions. We apply our method to trust-region policy optimization (TRPO) and develop the faded-experience TRPO (FE-TRPO) algorithm. To substantiate its effectiveness, we use it to learn continuous power control in an interference channel when only noisy location information of the devices is available. Results indicate that FE-TRPO improves learning efficiency compared to TRPO.
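
The abstract does not detail how the recently learned policies are combined, so the following is only a minimal sketch of the general idea, assuming a toy Gaussian policy over a continuous action (e.g., transmit power) and a geometric fading of older policy snapshots; the class and parameter names (GaussianPolicy, FadedExperiencePolicy, fade, memory_size) are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the faded-experience idea: keep snapshots of recently learned
# policies and blend their action means with weights that fade geometrically,
# so older policies contribute less to action selection.
# NOTE: all names and the blending rule are assumptions for illustration only.
import numpy as np
from collections import deque


class GaussianPolicy:
    """Toy linear-Gaussian policy over a continuous action (e.g., transmit power)."""

    def __init__(self, obs_dim, rng):
        self.w = rng.normal(scale=0.1, size=obs_dim)  # action mean = w . obs
        self.log_std = np.log(0.5)

    def mean(self, obs):
        return float(self.w @ obs)


class FadedExperiencePolicy:
    """Keeps the most recent policy snapshots and mixes their action means
    with geometrically fading weights (newest snapshot weighted highest)."""

    def __init__(self, memory_size=4, fade=0.5):
        self.snapshots = deque(maxlen=memory_size)
        self.fade = fade

    def remember(self, policy):
        self.snapshots.append(policy)

    def act(self, obs, rng):
        # Newest snapshot gets weight 1, the next gets fade, then fade^2, ...
        weights = np.array([self.fade ** i for i in range(len(self.snapshots))])
        weights /= weights.sum()
        means = np.array([p.mean(obs) for p in reversed(self.snapshots)])
        blended_mean = float(weights @ means)
        std = np.exp(self.snapshots[-1].log_std)
        return rng.normal(blended_mean, std)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fe = FadedExperiencePolicy()
    for _ in range(3):                       # stand-in for successive TRPO updates
        fe.remember(GaussianPolicy(obs_dim=4, rng=rng))
    obs = rng.normal(size=4)                 # stand-in for noisy device locations
    print("selected power level:", fe.act(obs, rng))
```

In this sketch the snapshots would be refreshed after each TRPO policy update; how FE-TRPO actually weights or reuses past policies is specified in the full paper, not in the abstract.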
