Abstract

Reinforcement learning (RL) is a powerful learning paradigm in which agents can learn to maximize sparse and delayed reward signals. Although RL has had many impressive successes in complex domains, learning can take hours, days, or even years of training data. A major challenge of contemporary RL research is to discover how to learn with less data. Previous work has shown that domain information can be successfully used to shape the reward; by adding additional reward information, the agent can learn with much less data. Furthermore, if the reward is constructed from a potential function, the optimal policy is guaranteed to be unaltered. While such potential-based reward shaping (PBRS) holds promise, it is limited by the need for a well-defined potential function. Ideally, we would like to be able to take arbitrary advice from a human or other agent and improve performance without affecting the optimal policy. The recently introduced dynamic potential-based advice (DPBA) was proposed to tackle this challenge by predicting the potential function values as part of the learning process. However, this article demonstrates theoretically and empirically that, while DPBA can facilitate learning with good advice, it does in fact alter the optimal policy. We further show that when the correction term needed to “fix” DPBA is added, it no longer provides effective shaping with good advice. We then present a simple method called policy invariant explicit shaping (PIES) and show theoretically and empirically that PIES can use arbitrary advice, speed up learning, and leave the optimal policy unchanged.
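
For context, the shaping form the abstract refers to is the potential-based scheme of Ng, Harada, and Russell (1999): a shaping reward F(s, s') = γΦ(s') − Φ(s) is added to the environment reward, which provably leaves the optimal policy unchanged. The sketch below shows this in tabular Q-learning; the chain environment, the potential function phi, and the hyper-parameters are illustrative assumptions, not taken from the paper.

    # Minimal sketch of potential-based reward shaping (PBRS) in tabular
    # Q-learning on a small chain world. The environment, the potential
    # function phi, and the hyper-parameters are illustrative assumptions.
    import random

    N_STATES, GOAL = 6, 5
    GAMMA, ALPHA, EPS = 0.99, 0.1, 0.1
    ACTIONS = [-1, +1]                        # step left / step right

    def phi(s):
        # Any potential over states works; here: closer to the goal is better.
        return -abs(GOAL - s)

    def step(s, a):
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0   # sparse environment reward
        return s_next, reward, s_next == GOAL

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    for episode in range(500):
        s, done = 0, False
        while not done:
            if random.random() < EPS:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s_next, r, done = step(s, a)
            # PBRS adds F(s, s') = gamma * phi(s') - phi(s) to the reward;
            # Ng et al. (1999) show this leaves the optimal policy unchanged.
            shaped = r + GAMMA * phi(s_next) - phi(s)
            bootstrap = 0.0 if done else GAMMA * max(Q[(s_next, act)] for act in ACTIONS)
            Q[(s, a)] += ALPHA * (shaped + bootstrap - Q[(s, a)])
            s = s_next

DPBA, as described in the abstract, replaces the fixed potential Φ with one predicted online from the advice; the article's theoretical and empirical sections examine why that dynamic variant is no longer policy invariant.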

Highlights

  • A reinforcement learning (RL) agent interacts with its surrounding environment by taking actions and receiving rewards and observations in return

  • We introduce a simple algorithm, policy invariant explicit shaping (PIES), and show that PIES can incorporate arbitrary advice, is policy invariant, and can accelerate the learning of an RL agent

Summary

Introduction

An RL agent interacts with its surrounding environment by taking actions and receiving rewards and observations in return. In a classic example, researchers used reward shaping to speed up an RL agent learning to ride a bicycle: when they gave positive reinforcement for transitions toward the goal, the agent was misled into riding in a loop, accumulating the positive rewards over and over. The rest of the paper is organized as follows: first, we provide background on RL and reward shaping; we then focus on DPBA as a specific type of reward shaping and present experimental results (with good advice) that are central to our contributions; next, we study in depth how DPBA can change the optimal policy and contrast those (bad advice) findings with the previous section; lastly, we introduce PIES, our alternative that satisfies all of the goals we have set, and explain how it overcomes the aforementioned drawbacks of previous methods.
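
To make the loop failure concrete, here is a small numerical sketch (an illustration, not taken from the paper): it sums shaping bonuses around a three-state cycle, comparing a naive “progress” bonus with the potential-based form. The cycle, the potential values, and the bonus magnitude are all assumptions chosen for illustration.

    # Illustrative sketch of the "loop" failure: summing shaping rewards
    # around a cycle of states. The three-state cycle and the potential
    # values below are assumed purely for illustration.
    GAMMA = 1.0                     # undiscounted, for clarity
    cycle = [0, 1, 2, 0]            # the agent ends up where it started

    def naive_bonus(s, s_next):
        # A fixed +1 whenever the agent moves, meant to reward "progress".
        return 1.0 if s_next != s else 0.0

    def pbrs_bonus(s, s_next, phi):
        # Potential-based form F(s, s') = gamma * phi(s') - phi(s).
        return GAMMA * phi[s_next] - phi[s]

    phi = {0: 0.0, 1: 1.0, 2: 2.0}  # example potentials ("progress" values)

    naive_total = sum(naive_bonus(s, s2) for s, s2 in zip(cycle, cycle[1:]))
    pbrs_total = sum(pbrs_bonus(s, s2, phi) for s, s2 in zip(cycle, cycle[1:]))

    print(naive_total)  # 3.0 -> the cycle itself is profitable: ride in circles
    print(pbrs_total)   # 0.0 -> potentials telescope; loops earn no shaping reward

The naive bonus makes the cycle itself profitable, which is exactly the bicycle-loop behaviour described above, whereas the potential-based terms telescope so that no closed loop can accumulate shaping reward.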

Background
Potential-based reward shaping
Dynamic potential-based reward shaping
Theoretical proof
DPBA can affect the optimal policy
Empirical validation: unhelpful advice
Empirical validation: helpful advice
Policy invariant explicit shaping
Related work
Conclusion and discussion