Abstract

The impetus for exploration in reinforcement learning (RL) is to decrease uncertainty about the environment for the purpose of better decision making. As such, exploration plays a crucial role in the efficiency of RL algorithms. In this dissertation, I consider continuous-state control problems and introduce a new methodology for representing uncertainty that engenders more efficient algorithms. I argue that this new notion of uncertainty allows for more efficient use of function approximation, which is essential for learning in continuous spaces. In particular, I focus on the class of algorithms known as model-based methods and develop several such algorithms that are much more efficient than current state-of-the-art methods. These algorithms attack the long-standing “curse of dimensionality”, whereby learning complexity often scales exponentially with problem dimensionality. I introduce algorithms that exploit the dependency structure between state variables to exponentially decrease the sample complexity of learning, both when the dependency structure is provided by the user a priori and when the algorithm must discover it on its own. I also use the new uncertainty notion to derive a multi-resolution exploration scheme and demonstrate how this technique achieves anytime behavior, which is important in real-life applications. Finally, through a set of rich experiments, I show how the new exploration mechanisms affect the efficiency of learning, especially in real-life domains where acquiring samples is expensive.
