Abstract

Reinforcement learning consists of finding policies that maximize an expected cumulative long-term reward in a Markov decision process with unknown transition probabilities and instantaneous rewards. In this article, we consider the problem of finding such optimal policies while assuming they are continuous functions belonging to a reproducing kernel Hilbert space (RKHS). To learn the optimal policy, we introduce a stochastic policy gradient ascent algorithm with three novel features. First, the stochastic estimates of policy gradients are unbiased. Second, the variance of stochastic gradients is reduced by drawing on ideas from numerical differentiation. Third, policy complexity is controlled using sparse RKHS representations. The first feature is instrumental in proving convergence to a stationary point of the expected cumulative reward. The second facilitates reasonable convergence times. The third is a necessity in practical implementations, which we show can be satisfied without sacrificing convergence guarantees. Numerical examples on standard problems illustrate successful learning of policies with low-complexity representations that are close to stationary points of the expected cumulative reward.
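
To make the algorithmic ideas concrete, the following is a minimal Python sketch of a kernel (RKHS) policy gradient step. It assumes a Gaussian RBF kernel, a Gaussian exploration policy over scalar actions, and a plain score-function (REINFORCE-style) gradient estimate; the class name RKHSPolicy, its parameters, and the weight-thresholding sparsification are illustrative assumptions, not the paper's actual construction (which additionally reduces variance via numerical-differentiation ideas and uses principled sparse RKHS projections).

import numpy as np

class RKHSPolicy:
    """Gaussian-kernel policy with a Gaussian exploration distribution (illustrative sketch)."""

    def __init__(self, bandwidth=1.0, noise_std=0.5):
        self.centers = []        # kernel centers (states visited so far)
        self.weights = []        # RKHS expansion coefficients
        self.bandwidth = bandwidth
        self.noise_std = noise_std

    def kernel(self, s, c):
        # Gaussian RBF kernel k(s, c)
        return np.exp(-np.sum((s - c) ** 2) / (2.0 * self.bandwidth ** 2))

    def mean(self, s):
        # RKHS function h(s) = sum_i w_i k(s, s_i)
        return sum(w * self.kernel(s, c) for w, c in zip(self.weights, self.centers))

    def act(self, s):
        # Gaussian policy: a ~ N(h(s), sigma^2)
        return self.mean(s) + self.noise_std * np.random.randn()

    def reinforce_update(self, trajectory, step_size=0.01, gamma=0.99):
        # Unbiased score-function (REINFORCE-style) gradient step. For a Gaussian
        # policy, the functional gradient of log pi(a|s) in the RKHS is
        # ((a - h(s)) / sigma^2) * k(s, .), so each visited state is appended
        # as a new kernel center with the corresponding coefficient.
        G, returns = 0.0, []
        for _, _, r in reversed(trajectory):
            G = r + gamma * G
            returns.append(G)
        returns.reverse()
        means = [self.mean(s) for s, _, _ in trajectory]  # evaluate before modifying the expansion
        for (s, a, _), G_t, mu in zip(trajectory, returns, means):
            self.centers.append(np.asarray(s, dtype=float))
            self.weights.append(step_size * G_t * (a - mu) / self.noise_std ** 2)

    def sparsify(self, tol=1e-3):
        # Crude complexity control: drop centers with negligible coefficients
        # (a simple stand-in for the sparse RKHS representations used in the paper).
        kept = [(w, c) for w, c in zip(self.weights, self.centers) if abs(w) > tol]
        self.weights = [w for w, _ in kept]
        self.centers = [c for _, c in kept]

# Toy usage with synthetic one-dimensional states and rewards.
policy = RKHSPolicy()
states = [np.array([x]) for x in np.linspace(-1.0, 1.0, 5)]
trajectory = [(s, policy.act(s), -float(abs(s[0]))) for s in states]
policy.reinforce_update(trajectory)
policy.sparsify()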
