Abstract
Reinforcement learning consists of finding policies that maximize an expected cumulative long-term reward in a Markov decision process with unknown transition probabilities and instantaneous rewards. In this article, we consider the problem of finding such optimal policies while assuming they are continuous functions belonging to a reproducing kernel Hilbert space (RKHS). To learn the optimal policy, we introduce a stochastic policy gradient ascent algorithm with three novel features. First, the stochastic estimates of policy gradients are unbiased. Second, the variance of stochastic gradients is reduced by drawing on ideas from numerical differentiation. Third, policy complexity is controlled using sparse RKHS representations. The first feature is instrumental in proving convergence to a stationary point of the expected cumulative reward. The second facilitates reasonable convergence times. The third is a necessity in practical implementations, which we show can be done in a way that does not eliminate convergence guarantees. Numerical examples in standard problems illustrate successful learning of policies with low-complexity representations that are close to stationary points of the expected cumulative reward.
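To make the three features concrete, the following is a minimal sketch (not the authors' implementation) of stochastic policy gradient ascent with an RKHS-parameterized policy. It assumes a toy 1-D MDP, a Gaussian policy whose mean is a kernel expansion, REINFORCE-style unbiased gradient estimates, and a simple magnitude-based pruning rule standing in for the paper's sparse RKHS projection; the variance-reduction device based on numerical differentiation is omitted for brevity, and all constants are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

GAMMA = 0.95        # discount factor
SIGMA = 0.3         # std of the Gaussian policy (exploration noise)
BANDWIDTH = 0.5     # Gaussian kernel bandwidth
STEP = 0.05         # gradient ascent step size
PRUNE_TOL = 1e-3    # atoms with |weight| below this are dropped (sparsification)

centers, weights = [], []   # kernel dictionary and RKHS expansion coefficients


def kernel(s, c):
    """Gaussian (RBF) kernel between state s and center c."""
    return np.exp(-((s - c) ** 2) / (2.0 * BANDWIDTH ** 2))


def mean_action(s):
    """Policy mean h(s) = sum_i w_i k(s, c_i), an element of the RKHS."""
    return sum(w * kernel(s, c) for w, c in zip(weights, centers))


def rollout(s0, horizon=30):
    """Toy 1-D MDP: s' = s + a + noise, reward = -(s^2 + 0.1 a^2)."""
    s, states, actions, rewards = s0, [], [], []
    for _ in range(horizon):
        a = mean_action(s) + SIGMA * rng.standard_normal()   # Gaussian policy
        r = -(s ** 2 + 0.1 * a ** 2)
        states.append(s); actions.append(a); rewards.append(r)
        s = s + a + 0.05 * rng.standard_normal()
    return states, actions, rewards


for it in range(200):
    states, actions, rewards = rollout(s0=rng.uniform(-1.0, 1.0))

    # Discounted returns-to-go give an unbiased estimate of Q along the path.
    G, returns = 0.0, []
    for r in reversed(rewards):
        G = r + GAMMA * G
        returns.append(G)
    returns.reverse()

    # REINFORCE in the RKHS: the functional gradient of the Gaussian policy's
    # log-density is ((a - h(s)) / SIGMA^2) * k(s, .), so each visited state
    # contributes one new kernel atom, weighted by its discounted return.
    new_atoms = []
    for t, (s, a, G_t) in enumerate(zip(states, actions, returns)):
        score = (a - mean_action(s)) / SIGMA ** 2   # evaluated under behavior policy
        new_atoms.append((STEP * (GAMMA ** t) * G_t * score, s))
    weights += [w for w, _ in new_atoms]
    centers += [c for _, c in new_atoms]

    # Crude sparsification: drop negligible atoms so the representation
    # (and hence policy complexity) does not grow without bound.
    kept = [(w, c) for w, c in zip(weights, centers) if abs(w) > PRUNE_TOL]
    weights = [w for w, _ in kept]
    centers = [c for _, c in kept]

print(f"dictionary size: {len(centers)}, policy mean at s=0.5: {mean_action(0.5):.3f}")
```

The sketch keeps unbiasedness (feature one) by using complete returns-to-go rather than bootstrapped value estimates, and keeps the dictionary bounded (feature three) by pruning; a faithful implementation would replace the pruning rule with the sparse RKHS projection analyzed in the paper.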