Abstract

We are interested in understanding the stability (almost sure boundedness) of stochastic approximation algorithms (SAs) driven by a “controlled Markov” process. Analyzing this class of algorithms is important, since many reinforcement learning (RL) algorithms can be cast in this form. In this paper, we present easily verifiable sufficient conditions for the stability and convergence of such SAs. Many RL applications involve continuous state spaces; while our analysis readily ensures stability for such continuous-state applications, traditional analyses do not. Compared to the existing literature, our analysis presents a two-fold generalization: 1) the Markov process may evolve in a continuous state space, and 2) the process need not be ergodic under any given stationary policy. Temporal difference (TD) learning is an important policy evaluation method in RL. The theory developed herein is used to analyze generalized $\text{TD}(0)$, an important variant of TD. Our theory is also used to analyze a TD formulation of supervised learning for forecasting problems.
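
To make the connection between TD learning and SAs driven by a Markov process concrete, here is a minimal illustrative sketch of standard linear $\text{TD}(0)$ written as an SA iteration. It is not the paper's generalized $\text{TD}(0)$; the environment interface `env`, feature map `phi`, discount factor `gamma`, and step sizes `alpha_n` below are assumptions made for illustration only.

```python
import numpy as np

# Illustrative sketch only: standard linear TD(0) viewed as a stochastic
# approximation (SA) iteration driven by the underlying Markov process of
# states. The interface `env`, feature map `phi`, discount `gamma`, and
# step-size schedule are illustrative assumptions, not the paper's method.

def td0_linear(env, phi, theta0, gamma=0.99, n_steps=10_000):
    """Estimate a value function V(s) ~ phi(s) @ theta under the policy
    implicitly followed by the (hypothetical) env.step interface."""
    theta = np.array(theta0, dtype=float)
    s = env.reset()
    for n in range(1, n_steps + 1):
        s_next, r = env.step(s)                 # one transition of the Markov process
        delta = r + gamma * phi(s_next) @ theta - phi(s) @ theta  # TD error
        alpha_n = 1.0 / n                       # diminishing (Robbins-Monro) step size
        theta += alpha_n * delta * phi(s)       # SA update: theta_{n+1} = theta_n + alpha_n * delta_n * phi(s_n)
        s = s_next
    return theta
```

The stability question studied in the paper is, informally, whether the iterates `theta` in such a recursion remain almost surely bounded; the sufficient conditions developed there allow the state process to live in a continuous state space and to be non-ergodic under a given stationary policy.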
