Abstract

Bo Pang and Zhong-Ping Jiang

This chapter studies the robustness of reinforcement learning (RL) for discrete-time linear stochastic systems with multiplicative noise evolving in continuous state and action spaces. The robustness of policy iteration, one of the most popular methods in RL, is a longstanding open problem for the stochastic linear quadratic regulator (LQR) problem with multiplicative noise. A solution in the spirit of input-to-state stability is given, guaranteeing that the solutions of the policy iteration algorithm are bounded and enter a small neighborhood of the optimal solution whenever the error in each iteration is bounded and small. In addition, a novel off-policy, multiple-trajectory, optimistic least-squares policy iteration algorithm is proposed to learn a near-optimal solution of the stochastic LQR problem directly from online input/state data, without explicitly identifying the system matrices. The efficacy of the proposed algorithm is supported by rigorous convergence analysis and numerical results on a second-order example.
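As a concrete reference point for the exact (model-based) policy iteration whose robustness the chapter analyzes, the following is a minimal sketch for a stochastic LQR with a single multiplicative-noise channel, x_{k+1} = A x_k + B u_k + (C x_k + D u_k) w_k with E[w_k] = 0 and E[w_k^2] = 1. The matrix names (A, B, C, D, Q, R) and the single-channel noise model are illustrative assumptions, not the chapter's notation; the chapter's actual algorithm is model-free and off-policy, learning from input/state data rather than from known system matrices.

```python
# Exact (model-based) policy iteration for a discrete-time stochastic LQR
# with one multiplicative-noise channel (an illustrative setup, not the
# chapter's data-driven algorithm):
#     x_{k+1} = A x_k + B u_k + (C x_k + D u_k) w_k,  E[w_k]=0, E[w_k^2]=1,
#     J = E[ sum_k (x_k' Q x_k + u_k' R u_k) ].
import numpy as np

def evaluate_policy(K, A, B, C, D, Q, R):
    """Policy evaluation: solve the Lyapunov-like equation
       P = Q + K'RK + (A - BK)' P (A - BK) + (C - DK)' P (C - DK)
       for the cost matrix P of the policy u = -K x, via vectorization."""
    n = A.shape[0]
    F = A - B @ K          # closed-loop drift matrix
    G = C - D @ K          # closed-loop noise-gain matrix
    S = Q + K.T @ R @ K    # per-stage cost under the policy
    # vec(F' P F) = (F' kron F') vec(P), and likewise for G,
    # so the evaluation equation is linear in vec(P).
    M = np.eye(n * n) - np.kron(F.T, F.T) - np.kron(G.T, G.T)
    P = np.linalg.solve(M, S.reshape(-1))
    return P.reshape(n, n)

def improve_policy(P, A, B, C, D, R):
    """Policy improvement: minimize the one-step Q-function, giving
       K = (R + B'PB + D'PD)^{-1} (B'PA + D'PC)."""
    return np.linalg.solve(R + B.T @ P @ B + D.T @ P @ D,
                           B.T @ P @ A + D.T @ P @ C)

def policy_iteration(K0, A, B, C, D, Q, R, iters=30):
    """Alternate evaluation and improvement from a stabilizing initial gain K0."""
    K = K0
    for _ in range(iters):
        P = evaluate_policy(K, A, B, C, D, Q, R)
        K = improve_policy(P, A, B, C, D, R)
    return K, P
```

In the chapter's setting, the evaluation step cannot be carried out exactly because the system matrices are unknown; each iteration is instead computed from data by least squares, so it incurs an error. The input-to-state-stability-style result summarized above guarantees that, as long as these per-iteration errors stay bounded and small, the iterates of such a perturbed policy iteration remain bounded and converge to a small neighborhood of the optimal gain.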
