Abstract
This paper investigates the optimal control problem for a class of discrete-time stochastic systems subject to both additive and multiplicative noise. An algebraic Riccati equation is established whose solution characterizes the optimal controller. To obtain the optimal control gain iteratively, an offline policy iteration is presented together with a convergence proof. A model-free reinforcement learning algorithm is then proposed that learns the optimal admissible control policy from system states and inputs, without requiring knowledge of the system matrices. It is proven that the estimation error of the kernel matrix is bounded and that every iterative control gain is admissible. In contrast to existing work, this paper addresses the model-free controller learning problem for stochastic systems affected by both additive and multiplicative noise using reinforcement learning. The proposed algorithm is illustrated through a numerical example, which shows that it outperforms other policy iteration algorithms.
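The offline policy iteration the abstract refers to can be sketched as follows. This is a hedged illustration, not the paper's actual algorithm: the system data (`A`, `B`, `C`, `D`, `Q`, `R`) are hypothetical, and we assume the standard stochastic LQR structure x_{k+1} = (A + w_k C) x_k + (B + w_k D) u_k + v_k with a mean-square-stabilizing initial gain. Policy evaluation solves a Lyapunov-type equation for the kernel matrix P by vectorization; policy improvement updates the gain K from P.

```python
import numpy as np

# Hypothetical system data (illustrative only, not taken from the paper):
# x_{k+1} = (A + w_k C) x_k + (B + w_k D) u_k + v_k
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = 0.1 * np.eye(2)           # multiplicative state-noise matrix
D = np.array([[0.0], [0.1]])  # multiplicative input-noise matrix
Q = np.eye(2)                 # state weight
R = np.array([[1.0]])         # input weight
n, m = B.shape

def policy_evaluation(K):
    """Solve P = M1' P M1 + M2' P M2 + Q + K' R K,
    with M1 = A - B K, M2 = C - D K, via vectorization:
    vec(M' P M) = (M' kron M') vec(P)."""
    M1, M2 = A - B @ K, C - D @ K
    T = np.kron(M1.T, M1.T) + np.kron(M2.T, M2.T)
    rhs = (Q + K.T @ R @ K).reshape(-1, order="F")
    P = np.linalg.solve(np.eye(n * n) - T, rhs).reshape(n, n, order="F")
    return (P + P.T) / 2  # symmetrize against round-off

def policy_improvement(P):
    """K = (R + B' P B + D' P D)^{-1} (B' P A + D' P C)."""
    G = R + B.T @ P @ B + D.T @ P @ D
    return np.linalg.solve(G, B.T @ P @ A + D.T @ P @ C)

# K0 = 0 is admissible for this example: the open loop is mean-square stable.
K = np.zeros((m, n))
for _ in range(50):
    P = policy_evaluation(K)
    K_new = policy_improvement(P)
    if np.linalg.norm(K_new - K) < 1e-10:
        break
    K = K_new

# Residual of the generalized algebraic Riccati equation at convergence:
# P = A'PA + C'PC + Q - (A'PB + C'PD)(R + B'PB + D'PD)^{-1}(B'PA + D'PC)
S = np.linalg.solve(R + B.T @ P @ B + D.T @ P @ D,
                    B.T @ P @ A + D.T @ P @ C)
residual = A.T @ P @ A + C.T @ P @ C + Q - (A.T @ P @ B + C.T @ P @ D) @ S - P
print(np.linalg.norm(residual))
```

At convergence the evaluated P satisfies the generalized algebraic Riccati equation, so the printed residual norm is at the solver's tolerance level. The model-free algorithm in the paper replaces the model-based evaluation step with estimates built from state and input data; that step is not reproduced here.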