Abstract

In this paper, a Q-learning algorithm is proposed to solve the linear quadratic regulator problem for black-box linear systems. The algorithm has access only to input and output measurements. A Luenberger observer parametrization is constructed using the control input and a new output obtained from a factorization of the utility function. An integral reinforcement learning approach is used to develop the Q-learning approximator structure, and a gradient descent update rule is used to estimate the parameters of the Q-function online. Stability and convergence of the Q-learning algorithm under the Luenberger observer parametrization are assessed using Lyapunov stability theory. Simulation studies are carried out to verify the proposed approach.
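To make the final step of the abstract concrete, the sketch below illustrates gradient-descent estimation of a quadratic Q-function for a discrete-time LQR problem. It is a deliberately simplified state-feedback version, not the paper's algorithm: it omits the integral reinforcement learning formulation and the output-feedback Luenberger observer parametrization, and the system matrices, cost weights, learning rate, and discount factor are all assumed illustrative values.

```python
import numpy as np

# Minimal Q-learning sketch for discrete-time LQR (state feedback).
# NOTE: A and B are assumed values used only to simulate data; the
# learner itself never reads them, mimicking the black-box setting.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Qc = np.eye(2)           # state cost weight (utility: x'Qc x + u'Rc u)
Rc = np.array([[1.0]])   # input cost weight

n, m = 2, 1
p = n + m                # dimension of z = [x; u]
iu = np.triu_indices(p)

def phi(x, u):
    """Quadratic basis so that theta @ phi(x, u) = z' H z, z = [x; u]."""
    z = np.concatenate([x, u])
    zz = np.outer(z, z)
    w = np.where(iu[0] == iu[1], 1.0, 2.0)  # off-diagonals appear twice
    return w * zz[iu]

def theta_to_H(theta):
    """Rebuild the symmetric kernel H of the quadratic Q-function."""
    H = np.zeros((p, p))
    H[iu] = theta
    return H + np.triu(H, 1).T

rng = np.random.default_rng(0)
alpha, gamma = 0.01, 0.9       # learning rate, discount factor (assumed)
theta = np.eye(p)[iu]          # start from H = I so H_uu is invertible

x = rng.standard_normal(n)
for k in range(20000):
    H = theta_to_H(theta)
    K = np.linalg.solve(H[n:, n:], H[n:, :n])  # greedy gain: u = -K x
    u = -K @ x + 0.1 * rng.standard_normal(m)  # exploration noise
    cost = x @ Qc @ x + u @ Rc @ u             # one-step utility
    x_next = A @ x + B @ u
    td_target = cost + gamma * theta @ phi(x_next, -K @ x_next)
    td_error = td_target - theta @ phi(x, u)
    theta += alpha * td_error * phi(x, u)      # gradient descent step
    x = x_next
    if np.linalg.norm(x) > 10.0:               # re-center if state drifts
        x = rng.standard_normal(n)

H = theta_to_H(theta)
print("Learned feedback gain K:", np.linalg.solve(H[n:, n:], H[n:, :n]))
```

In the paper, the analogous update acts on a Q-function expressed over the Luenberger observer parametrization built from the input and the factorized output, rather than over the state directly as in this simplification.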
