Abstract

This paper presents an online solution to the infinite-horizon linear quadratic tracker (LQT) using reinforcement learning. It is first assumed that the value function for the LQT is quadratic in the reference trajectory and the system state. Using this quadratic form of the value function, an augmented algebraic Riccati equation (ARE) is derived for the LQT, from which the feedback and feedforward parts of the optimal control are obtained simultaneously. To solve the augmented ARE online, policy iteration, a class of reinforcement learning algorithms, is employed. The algorithm is implemented on an actor-critic structure using two neural networks and does not require knowledge of the drift system dynamics or the command generator dynamics. A simulation example shows that the proposed algorithm works for a system with partially unknown dynamics.
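To make the augmented-ARE structure concrete, the following is a minimal model-based sketch of policy iteration (Kleinman's algorithm) on a hypothetical scalar example: a stable plant tracking a constant reference, with the state augmented by the reference and a discount factor absorbed into the drift. All system matrices and the discount value are illustrative assumptions, not from the paper, and unlike the paper's actor-critic scheme this sketch uses the full (known) dynamics.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Hypothetical example (not from the paper): x_dot = -x + u, output y = x,
# tracking a constant reference generated by r_dot = 0.
A, B, C = np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]])
F = np.array([[0.0]])              # command generator dynamics
gamma = 0.1                        # assumed discount factor

# Augmented system with state X = [x; r]
T = np.block([[A, np.zeros((1, 1))], [np.zeros((1, 1)), F]])
B1 = np.vstack([B, np.zeros((1, 1))])
C1 = np.hstack([C, -np.eye(1)])    # tracking error e = C x - r
Q1 = C1.T @ np.eye(1) @ C1         # augmented state weight
R = np.eye(1)
Td = T - 0.5 * gamma * np.eye(2)   # discount folded into the drift matrix

# Policy iteration on the augmented ARE
K = np.zeros((1, 2))               # initial gain; Td is already Hurwitz here
for _ in range(50):
    Acl = Td - B1 @ K
    # Policy evaluation: solve Acl^T P + P Acl + Q1 + K^T R K = 0
    P = solve_continuous_lyapunov(Acl.T, -(Q1 + K.T @ R @ K))
    K_new = np.linalg.solve(R, B1.T @ P)   # policy improvement
    if np.linalg.norm(K_new - K) < 1e-10:
        break
    K = K_new

# Cross-check against a direct solve of the augmented ARE
P_are = solve_continuous_are(Td, B1, Q1, R)
print(np.allclose(P, P_are, atol=1e-6))
```

The converged gain K acts on the augmented state [x; r], so its first block is the feedback part and its second block the feedforward part of the tracking controller, matching the "simultaneous" solution described in the abstract.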

