Abstract

This paper investigates a novel Q-learning algorithm based on action-dependent dual heuristic programming (ADDHP) to solve the infinite-horizon linear quadratic tracking (LQT) problem for unknown linear discrete-time systems. The proposed method relies only on measured system data and does not require knowledge of the system matrices. Once the reference system is specified, an augmented system composed of the original system and the reference system is constructed, and it is proved that the value function of the LQT problem is quadratic in the augmented state. Using this quadratic value function, an augmented algebraic Riccati equation (ARE) is derived for solving the LQT problem. Since the augmented ARE is difficult to solve directly, a Q-learning algorithm based on the ADDHP structure is employed instead. Because the system matrices are unknown, a model neural network is developed to reconstruct the system dynamics, with an accompanying stability analysis. The estimated system matrices are then used by the proposed algorithm to compute the optimal control via policy iteration, and the convergence of the algorithm is proved. Two simulation examples validate the method; all results demonstrate the effectiveness of the proposed ADDHP-based Q-learning method for LQT without a priori knowledge of the system matrices.
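The abstract refers to an augmented system and an augmented ARE without stating them explicitly. For concreteness, the sketch below follows the standard discounted LQT construction found in this line of work; the symbols $A$, $B$, $C$, $F$, $Q$, $R$ and the discount factor $\gamma$ are generic assumptions, not taken from the paper itself.

```latex
% Sketch of the standard augmented LQT construction (assumed notation).
% Plant: x_{k+1} = A x_k + B u_k, y_k = C x_k; reference: r_{k+1} = F r_k.
\[
X_k = \begin{bmatrix} x_k \\ r_k \end{bmatrix}, \qquad
X_{k+1} = T X_k + B_1 u_k, \qquad
T = \begin{bmatrix} A & 0 \\ 0 & F \end{bmatrix}, \quad
B_1 = \begin{bmatrix} B \\ 0 \end{bmatrix}.
\]
% Discounted tracking cost, which admits a quadratic value function:
\[
V(X_k) = \sum_{i=k}^{\infty} \gamma^{\,i-k}
\bigl[(y_i - r_i)^{\top} Q \,(y_i - r_i) + u_i^{\top} R\, u_i\bigr]
 = X_k^{\top} P X_k .
\]
% Augmented ARE and optimal feedback gain, with C_1 = [C, -I]:
\[
P = C_1^{\top} Q C_1 + \gamma T^{\top} P T
  - \gamma^{2} T^{\top} P B_1 \bigl(R + \gamma B_1^{\top} P B_1\bigr)^{-1} B_1^{\top} P T,
\qquad
u_k^{*} = -\gamma \bigl(R + \gamma B_1^{\top} P B_1\bigr)^{-1} B_1^{\top} P T \, X_k .
\]
```

In the Q-learning setting described in the abstract, $P$ is not obtained by solving this equation directly; instead, policy iteration alternates between evaluating the current feedback gain and improving it from the learned Q-function, with the ADDHP structure approximating the Q-function's gradient rather than its value.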
