Abstract

In this paper, finite-horizon near-optimal adaptive regulation of linear discrete-time systems with unknown system dynamics is presented in a forward-in-time manner by using adaptive dynamic programming and Q-learning. An adaptive estimator (AE) is introduced to relax the requirement of known system dynamics, and it is tuned by using Q-learning. The time-varying solution to the Bellman equation in adaptive dynamic programming is handled by utilizing a time-dependent basis function, while the terminal constraint is incorporated as part of the AE update law. The Kalman gain is obtained from the AE parameters, while the control input is computed from the AE and the system state vector. Next, to relax the need for state availability, an adaptive observer is proposed so that the linear quadratic regulator design uses the reconstructed states and outputs. Although the underlying linear discrete-time system is time invariant, the closed-loop dynamics become non-autonomous and involved; closed-loop behavior is nonetheless verified by using standard Lyapunov and geometric sequence theory. The proposed linear quadratic regulator design for the uncertain linear system requires an initial admissible control input and yields a forward-in-time, online solution without needing value and/or policy iterations. Effectiveness of the proposed approach is demonstrated by simulation results. Copyright © 2014 John Wiley & Sons, Ltd.
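For context, a standard way to make the abstract's ingredients concrete (not taken from the paper itself): in Q-learning for linear quadratic regulation, the action-value function is quadratic in the joint state-action vector, and the gain the abstract calls the Kalman gain falls out of a partition of the kernel matrix H:

Q(x_k, u_k) = z_k^\top H z_k, \qquad
z_k = \begin{bmatrix} x_k \\ u_k \end{bmatrix}, \qquad
H = \begin{bmatrix} H_{xx} & H_{xu} \\ H_{ux} & H_{uu} \end{bmatrix}, \qquad
K = H_{uu}^{-1} H_{ux}, \quad u_k = -K x_k.

The sketch below is a minimal, simplified Python illustration of this idea, not the paper's algorithm: the plant matrices A and B are hypothetical and used only to generate data, the estimator is stationary (the paper's time-dependent basis function, terminal-constraint handling, and adaptive observer are omitted), and the H parameters are identified by recursive least squares on the Bellman temporal-difference relation.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical open-loop-stable plant; unknown to the learner and used
# only to generate data (so K = 0 is an admissible initial control).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
Qx = np.eye(2)            # state weighting
Ru = np.array([[0.1]])    # control weighting

n, m = 2, 1
zdim = n + m
iu, ju = np.triu_indices(zdim)        # independent entries of symmetric H
w = np.where(iu == ju, 1.0, 2.0)      # off-diagonal entries appear twice

def phi(z):
    # Quadratic basis so that Q(z) = theta @ phi(z) = z' H z.
    return w * np.outer(z, z)[iu, ju]

def H_of(theta):
    H = np.zeros((zdim, zdim))
    H[iu, ju] = theta
    return H + np.triu(H, 1).T        # symmetrize

def gain_of(H):
    # Certainty-equivalent gain from the H partition: K = Huu^{-1} Hux.
    return np.linalg.solve(H[n:, n:] + 1e-6 * np.eye(m), H[n:, :n])

theta = np.zeros(len(iu))
theta[iu == ju] = 1.0                 # H = I initially, so Huu is invertible
P = 100.0 * np.eye(len(iu))           # RLS covariance
K = np.zeros((m, n))                  # initial admissible control
x = np.array([1.0, -1.0])

for k in range(2000):
    u = -K @ x + 0.1 * rng.standard_normal(m)   # probing noise for excitation
    z = np.concatenate([x, u])
    r = x @ Qx @ x + u @ Ru @ u
    x_next = A @ x + B @ u
    z_next = np.concatenate([x_next, -K @ x_next])

    # Bellman relation Q(z_k) = r_k + Q(z_{k+1}) gives the linear regression
    # theta @ (phi(z_k) - phi(z_{k+1})) = r_k, solved by recursive least squares.
    h = phi(z) - phi(z_next)
    e = r - theta @ h
    Ph = P @ h
    g = Ph / (1.0 + h @ Ph)
    theta = theta + g * e
    P = P - np.outer(g, Ph)

    K = gain_of(H_of(theta))          # update the gain forward in time
    x = x_next if np.linalg.norm(x_next) > 1e-3 else np.array([1.0, -1.0])

# Compare against the model-based Riccati gain (uses the true A, B; check only).
Pr = Qx.copy()
for _ in range(1000):
    Kst = np.linalg.solve(Ru + B.T @ Pr @ B, B.T @ Pr @ A)
    Pr = Qx + A.T @ Pr @ (A - B @ Kst)
print("learned K:", K)
print("riccati K:", Kst)

Because the closed-loop control drives the state to the origin, a small probing signal (and an occasional state reset in the sketch) keeps the regression excited; this mirrors the persistency-of-excitation requirement typical of such adaptive schemes.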
