Abstract

To deal with optimal setpoint tracking (OST) problems, existing work has introduced a discounted cost function. However, optimal tracking controllers developed from a discounted cost function may ensure neither asymptotic tracking nor the stability of the closed-loop system. To overcome these limitations, in this article, we propose a novel adaptive optimal control method that minimizes a cost function without a discount factor. The proposed method starts from a reformulation of the infinite-horizon OST problem for linear discrete-time systems with external disturbances. We derive an algebraic Riccati equation for solving the OST problem, whose solution is uniquely determined under mild conditions. It is proved that the obtained controller accommodates the disturbance and realizes output tracking with zero steady-state error. In the framework of reinforcement learning, a Q-learning algorithm is devised to learn the suboptimal control policy from measured data. The present learning algorithm does not require the disturbance to be measurable and can be implemented in a completely model-free manner. Finally, two examples, a DC motor system and an F-16 aircraft plant, are provided to corroborate our design methodology.
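The abstract describes a data-driven Q-learning scheme that learns a linear-quadratic control policy from measured trajectories without a model. The sketch below is a minimal generic illustration of that class of algorithm (policy iteration on a quadratic Q-function, fitted by least squares from state-input data), not the article's actual method: the plant matrices `A`, `B`, the weights `Qw`, `Rw`, and the exploration noise are placeholder assumptions, and the disturbance/tracking reformulation of the paper is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed plant (used only to simulate data; the learner never reads A, B).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
Qw = np.eye(2)            # stage cost x'Qw x + u'Rw u (placeholder weights)
Rw = np.array([[1.0]])
n, m = 2, 1

K = np.zeros((m, n))      # initial stabilizing policy u = -K x

def features(z):
    """Quadratic features of z = [x; u], so that z' H z = features(z) . vec(H)."""
    return np.kron(z, z)

for it in range(10):
    # Collect a trajectory under the current policy plus exploration noise.
    rows, targets = [], []
    x = np.array([1.0, -1.0])
    for k in range(200):
        u = -K @ x + 0.1 * rng.standard_normal(m)
        x_next = A @ x + B @ u
        z = np.concatenate([x, u])
        z_next = np.concatenate([x_next, -K @ x_next])
        # Bellman equation for the Q-function of policy K:
        #   z' H z = x'Qw x + u'Rw u + z_next' H z_next
        rows.append(features(z) - features(z_next))
        targets.append(x @ Qw @ x + u @ Rw @ u)
        x = x_next
    # Policy evaluation: fit H from data by least squares.
    h, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    H = h.reshape(n + m, n + m)
    H = 0.5 * (H + H.T)   # symmetrize (only the symmetric part is identifiable)
    # Policy improvement: argmin_u Q(x, u) gives u = -H_uu^{-1} H_ux x.
    K = np.linalg.solve(H[n:, n:], H[n:, :n])

print("learned gain K =", K)
```

Because the simulated transitions are deterministic and the recorded inputs include the exploration noise, the least-squares step recovers the policy's Q-function exactly (given sufficient excitation), and the iteration converges to the standard LQR gain; a model-based Riccati solve can be used to check the result.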
