Abstract

With the development of modern mechatronics and networked systems, controller design for time-delay systems has received considerable attention. Time delays can strongly affect the stability and performance of such systems, especially in optimal control design. In this article, we propose a receding horizon actor–critic learning control approach (RACL-TD) for near-optimal control of nonlinear time-delay systems with unknown dynamics. In the proposed approach, a data-driven predictor for the nonlinear time-delay system is first learned from precollected samples based on Koopman theory. A receding horizon actor–critic architecture is then designed to learn a near-optimal control policy. In RACL-TD, the terminal cost is determined using the Lyapunov–Krasovskii approach so that the influence of delayed states and control inputs is properly addressed. Furthermore, a relaxed terminal condition is presented to reduce the computational cost. The convergence and optimality of RACL-TD over each prediction interval, as well as the closed-loop properties of the system, are analyzed. Simulation results on a two-stage time-delayed chemical reactor show that RACL-TD achieves better control performance than nonlinear model predictive control (MPC) and infinite-horizon adaptive dynamic programming, while incurring a lower computational cost than nonlinear MPC.
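The abstract does not specify how the Koopman-based predictor is constructed; as a minimal illustrative sketch only (not the authors' method), the following Python code fits an EDMD-with-control style lifted linear predictor from sampled trajectories of a time-delay system, stacking the current and delayed states before lifting. All names (`lift`, `fit_koopman_predictor`, `tau`) and the choice of dictionary are assumptions made for illustration.

```python
import numpy as np

def lift(z):
    """Hypothetical dictionary of observables: state, elementwise squares, constant."""
    return np.concatenate([z, z**2, [1.0]])

def fit_koopman_predictor(X, U, tau):
    """Fit a lifted linear predictor phi_{k+1} ~ A @ phi_k + B @ u_k by least squares.

    X: (T, n) array of state samples from precollected trajectories
    U: (T, m) array of input samples
    tau: discrete delay in steps; the delayed state x_{k-tau} is stacked
         with x_k before lifting, so the delay enters the predictor's state.
    Returns (A, B) of the lifted linear model.
    """
    Phi, Phi_next, Uin = [], [], []
    for k in range(tau, len(X) - 1):
        z_k  = np.concatenate([X[k],     X[k - tau]])      # delay-augmented state
        z_k1 = np.concatenate([X[k + 1], X[k + 1 - tau]])
        Phi.append(lift(z_k))
        Phi_next.append(lift(z_k1))
        Uin.append(U[k])
    Phi, Phi_next, Uin = map(np.asarray, (Phi, Phi_next, Uin))
    # Solve min ||[Phi, Uin] @ W - Phi_next|| so that Phi_next ~ Phi @ A^T + Uin @ B^T.
    G = np.hstack([Phi, Uin])
    W, *_ = np.linalg.lstsq(G, Phi_next, rcond=None)
    A = W[:Phi.shape[1]].T
    B = W[Phi.shape[1]:].T
    return A, B
```

Once fitted, the pair (A, B) gives a linear predictor in the lifted space that a receding horizon scheme could roll forward over the prediction interval; the actual dictionary, delay embedding, and regression details used in RACL-TD are left to the full text.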
