Although numerous approximate dynamic programming (ADP) methods have been proposed and applied to robust control problems, research on the use of ADP for robust tracking control of nonlinear systems remains limited, especially for discrete-time (DT) nonlinear systems. In this paper, we propose an efficient robust learning-based output tracking control method, referred to as r-LTC, for a class of unknown DT nonlinear systems with dynamic uncertainty. The objectives are achieved by: 1) identifying the completely unknown system with a system identification technique based on an Auto-Regressive with eXogenous variables (ARX) model with network-type coefficients; 2) reformulating the identified model into a state-space model in terms of deviations from the reference trajectory, so that the robust tracking problem is converted into a robust regulation problem; 3) transforming the robust regulation problem into an equivalent optimal regulation problem for a nominal system with a modified utility function; and 4) alternately performing performance evaluation and control-policy updates with two neural networks (NNs) to numerically solve the DT Hamilton-Jacobi-Bellman (HJB) equation, yielding approximately optimal feedback control inputs for the nominal system that remain robust to all unknown bounded uncertainties. The proposed method requires neither a state observer nor any steady-state knowledge; only measurable system input and output data are used. The convergence of the NNs' approximate weights and the stability of the closed-loop uncertain system are rigorously proven using the Lyapunov method. Lastly, a case study using real data collected from the Shenzhen Metro Line 12 tunneling project examines the performance of the proposed r-LTC algorithm in terms of set-value tracking accuracy, control-system stability, and robustness against different types of disturbances.
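As a rough illustration of step 4), alternating performance evaluation with a control-policy update to solve the DT HJB equation, consider the special case of a linear nominal system with a quadratic utility function, where the evaluation/improvement recursion reduces to the classical DT Riccati value iteration. This is a minimal sketch only: the matrices below are invented for illustration and are not from the paper, and simple NumPy least-squares-style updates stand in for the paper's two NNs.

```python
import numpy as np

# Illustrative nominal DT linear system x_{k+1} = A x_k + B u_k
# (matrices are made up for this sketch, not taken from the paper).
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state penalty in the (modified) utility function
R = np.array([[1.0]])  # control penalty

# Alternate "performance evaluation" (update the value-function matrix P)
# and "policy update" (recompute the feedback gain K) until convergence.
P = np.zeros((2, 2))
K = np.zeros((1, 2))
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # policy update
    P_next = Q + A.T @ P @ A - A.T @ P @ B @ K         # evaluation step
    if np.max(np.abs(P_next - P)) < 1e-10:
        P = P_next
        break
    P = P_next

# The learned feedback u_k = -K x_k should stabilize the closed loop,
# i.e. the spectral radius of A - B K should be below 1.
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print(rho < 1.0)
```

In the paper's nonlinear setting, a critic NN plays the role of `P` (approximating the value function) and an actor NN plays the role of `K` (approximating the feedback policy); the alternation itself is the same evaluation/improvement pattern shown here.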