Summary

In this paper, the adaptive optimal regulator design for unknown quantized linear discrete-time control systems over a fixed finite time horizon is introduced. First, to mitigate the quantization errors arising from input and state quantization, a dynamic quantizer with a time-varying step size is utilized, and it is shown that the quantization error decreases over time, thus overcoming the drawback of the traditional uniform quantizer. Next, to relax the need for knowledge of the system dynamics and achieve optimality, the adaptive dynamic programming methodology is adopted under Bellman's principle by using the quantized state and input vectors. Because of the time-dependent nature of the finite horizon, an adaptive online estimator, which learns a newly defined time-varying action-dependent value function, is updated at each time step so that policy and/or value iterations are not needed. Further, an additional error term corresponding to the terminal constraint is defined and minimized along the system trajectory. The proposed design yields a forward-in-time and online scheme with significant practical benefits. Lyapunov analysis is used to show the boundedness of the closed-loop system, whereas asymptotic stability of the closed-loop system is demonstrated when the time horizon is stretched to infinity, as in the infinite-horizon case. Simulation results on a benchmark batch reactor system are included to verify the theoretical claims. The net result is the design of an optimal adaptive controller for uncertain quantized linear discrete-time systems in a forward-in-time manner. Copyright © 2014 John Wiley & Sons, Ltd.
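To illustrate the dynamic-quantizer idea described above, the following minimal sketch shows a uniform quantizer whose step size shrinks over time, so that the per-component quantization error (bounded by half the step size) decreases with the time index. The function names `dynamic_quantize` and `step_schedule`, and the geometric decay rule, are illustrative assumptions and not the specific quantizer or step-size update used in the paper.

```python
import numpy as np


def dynamic_quantize(x, step):
    """Uniform quantizer: round each component of x to the nearest
    multiple of the current step size."""
    return step * np.round(np.asarray(x, dtype=float) / step)


def step_schedule(step0, decay, k):
    """Hypothetical time-varying step size: geometric decay with time
    index k (an assumed schedule, not the paper's update law)."""
    return step0 * (decay ** k)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    step0, decay = 1.0, 0.8
    for k in range(6):
        x = rng.uniform(-5.0, 5.0, size=3)       # state/input sample at time k
        step = step_schedule(step0, decay, k)     # time-varying step size
        xq = dynamic_quantize(x, step)
        err = np.max(np.abs(x - xq))              # always <= step / 2
        print(f"k={k}  step={step:.3f}  max quantization error={err:.3f}")
```

Running the sketch shows the worst-case quantization error shrinking as the step size decays, which is the qualitative behavior the abstract attributes to the dynamic quantizer, in contrast to a traditional uniform quantizer whose fixed step size keeps the error bound constant.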