Abstract

This paper presents a new policy iteration Q-learning algorithm for solving the infinite-horizon optimal tracking problem for a class of discrete-time nonlinear systems. The idea is to use an iterative adaptive dynamic programming (ADP) technique to construct an iterative tracking control law that makes the system state track the desired state trajectory while minimizing the iterative Q function. Via a system transformation, the optimal tracking problem is converted into an optimal regulation problem, and the policy iteration Q-learning algorithm is then developed to obtain the optimal control law for the regulation system. Initialized with an arbitrary admissible control law, the algorithm's convergence is analyzed: the iterative Q function is shown to be monotonically non-increasing and to converge to the optimal Q function, and every iterative control law is proven to stabilize the transformed nonlinear system. To facilitate implementation, two neural networks are used to approximate the iterative Q function and to compute the iterative control law, respectively. Finally, two simulation examples are presented to illustrate the performance of the developed algorithm.
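As context for the scheme summarized above, the sketch below illustrates the alternation between policy evaluation (solving Q^i(e_k, u_k) = U(e_k, u_k) + Q^i(e_{k+1}, nu^i(e_{k+1})) for the current control law nu^i) and policy improvement (nu^{i+1}(e_k) = argmin_u Q^i(e_k, u)) on the transformed regulation system. It is a minimal illustration under stated assumptions, not the paper's implementation: the error dynamics F, the utility U, the quadratic critic features, and the initial admissible law are hypothetical placeholders, and a least-squares critic with a grid-search minimization stands in for the paper's two neural networks.

    import numpy as np

    # Hypothetical transformed error dynamics e_{k+1} = F(e_k, u_k)
    # obtained after subtracting the desired trajectory (assumption).
    def F(e, u):
        return 0.8 * np.sin(e) + u

    def U(e, u):
        # Illustrative utility, quadratic in tracking error and control.
        return e**2 + u**2

    def phi(e, u):
        # Critic features: Q(e, u) is approximated as w @ phi(e, u)
        # (a stand-in for the paper's critic neural network).
        return np.array([e * e, e * u, u * u, 1.0])

    U_GRID = np.linspace(-2.0, 2.0, 201)  # candidate controls for the argmin

    def improve(w):
        # Policy improvement: nu(e) = argmin over u of Q(e, u),
        # here by grid search instead of an action network.
        return lambda e: U_GRID[np.argmin([w @ phi(e, u) for u in U_GRID])]

    def evaluate(policy, n_samples=400, sweeps=30):
        # Policy evaluation: fit w so that, on sampled transitions,
        # Q(e, u) = U(e, u) + Q(e', nu(e')) in the least-squares sense.
        rng = np.random.default_rng(0)
        E = rng.uniform(-1.0, 1.0, n_samples)
        A = rng.uniform(-1.0, 1.0, n_samples)
        w = np.zeros(4)
        for _ in range(sweeps):  # fixed-point iteration on the weights
            X, y = [], []
            for e, u in zip(E, A):
                e_next = F(e, u)
                u_next = policy(e_next)
                X.append(phi(e, u))
                y.append(U(e, u) + w @ phi(e_next, u_next))
            w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
        return w

    # Initialize with an (assumed) admissible law; here it cancels the
    # drift so that e_{k+1} = 0, which is stabilizing for this toy system.
    policy = lambda e: -0.8 * np.sin(e)
    for i in range(5):
        w = evaluate(policy)
        policy = improve(w)
        print(f"iteration {i}: critic weights {np.round(w, 3)}")

Starting from an admissible control law mirrors the initialization requirement stated above; under that condition, the fitted Q values would be expected to decrease across iterations, consistent with the monotone non-increasing property claimed for the iterative Q function.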
