Abstract

In this paper, a novel formulation of the value function is presented for the optimal tracking problem (TP) of nonlinear discrete-time systems, with the aim of eliminating the tracking error through adaptive dynamic programming (ADP) algorithms. Unlike existing ADP methods, this formulation incorporates the control input into the tracking error and directly omits the quadratic term in the control input, which makes the boundedness and convergence of the value function independent of the discount factor. Based on the proposed value function, the optimal control policy can be derived without considering the reference control input. Value iteration (VI) and policy iteration (PI) methods are applied to prove the optimality of the obtained control policy and to establish the monotonicity and convergence of the iterative value function. Simulation examples, implemented with neural networks in an actor–critic structure, are provided to verify the effectiveness of the proposed ADP algorithm.
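
For context, a conventional ADP formulation of the tracking problem evaluates a discounted quadratic cost over both the tracking error and the control input; the expression below is that standard form (a sketch drawn from the general ADP tracking literature, not the new value function proposed in this paper), in which the boundedness and convergence analysis typically depends on the discount factor $\gamma$:

$$ V(e_k) \;=\; \sum_{i=k}^{\infty} \gamma^{\,i-k}\left( e_i^{\top} Q\, e_i \;+\; u_i^{\top} R\, u_i \right), \qquad 0 < \gamma \le 1, $$

where $e_i = x_i - r_i$ denotes the tracking error between the system state $x_i$ and the reference trajectory $r_i$, and $Q$, $R$ are positive-definite weighting matrices. The proposed formulation, by contrast, folds the control input into the tracking error and drops the explicit quadratic control term, which is what removes the dependence on $\gamma$.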
