Abstract

This article investigates the optimal control problem, via reinforcement learning, for a class of nonlinear discrete-time systems. The nonlinear system under consideration is assumed to be partially unknown. A new learning-based algorithm, T-step heuristic dynamic programming with eligibility traces (T-sHDP(λ)), is proposed to tackle the optimal control problem for such a partially unknown system. First, the optimal control problem of interest is converted into an equivalent problem, namely, solving a Bellman equation. Then, T-sHDP(λ) is utilized to obtain an approximate solution of the Bellman equation, and a rigorous convergence analysis is conducted. Instead of the commonly used single-step update approach, T-sHDP(λ) stores finitely many past returns by introducing a trace-decay parameter λ, and then uses this knowledge to update the value function (VF) at multiple time instants synchronously, thereby achieving a higher convergence speed. For the implementation of T-sHDP(λ), a neural-network-based actor-critic architecture is applied to approximate the VF and the optimal control scheme. Finally, the feasibility of the algorithm is demonstrated by two illustrative simulation examples.
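The following is a minimal sketch, not taken from the paper, of how an eligibility-trace-weighted value update of the kind described above might look in a tabular setting: a single temporal-difference error updates the value estimates of several recently visited states at once. All names (`value`, `trajectory`, `gamma`, `lam`, `alpha`) and the tabular simplification are illustrative assumptions, not the authors' notation or implementation.

```python
import numpy as np

def td_lambda_update(value, trajectory, gamma=0.95, lam=0.8, alpha=0.1):
    """One pass over a finite trajectory of (state, reward, next_state) triples.

    An eligibility trace e accumulates discounted credit for recently visited
    states, so each one-step TD error updates the values of multiple past
    moments synchronously, rather than only the current state.
    """
    e = np.zeros_like(value)                # eligibility trace per state
    for s, r, s_next in trajectory:
        delta = r + gamma * value[s_next] - value[s]  # one-step TD error
        e[s] += 1.0                         # mark current state as eligible
        value += alpha * delta * e          # update all eligible states at once
        e *= gamma * lam                    # decay traces by gamma * lambda
    return value

# Example: a toy 5-state chain with a single reward at state 2
V = np.zeros(5)
traj = [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, 3), (3, 0.0, 4)]
V = td_lambda_update(V, traj)
```

In the article itself, the tabular value array would be replaced by a critic neural network, with an actor network approximating the control law, per the actor-critic architecture mentioned above.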