An optimized control method is developed for a class of high-order nonlinear dynamic systems subject to a controller dead-zone. A dead-zone means that the controller produces zero output within a certain input range, which inevitably degrades system performance. To eliminate the effect of the dead zone on the optimized control, adaptive dead-zone inverse and reinforcement learning (RL) techniques are combined. The main idea is to obtain the desired optimized control via RL, feed it into the dead-zone inverse function, and design an adaptive algorithm to estimate the unknown parameters of the dead-zone inverse function, so that the dead-zone output delivers a competent control to the system. However, most existing RL algorithms are difficult to apply within dead-zone inverse methods because of their complexity. The proposed RL scheme greatly simplifies the algorithm, since its training rules are derived from the negative gradient of a simple positive function obtained from the partial derivative of the Hamilton–Jacobi–Bellman (HJB) equation. Meanwhile, the proposed dead-zone inverse method also requires fewer adaptive parameters. Finally, the proposed control is validated through theoretical proofs and simulation examples.
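As a rough illustration of the combination described above, the sketch below pairs a standard dead-zone inverse parameterization with a critic trained by gradient descent on the squared HJB residual, i.e., a simple positive function built from the HJB equation. The scalar plant, the polynomial critic basis, the dead-zone model (known slope, unknown breakpoints), the adaptive law, and all gains are assumptions made for this example only; they are not the paper's formulation, which treats high-order systems and establishes stability guarantees.

```python
import numpy as np

def dead_zone(v, m=1.0, br=0.5, bl=0.5):
    """Actuator dead-zone: zero output for v in (-bl, br), slope m outside."""
    if v >= br:
        return m * (v - br)
    if v <= -bl:
        return m * (v + bl)
    return 0.0

def dead_zone_inverse(u_d, m_hat, br_hat, bl_hat):
    """Map a desired control u_d to a controller input v using parameter estimates."""
    if u_d > 0:
        return (u_d + m_hat * br_hat) / m_hat
    if u_d < 0:
        return (u_d - m_hat * bl_hat) / m_hat
    return 0.0

def simulate(T=20.0, dt=1e-3):
    x = 0.8                                # plant state (first-order plant for illustration)
    w = np.zeros(2)                        # critic weights for the basis [x^2, x^4]
    m_hat, br_hat, bl_hat = 1.0, 0.2, 0.2  # dead-zone estimates (slope assumed known here)
    k_w, k_theta = 5.0, 2.0                # learning and adaptation gains (assumed)
    Q, R = 1.0, 1.0                        # weights of the cost rate Q*x^2 + R*u^2
    for _ in range(int(T / dt)):
        phi_grad = np.array([2 * x, 4 * x ** 3])   # gradient of the critic basis
        dV = w @ phi_grad                          # approximate value-function gradient
        u_d = -0.5 / R * dV                        # desired optimized control from the critic
        v = dead_zone_inverse(u_d, m_hat, br_hat, bl_hat)
        u = dead_zone(v)                           # control actually delivered by the actuator
        x_dot = -x + 0.5 * x ** 3 + u              # illustrative nonlinear plant
        # Squared HJB residual 0.5*e^2 serves as the simple positive function;
        # the critic is trained along its (semi-)gradient, loosely mirroring the
        # negative-gradient training rule described in the abstract.
        e = Q * x ** 2 + R * u_d ** 2 + dV * x_dot
        w -= dt * k_w * e * phi_grad * x_dot
        # Illustrative adaptive law for the breakpoint estimates; it assumes the
        # dead-zone output is measurable, a simplification not taken from the paper.
        mismatch = u - u_d
        if u_d > 0:
            br_hat -= dt * k_theta * mismatch
        elif u_d < 0:
            bl_hat += dt * k_theta * mismatch
        x += dt * x_dot
    return x, w, (br_hat, bl_hat)

if __name__ == "__main__":
    x_final, w_final, (br_hat, bl_hat) = simulate()
    print("final state:", x_final)
    print("critic weights:", w_final)
    print("estimated breakpoints:", br_hat, bl_hat)
```

In this sketch the critic supplies the desired optimized control, the dead-zone inverse compensates the actuator with only two adapted breakpoint parameters, and both updates are plain gradient steps, which is the simplification the abstract emphasizes.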