ABSTRACT
In this article, an optimized inverse dead-zone control scheme using reinforcement learning (RL) is developed for a class of nonlinear dynamic systems. Dead-zone nonlinearities occur frequently in nonlinear control systems; they can degrade control performance and even destabilize the system. Hence, it is essential to account for the dead-zone effect when designing a control strategy. The basic idea of the proposed optimized inverse dead-zone control is to obtain an optimized control signal as the input and to design an adaptive algorithm that estimates the unknown parameters of the inverse dead-zone function, so that the dead-zone input available for system control can be derived. Compared with traditional methods, the proposed dead-zone inverse method requires fewer adaptive parameters, and the RL scheme under the identifier-critic-actor architecture uses a simplified algorithm. Finally, theoretical and simulation results demonstrate the feasibility of the proposed method.
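To make the inverse dead-zone idea concrete, the following is a minimal sketch of a standard dead-zone model and its right inverse. The slope and breakpoint parameters (`m_r`, `m_l`, `b_r`, `b_l`) and the piecewise parametrization are common textbook assumptions, not taken from this article; in the proposed scheme these parameters would be unknown and estimated adaptively rather than given.

```python
def dead_zone(v, m_r=2.0, m_l=1.5, b_r=1.0, b_l=-0.8):
    """Actuator dead-zone: output is zero inside [b_l, b_r],
    linear with slopes m_r / m_l outside (assumed parametrization)."""
    if v >= b_r:
        return m_r * (v - b_r)
    if v <= b_l:
        return m_l * (v - b_l)
    return 0.0

def inverse_dead_zone(u, m_r=2.0, m_l=1.5, b_r=1.0, b_l=-0.8):
    """Right inverse: given a desired control effort u, compute the
    actuator command v such that dead_zone(v) == u."""
    if u > 0.0:
        return u / m_r + b_r
    if u < 0.0:
        return u / m_l + b_l
    return 0.0  # any v in [b_l, b_r] works; pick 0

# Composing the inverse with the dead-zone recovers the desired input:
for u_desired in (5.0, -3.0, 0.0):
    v = inverse_dead_zone(u_desired)
    print(u_desired, dead_zone(v))
```

With exact parameters the composition `dead_zone(inverse_dead_zone(u))` returns `u`; the control problem addressed in the article arises precisely because these parameters are unknown and must be estimated online.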