Abstract

For industrial processes subject to external disturbances and actuator failures, this paper proposes a novel model-free min-max fault-tolerant control method based on off-policy reinforcement learning to solve the H∞ fault-tolerant tracking control problem. An augmented model equivalent to the original system is constructed, whose state consists of the state increment and the tracking error of the original system. By establishing a performance index function, the original H∞ fault-tolerant tracking problem is transformed into a linear quadratic zero-sum game problem, and the corresponding Game Algebraic Riccati Equation (GARE) is derived. A Q-function is then introduced and an off-policy reinforcement learning algorithm is designed. Unlike traditional model-based fault-tolerant control methods, the proposed algorithm requires no knowledge of the system dynamics; it solves the GARE by learning from measured data along the system trajectories. In addition, it is proved that the probing noise added to satisfy the persistent excitation condition does not introduce bias. A simulation example of an injection molding process verifies the effectiveness of the proposed algorithm.
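For context, in the standard discrete-time formulation of such a linear quadratic zero-sum game (a generic sketch, not notation taken from the paper: $z_k$ denotes the augmented state, $u_k$ the control, $w_k$ the disturbance, and $Q$, $R$, $\gamma$ the weighting terms), the performance index and the associated GARE typically take the form

$$J = \sum_{k=0}^{\infty} \left( z_k^\top Q z_k + u_k^\top R u_k - \gamma^2 w_k^\top w_k \right),$$

$$P = A^\top P A + Q - \begin{bmatrix} A^\top P B & A^\top P D \end{bmatrix} \begin{bmatrix} R + B^\top P B & B^\top P D \\ D^\top P B & D^\top P D - \gamma^2 I \end{bmatrix}^{-1} \begin{bmatrix} B^\top P A \\ D^\top P A \end{bmatrix},$$

where $z_{k+1} = A z_k + B u_k + D w_k$ is the augmented dynamics. An off-policy Q-learning scheme of the kind described above estimates the kernel of the associated Q-function directly from trajectory data, so the matrices $A$, $B$, $D$ never need to be identified.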
