Abstract

For industrial processes subject to external disturbances and actuator failures, this paper proposes a novel model-free min-max fault-tolerant control scheme based on off-policy reinforcement learning to solve the H∞ fault-tolerant tracking control problem. An augmented model equivalent to the original system is constructed, whose state consists of the state increment and the tracking error of the original system. By establishing a performance index function, the original H∞ fault-tolerant tracking problem is transformed into a linear quadratic zero-sum game problem, and the corresponding Game Algebraic Riccati Equation (GARE) is derived. A Q-function is then introduced and an off-policy reinforcement learning algorithm is designed. Unlike traditional model-based fault-tolerant control methods, the proposed algorithm requires no knowledge of the system dynamics and solves the GARE by learning from measured system trajectory data. In addition, it is proved that the probing noise added to satisfy the persistent excitation condition introduces no bias. A simulation example of an injection molding process verifies the effectiveness of the proposed algorithm.
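To make the abstract's pipeline concrete, the following is a minimal NumPy sketch of the core idea: learning the quadratic Q-function kernel of a linear quadratic zero-sum game purely from off-policy trajectory data (random probing inputs), extracting the value kernel P via a Schur complement, and checking that the data-driven iteration matches model-based GARE value iteration. The plant matrices A, B, D, the weights, and γ are illustrative assumptions, not the paper's injection molding model, and this plain value iteration is only a stand-in for the paper's specific off-policy algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stable plant (assumed for illustration only):
# x_{k+1} = A x_k + B u_k + D w_k, with control u and disturbance w.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
D = np.array([[0.1], [0.0]])
n, m, q = 2, 1, 1
Qx, R, gamma = np.eye(n), np.eye(m), 5.0  # stage weights and H-infinity level

def quad_basis(z):
    """Quadratic monomials of z parameterizing a symmetric kernel H."""
    N = len(z)
    return np.array([z[i] * z[j] * (1.0 if i == j else 2.0)
                     for i in range(N) for j in range(i, N)])

def unvec_sym(theta, N):
    """Rebuild the symmetric kernel H from its upper-triangular entries."""
    H = np.zeros((N, N))
    idx = 0
    for i in range(N):
        for j in range(i, N):
            H[i, j] = H[j, i] = theta[idx]
            idx += 1
    return H

# Off-policy data: random probing states/inputs; only measured transitions
# are used, never the matrices A, B, D themselves.
K = 400
X = rng.standard_normal((K, n))
U = rng.standard_normal((K, m))
W = rng.standard_normal((K, q))
Xn = X @ A.T + U @ B.T + W @ D.T  # measured next states x_{k+1}

Z = np.hstack([X, U, W])
Phi = np.array([quad_basis(z) for z in Z])
N = n + m + q

# Data-driven Q-learning value iteration: least-squares fit of the Q-kernel
# H from the Bellman targets, then P = Schur complement of H.
P = np.zeros((n, n))
for _ in range(200):
    cost = (np.einsum('ki,ij,kj->k', X, Qx, X)
            + np.einsum('ki,ij,kj->k', U, R, U)
            - gamma**2 * np.einsum('ki,ki->k', W, W))
    y = cost + np.einsum('ki,ij,kj->k', Xn, P, Xn)
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    H = unvec_sym(theta, N)
    Hxx, Hxz, Hzz = H[:n, :n], H[:n, n:], H[n:, n:]
    P = Hxx - Hxz @ np.linalg.solve(Hzz, Hxz.T)

# Model-based GARE value iteration for comparison (uses A, B, D).
Pm = np.zeros((n, n))
G = np.hstack([B, D])
S = np.block([[R, np.zeros((m, q))],
              [np.zeros((q, m)), -gamma**2 * np.eye(q)]])
for _ in range(200):
    Hzz = G.T @ Pm @ G + S
    Hxz = A.T @ Pm @ G
    Pm = Qx + A.T @ Pm @ A - Hxz @ np.linalg.solve(Hzz, Hxz.T)
```

Because the stage cost and transitions are exactly quadratic in (x, u, w), the least-squares fit recovers the true Q-kernel once the probing data are rich enough (the persistent excitation condition mentioned in the abstract), so the model-free iterate P should coincide with the model-based GARE solution Pm.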
