Abstract
This paper presents an adaptive fault-tolerant control (FTC) system based on reinforcement learning with an event-triggered mechanism. The event-triggered mechanism is established through a justifiable sliding surface and triggering function, without the need for any fault detection scheme or observer. Learning laws are derived to ensure the convergence of the internal signals and the tracking error, and an actor–critic architecture is designed accordingly. To validate the proposed scheme, an experimental system is constructed and tested under five typical actuator faults. The results show satisfactory closed-loop performance and a reduction of approximately 25% in data transmission in both the fault-free and faulty cases.
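As a rough illustration of the idea (not the paper's actual design), the sketch below simulates a first-order plant under an event-triggered law: a sliding-surface value is monitored every step, but a new control signal is computed and "transmitted" only when a triggering function of that value exceeds a threshold. All plant parameters, gains, and the triggering threshold here are illustrative assumptions.

```python
# Illustrative sketch (hypothetical parameters, not the paper's design):
# event-triggered control with a sliding surface s = de + lam*e for a
# first-order plant x_dot = -a*x + u tracking a constant reference.

a, lam, k = 1.0, 2.0, 0.2   # plant pole, surface gain, feedback gain (assumed)
dt, T = 0.01, 10.0          # integration step and horizon
ref = 1.0                   # constant reference
thresh = 0.05               # triggering threshold (assumed)

x, u = 0.0, 0.0
prev_e = ref - x
s_last = None               # sliding value at the last transmission
updates, steps = 0, 0

t = 0.0
while t < T:
    e = ref - x
    de = (e - prev_e) / dt
    s = de + lam * e        # sliding-surface value
    # Event-triggered rule: recompute/transmit the control only when the
    # triggering function (here |s - s_last|) exceeds the threshold;
    # otherwise the previously transmitted control is held.
    if s_last is None or abs(s - s_last) > thresh:
        u = a * ref + k * s # feedforward plus sliding feedback (assumed law)
        s_last = s
        updates += 1
    x += dt * (-a * x + u)  # Euler integration of the plant
    prev_e = e
    steps += 1
    t += dt

print(updates, steps)       # far fewer transmissions than simulation steps
```

Because the control is only recomputed at triggering instants, `updates` ends well below `steps` while the tracking error still settles near zero, which is the kind of transmission saving the abstract reports.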