Abstract

This paper proposes a novel recurrent interval type-2 TSK fuzzy neural network (RIT2-TSK-FNN) controller based on a reinforcement learning scheme that improves the performance of nonlinear systems while using fewer rules. The parameters of the proposed RIT2-TSK-FNN controller are learned online using the reinforcement actor–critic method, so the controller's performance improves over time. The controller learns from its own mistakes through a reward and punishment signal from the external environment, which drives the RIT2-TSK-FNN parameters toward convergence. To keep the rule base small, structure learning is performed so that the RIT2-TSK-FNN rules are generated online by type-2 fuzzy clustering. The online adaptation of the controller parameters is carried out with the Levenberg–Marquardt method using adaptive learning rates, and stability is analyzed via the Lyapunov theorem. The results show that the proposed RIT2-TSK-FNN controller with the reinforcement actor–critic technique outperforms the same controller without the actor–critic method under identical conditions. The controller is applied to a nonlinear mathematical system and to an industrial process, a heat exchanger, to demonstrate the robustness of the proposed structure.
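By way of illustration only (the abstract does not give the exact update laws), a generic temporal-difference actor–critic step of the kind such a scheme builds on might look like the sketch below in Python. The variable names, the linear critic, the quadratic-free reward handling, and the learning rates are all assumptions for the sketch, not the paper's formulation.

import numpy as np

# Illustrative actor-critic sketch; not the paper's exact algorithm.
# `actor_params` stands in for the RIT2-TSK-FNN consequent parameters,
# `critic_w` for a linear critic over a state feature vector phi(x).
def td_actor_critic_step(actor_params, critic_w, phi, phi_next,
                         reward, grad_u, alpha_a=0.01, alpha_c=0.05,
                         gamma=0.95):
    """One temporal-difference actor-critic update.

    phi, phi_next : feature vectors of the current and next plant state
    reward        : scalar reinforcement (reward/punishment) signal
    grad_u        : gradient of the control output w.r.t. actor_params
    """
    # Critic: TD error from value estimates V(x) = w^T phi(x)
    td_error = reward + gamma * critic_w @ phi_next - critic_w @ phi
    # Critic update: move the value estimate toward the TD target
    critic_w = critic_w + alpha_c * td_error * phi
    # Actor update: reinforce parameters in proportion to the TD error
    actor_params = actor_params + alpha_a * td_error * grad_u
    return actor_params, critic_w

In a scheme of this kind, the critic's TD error plays the role of the reward/punishment signal that reinforces the actor; the paper replaces the plain gradient step on the actor with Levenberg–Marquardt updates and adaptive learning rates applied to the RIT2-TSK-FNN parameters.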
