Abstract

In this study, an adaptive interval type-2 Takagi-Sugeno-Kang fuzzy logic controller based on reinforcement learning (AIT2-TSK-FLC-RL) is proposed. The proposed controller consists of an actor, a critic, and a reward signal. The actor is represented by the IT2-TSK-FLC, in which the antecedents and the consequents are interval type-2 fuzzy sets (IT2FSs) and type-1 fuzzy sets (T1FSs), respectively, a structure referred to as A2-C1. The critic is represented by a neural network that approximates the optimal guaranteed cost in the control design, ensuring system stability for all admissible uncertainties and noise. The use of a reward signal to formalize the idea of a goal is one of the most distinctive features of RL; thus, the proposed controller evolves over time through an online learning algorithm. The parameters of the proposed controller are learned online based on the Lyapunov theorem to guarantee stability, overcome the shortcomings of gradient descent, such as local minima and instability, and determine the learning rate of the IT2-TSK-FLC controller. Furthermore, the stability of the critic is discussed to determine the optimal learning rate. The proposed controller is applied to uncertain nonlinear systems to demonstrate its robustness in reducing the effect of system uncertainties and external disturbances, and it is compared to other controllers.
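To make the A2-C1 structure concrete, the following is a minimal sketch of one interval type-2 TSK inference step: the antecedents are Gaussian membership functions with an uncertain mean (an IT2FS), the consequents are crisp linear TSK functions, and the interval firing strengths are collapsed with a simplified Nie-Tan style type reduction. The rule base, parameter values, and choice of type reduction are illustrative assumptions, not the exact design of the paper.

```python
import math

def it2_gaussian(x, m1, m2, sigma):
    """IT2 Gaussian MF with uncertain mean in [m1, m2].

    Returns the (lower, upper) membership grades of input x.
    """
    g = lambda m: math.exp(-0.5 * ((x - m) / sigma) ** 2)
    # Upper MF: 1 inside the mean interval, nearest-mean Gaussian outside.
    if x < m1:
        upper = g(m1)
    elif x > m2:
        upper = g(m2)
    else:
        upper = 1.0
    # Lower MF: the smaller of the two boundary Gaussians.
    lower = min(g(m1), g(m2))
    return lower, upper

def it2_tsk_output(x, rules):
    """One inference step; rules are (m1, m2, sigma, a, b), consequent y = a*x + b."""
    num = den = 0.0
    for m1, m2, sigma, a, b in rules:
        lo, up = it2_gaussian(x, m1, m2, sigma)
        f = 0.5 * (lo + up)  # simplified (Nie-Tan style) type reduction
        num += f * (a * x + b)
        den += f
    return num / den

# Illustrative three-rule base over a normalized error input.
rules = [(-1.2, -0.8, 0.5, -1.0, 0.0),   # "negative" region
         (-0.2,  0.2, 0.5,  0.0, 0.0),   # "zero" region
         ( 0.8,  1.2, 0.5,  1.0, 0.0)]   # "positive" region
u = it2_tsk_output(0.5, rules)
```

In the full controller, the consequent parameters (here `a`, `b`) would be the quantities updated online by the Lyapunov-based learning law, with the critic network shaping the update through the approximated guaranteed cost.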
