Abstract

This paper combines a Takagi–Sugeno–Kang (TSK) fuzzy controller with reinforcement learning to reduce the effect of external disturbances and system uncertainties. A neural network implements the critic; its parameters are updated using a Lyapunov criterion to avoid local-minima problems. The small number of learning parameters allows the critic network to tune the controller online based on the reward function, while the actor parameters are updated from the change in the reward-function error and a control function. The proposed system is simulated in MATLAB/Simulink under five conditions: no load, sudden load-torque changes, system-parameter uncertainty, sudden phase interruption, and noise. The proposed method achieves a fast speed response, with a tracking error of ±2.23% when tracking the reference trajectory and ±2.5% under a constant load-torque disturbance. The results are compared against two benchmark controllers to verify the effectiveness of the proposed controller. Simulation results show that the proposed method provides an adaptive and precise speed response, making it suitable for non-linear and uncertain applications.
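The actor–critic structure summarized above can be illustrated with a minimal sketch: a zero-order TSK fuzzy controller acts as the actor, and a temporal-difference critic tunes its consequent parameters online. Everything here is a hedged illustration, not the paper's actual design: the rule centers, learning rates, Gaussian membership widths, and the linear critic (standing in for the paper's neural-network critic with its Lyapunov-based update) are all assumed for the example.

```python
import numpy as np

# --- Zero-order TSK fuzzy actor (illustrative) ----------------------
# Each rule: IF speed-error is near c_i THEN u = theta_i.
# The crisp output is the firing-strength-weighted average of the rule
# consequents, so d(u)/d(theta_i) is simply the normalized firing strength.
centers = np.array([-1.0, 0.0, 1.0])   # hypothetical rule centers
sigma = 0.5                            # hypothetical membership width
theta = np.array([-0.5, 0.0, 0.5])     # hypothetical consequent parameters

def firing(e):
    """Normalized Gaussian firing strengths for speed-error e."""
    w = np.exp(-((e - centers) ** 2) / (2 * sigma ** 2))
    return w / w.sum()

def act(e):
    """Crisp TSK control output: weighted average of consequents."""
    return float(firing(e) @ theta)

# --- Critic: linear value estimate over the same fuzzy features -----
# (a simple stand-in for the paper's neural-network critic)
v = np.zeros(len(centers))
gamma, lr_c, lr_a = 0.95, 0.1, 0.05    # assumed discount and learning rates

def td_step(e, e_next, reward):
    """One temporal-difference step; the TD error drives the online
    tuning of both the critic weights and the actor consequents."""
    global theta, v
    phi, phi_next = firing(e), firing(e_next)
    delta = reward + gamma * (phi_next @ v) - (phi @ v)
    v = v + lr_c * delta * phi           # critic update
    theta = theta + lr_a * delta * phi   # actor consequents nudged by TD error
    return delta

# One illustrative step: the tracking error shrank from 0.8 to 0.6,
# with reward defined here as the negative squared error.
delta = td_step(0.8, 0.6, -0.6 ** 2)
```

In a full closed-loop simulation this step would run at every sampling instant, with the reward reflecting the speed-tracking error, so the fuzzy consequents adapt continuously to disturbances and parameter changes.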
