In this paper, an adaptive Takagi–Sugeno (T–S) fuzzy controller based on reinforcement learning is proposed for controlling nonlinear dynamical systems. The parameters of the T–S fuzzy system are learned through actor-critic reinforcement learning. This online learning algorithm improves the controller's performance over time: the controller learns from its own errors through a reinforcement signal received from the external environment and adjusts the T–S fuzzy system parameters toward convergence. The parameter update laws are derived using the Lyapunov stability criterion. Under the same conditions, the proposed controller learns faster than a T–S fuzzy controller whose parameters are tuned by gradient descent. Moreover, it is able to handle load changes and system uncertainties. The controller is tested on two mathematical models and is also applied experimentally to control a direct current (DC) shunt machine. The results indicate that the proposed controller achieves good performance compared with other controllers.
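The actor-critic tuning of T–S fuzzy parameters described above can be illustrated with a minimal sketch. This is not the paper's algorithm: the plant model, Gaussian membership functions, learning rates, exploration-noise-based actor update, and all names are assumptions introduced for illustration only, and the Lyapunov-based update laws of the paper are replaced here by a simple temporal-difference scheme.

```python
import numpy as np

# Illustrative sketch (assumed, not from the paper): actor-critic tuning of the
# consequent gains of a zero-order T-S fuzzy controller on a toy nonlinear plant.

rng = np.random.default_rng(0)

# Gaussian membership centers for the tracking error input of each fuzzy rule
centers = np.array([-1.0, 0.0, 1.0])
sigma = 0.5

def firing_strengths(e):
    """Normalized rule firing strengths for input e (sum to 1)."""
    w = np.exp(-((e - centers) ** 2) / (2 * sigma ** 2))
    return w / w.sum()

theta_actor = np.zeros(3)   # T-S consequent gains (actor parameters)
v_critic = np.zeros(3)      # critic weights on the same fuzzy basis

alpha_a, alpha_c, gamma = 0.05, 0.1, 0.9   # assumed learning rates / discount

def plant(x, u):
    """Toy nonlinear first-order plant, used only for illustration."""
    return 0.9 * x + 0.1 * np.tanh(x) + 0.1 * u

x, ref = 1.0, 0.0
for step in range(300):
    e = ref - x
    phi = firing_strengths(e)
    noise = 0.1 * rng.standard_normal()            # exploration perturbation
    u = float(np.clip(theta_actor @ phi + noise, -2.0, 2.0))  # T-S control output
    x_next = plant(x, u)
    e_next = ref - x_next
    r = -e_next ** 2                               # reinforcement signal: penalize error
    # temporal-difference error from the critic's value estimates
    delta = r + gamma * float(v_critic @ firing_strengths(e_next)) - float(v_critic @ phi)
    v_critic += alpha_c * delta * phi              # critic update
    theta_actor += alpha_a * delta * noise * phi   # actor update along the explored direction
    x = x_next

print(f"final tracking error magnitude: {abs(ref - x):.3f}")
```

The key idea shown is the division of labor: the critic scores each state through the same fuzzy basis, and its temporal-difference error reinforces the actor's consequent gains in the direction of exploration moves that improved the outcome.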