Abstract

In this article, a novel hybrid reinforcement Q-learning control method is proposed to solve the adaptive fuzzy H∞ control problem of discrete-time nonlinear Markov jump systems based on the Takagi-Sugeno fuzzy model. First, the core problem of adaptive fuzzy H∞ control is converted into solving a fuzzy game-coupled algebraic Riccati equation, which can hardly be solved directly by analytical methods. To address this, an offline parallel hybrid learning algorithm is first designed, which requires the system dynamics to be known a priori. An online parallel Q-learning hybrid learning algorithm is then developed. The main characteristics of the proposed online hybrid learning algorithm are threefold: 1) knowledge of the system dynamics is not required during the learning process; 2) compared with the policy iteration method, the requirement of an initial stabilizing control policy is removed; and 3) compared with the value iteration method, a faster convergence rate is obtained. Finally, a tunnel diode circuit model is provided to validate the effectiveness of the proposed learning algorithm.
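
To give a rough sense of the hybrid idea described above (value-iteration-style updates need no stabilizing initial policy, while policy-iteration-style updates converge faster once a stabilizing policy is available), the following minimal sketch applies the same principle to a plain discrete-time LQR problem. This is an illustrative analogue only, not the paper's fuzzy Markov jump H∞ Q-learning scheme, and the matrices A, B, Q_w, and R are hypothetical placeholders.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative hybrid value/policy iteration for a discrete-time LQR problem.
# Start from P = 0 (no stabilizing gain needed); use Riccati-style value
# backups until the greedy gain stabilizes the closed loop, then switch to
# policy-evaluation (Lyapunov) steps, which converge much faster.

A = np.array([[1.0, 0.1],
              [0.0, 1.05]])          # slightly unstable open-loop dynamics (placeholder)
B = np.array([[0.0],
              [0.1]])
Q_w = np.eye(2)                      # state weighting
R = np.eye(1)                        # input weighting

P = np.zeros((2, 2))                 # value-function matrix, initialized at zero
for k in range(500):
    # Greedy gain induced by the current value-function matrix
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    Acl = A - B @ K
    if np.max(np.abs(np.linalg.eigvals(Acl))) < 1.0:
        # Policy-iteration step: evaluate the current stabilizing gain by
        # solving P_new = Acl' P_new Acl + Q_w + K' R K
        P_new = solve_discrete_lyapunov(Acl.T, Q_w + K.T @ R @ K)
    else:
        # Value-iteration step: one Riccati backup, no stability assumption
        P_new = Q_w + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(
            R + B.T @ P @ B, B.T @ P @ A)
    if np.linalg.norm(P_new - P) < 1e-10:
        P = P_new
        break
    P = P_new

print("Converged P:\n", P)
print("Final gain K:\n", np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A))
```

The switch between the two update rules is what removes the initial-stabilizing-policy restriction of pure policy iteration while retaining its fast (Newton-type) convergence once stability is reached; the paper develops the analogous construction in a model-free Q-learning form for the fuzzy Markov jump H∞ setting.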
