Abstract

This article develops an adaptive optimal control strategy for a class of discrete-time nonlinear Markov jump systems (DTNMJSs) using Takagi–Sugeno fuzzy models and reinforcement learning (RL). First, the original nonlinear system is represented by a fuzzy approximation, so that the underlying optimal control problem becomes equivalent to designing fuzzy controllers for linear fuzzy systems with Markov jumping parameters. We then derive the fuzzy coupled algebraic Riccati equations for the resulting fuzzy-based discrete-time linear Markov jump systems using Hamiltonian–Bellman methods. Next, an online fuzzy optimization algorithm for DTNMJSs is presented, together with a proof of its equivalence. A fully model-free off-policy fuzzy RL algorithm with proven convergence is then derived for DTNMJSs, requiring no knowledge of the system dynamics or transition probabilities. Finally, two simulation examples, one on a single-link robotic arm and one on a half-car active suspension, verify the effectiveness and good performance of the proposed approach.
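To illustrate the kind of coupled design problem the abstract refers to (this is a generic sketch, not the paper's fuzzy RL algorithm), the snippet below solves the coupled algebraic Riccati equations of a plain discrete-time Markov jump linear system by value iteration. All system matrices, the transition matrix `T`, and the function name `coupled_care_vi` are hypothetical toy choices; the fuzzy and model-free aspects of the paper are not reproduced here.

```python
import numpy as np

def coupled_care_vi(A, B, Q, R, T, iters=500, tol=1e-10):
    """Value iteration for the coupled Riccati equations of
    x_{k+1} = A_i x_k + B_i u_k, mode i switching with row-stochastic T:
        P_i = Q_i + A_i' E_i(P) A_i
              - A_i' E_i(P) B_i (R_i + B_i' E_i(P) B_i)^{-1} B_i' E_i(P) A_i,
    where E_i(P) = sum_j T[i, j] P_j is the mode-coupling average."""
    m, n = len(A), A[0].shape[0]
    P = [np.zeros((n, n)) for _ in range(m)]
    for _ in range(iters):
        EP = [sum(T[i, j] * P[j] for j in range(m)) for i in range(m)]
        P_new = []
        for i in range(m):
            S = R[i] + B[i].T @ EP[i] @ B[i]
            K = np.linalg.solve(S, B[i].T @ EP[i] @ A[i])  # mode-i gain
            P_new.append(Q[i] + A[i].T @ EP[i] @ A[i]
                         - A[i].T @ EP[i] @ B[i] @ K)
        done = max(np.linalg.norm(P_new[i] - P[i]) for i in range(m)) < tol
        P = P_new
        if done:
            break
    # Recover the mode-dependent feedback gains u_k = -K_i x_k.
    EP = [sum(T[i, j] * P[j] for j in range(m)) for i in range(m)]
    K = [np.linalg.solve(R[i] + B[i].T @ EP[i] @ B[i],
                         B[i].T @ EP[i] @ A[i]) for i in range(m)]
    return P, K

# Toy two-mode scalar example: mode 0 stable, mode 1 unstable.
A = [np.array([[0.9]]), np.array([[1.1]])]
B = [np.array([[1.0]]), np.array([[1.0]])]
Q = [np.eye(1), np.eye(1)]
R = [np.eye(1), np.eye(1)]
T = np.array([[0.8, 0.2], [0.3, 0.7]])
P, K = coupled_care_vi(A, B, Q, R, T)
```

The coupling term `E_i(P)` is what distinguishes this from solving one Riccati equation per mode: each mode's cost-to-go depends on the others through the transition probabilities, which is why the equations must be iterated jointly.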
