Abstract

A fuzzy-model-based approach is developed to investigate reinforcement learning-based optimization for nonlinear Markov jump singularly perturbed systems. As a first attempt, an offline parallel iteration learning algorithm is presented to solve the coupled algebraic Riccati equations with singular perturbation and jumping parameters. Furthermore, based on the integral reinforcement learning approach, a novel online parallel learning algorithm is proposed that employs the slow and fast sampled data simultaneously, thereby avoiding the impact of stochastic jumping and ill-conditioned numerical problems. The convergence of both learning algorithms is proved. Finally, a tunnel diode circuit model is presented to demonstrate the efficacy of the proposed methods.
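To give a concrete flavor of the kind of iterative Riccati-equation solver the abstract refers to, the following is a minimal sketch of a standard Kleinman-style policy iteration for a single, unconstrained continuous-time algebraic Riccati equation. It is not the paper's algorithm: the offline method described above handles coupled Riccati equations with jumping and singular-perturbation parameters in parallel, and the online method works from sampled data. All matrices, function names, and tolerances below are illustrative assumptions.

```python
# Minimal sketch (assumption: NOT the paper's method) of Kleinman-style policy
# iteration for one standard continuous-time algebraic Riccati equation.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def kleinman_iteration(A, B, Q, R, K0, tol=1e-9, max_iter=50):
    """Alternate policy evaluation and improvement until the gain converges.

    K0 must stabilize (A - B @ K0); all parameter values are illustrative.
    """
    K = K0
    for _ in range(max_iter):
        A_cl = A - B @ K
        # Policy evaluation: solve the Lyapunov equation
        #   A_cl^T P + P A_cl + Q + K^T R K = 0
        P = solve_continuous_lyapunov(A_cl.T, -(Q + K.T @ R @ K))
        # Policy improvement: K <- R^{-1} B^T P
        K_next = np.linalg.solve(R, B.T @ P)
        if np.linalg.norm(K_next - K) < tol:
            return P, K_next
        K = K_next
    return P, K

# Illustrative usage with an arbitrary 2x2 system (values chosen only so that
# A is Hurwitz, making K0 = 0 a valid stabilizing initial gain).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = np.zeros((1, 2))
P, K = kleinman_iteration(A, B, Q, R, K0)
```

In the paper's setting, one such evaluation/improvement loop would be carried out in parallel for each Markov mode and for the slow and fast subsystems; the sketch above only shows the basic single-equation iteration.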
