Abstract
In this article, a fuzzy logic system (FLS)-based adaptive optimized backstepping control is developed by employing a reinforcement learning (RL) strategy for a class of nonlinear strict-feedback systems with unmeasured states. To make the virtual and actual controls the optimized solutions of their corresponding subsystems, an RL observer-critic-actor architecture based on FLS approximations is constructed in every backstepping step, where the observer estimates the unmeasurable states, while the critic and actor evaluate the control performance and execute the control action, respectively. The proposed optimized control offers two advantages. On the one hand, the state observer method avoids requiring the design constants to make the observer's characteristic polynomial Hurwitz, a condition universally demanded in existing observer methods. On the other hand, the RL algorithm is significantly simpler, because the critic and actor training laws are derived from the negative gradient of a simple positive function produced from the partial derivative of the Hamilton–Jacobi–Bellman (HJB) equation, rather than from the square of the approximated HJB equation. Therefore, this optimized scheme can be applied more easily and extended more widely. Finally, both theoretical analysis and simulation demonstrate that the desired objective is fulfilled.
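A minimal sketch of the distinction between the two training approaches follows, using generic symbols ($H$, $e$, $P$, $\hat{W}_c$, $\hat{W}_a$, $\gamma_c$, $\gamma_a$) that are illustrative assumptions rather than the paper's exact notation. Conventional adaptive-critic designs train the critic weight estimate $\hat{W}_c$ by gradient descent on the squared HJB residual,
\[
  \dot{\hat{W}}_c = -\gamma_c \frac{\partial}{\partial \hat{W}_c}\Bigl(\tfrac{1}{2}e^{2}\Bigr) = -\gamma_c\, e\, \frac{\partial e}{\partial \hat{W}_c}, \qquad e = H\bigl(z, \hat{u}, \nabla \hat{V}\bigr),
\]
which is algorithmically involved because the residual $e$ couples all approximation terms. The scheme described above instead forms a simple positive function $P(\hat{W}_c) \ge 0$ from the partial derivative of the HJB equation (e.g., from the optimality condition $\partial H / \partial u = 0$) and updates along its negative gradient,
\[
  \dot{\hat{W}}_c = -\gamma_c \frac{\partial P}{\partial \hat{W}_c}, \qquad \dot{\hat{W}}_a = -\gamma_a \frac{\partial P_a}{\partial \hat{W}_a},
\]
yielding markedly simpler critic and actor update laws in each backstepping step.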