Abstract

In this article, an adaptive output-feedback control scheme for a class of strict-feedback nonlinear systems is developed based on the optimized backstepping technique. Neural networks are utilized to approximate the unknown functions, while a state observer is designed to estimate the unmeasurable system states. Since the presented optimized control scheme requires training the adaptive parameters for reinforcement learning (RL), designing the control algorithm and deriving the adaptive update laws becomes more challenging. In general, optimal control is designed based on the solution of the Hamilton–Jacobi–Bellman (HJB) equation, but solving this equation analytically is very difficult because of its inherent nonlinearity and intractability; therefore, an RL strategy with an actor–critic architecture is employed. Based on Lyapunov stability theory, it is proved that all signals of the closed-loop system are semi-globally uniformly ultimately bounded. Finally, simulation results are provided to show the effectiveness of the designed control scheme.
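For context, a representative strict-feedback system and the associated HJB condition take the following generic forms; this is only an illustrative sketch, and the exact system model, cost functional, and notation used in the article may differ:

\[
\dot{x}_i = f_i(\bar{x}_i) + g_i(\bar{x}_i)\, x_{i+1}, \quad i = 1, \ldots, n-1, \qquad
\dot{x}_n = f_n(\bar{x}_n) + g_n(\bar{x}_n)\, u, \qquad
y = x_1,
\]

where \(\bar{x}_i = [x_1, \ldots, x_i]^{\mathrm{T}}\), the \(f_i\) are unknown smooth functions (approximated here by neural networks), and only the output \(y\) is available for feedback, which motivates the state observer. For an assumed infinite-horizon cost \(J(x) = \int_t^{\infty} \big( x^{\mathrm{T}} Q x + u^{\mathrm{T}} R u \big)\, \mathrm{d}\tau\), the optimal value function \(V^{*}\) satisfies the HJB equation

\[
0 = \min_{u} \Big[ x^{\mathrm{T}} Q x + u^{\mathrm{T}} R u + \big(\nabla V^{*}\big)^{\mathrm{T}} \big( f(x) + g(x)\, u \big) \Big],
\qquad
u^{*} = -\tfrac{1}{2} R^{-1} g^{\mathrm{T}}(x)\, \nabla V^{*}.
\]

Because \(V^{*}\) has no closed-form solution for general nonlinear dynamics, the actor–critic RL architecture approximates the value function (critic) and the control policy (actor) with adaptive neural networks whose update laws are derived within the backstepping design.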
