Abstract
This article investigates the optimal control problem (OCP) for a class of discrete-time nonlinear systems with state constraints. First, to overcome the challenge posed by the constraints, the original constrained OCP is transformed into an unconstrained OCP via a system transformation technique. Second, a new cost function is designed to mitigate the effect of the system transformation on the optimality of the original system. Third, a novel off-policy deterministic approximate dynamic programming (ADP) scheme is developed to obtain a near-optimal solution to the transformed OCP. Compared with existing off-policy deterministic ADP schemes, the developed scheme relaxes the requirements on the learning data and, from the perspective of neural-network training, saves computing resources. Fourth, taking approximation errors into account, the convergence and stability of the developed ADP scheme are analyzed. Finally, the developed ADP scheme with the designed cost function is tested on two numerical cases, and simulation results confirm its effectiveness.
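The system-transformation step mentioned above can be illustrated with a generic example. The sketch below (a common way to remove a box state constraint, not the paper's specific transformation, whose details are beyond the abstract) maps a constrained state through an invertible function so that the transformed state is unconstrained; the bound `X_MAX` and both helper functions are hypothetical:

```python
import numpy as np

# Illustrative only: a box constraint |x| < X_MAX can be removed by an
# invertible change of variables. With s = artanh(x / X_MAX), the new state
# s ranges over all reals, and the inverse map x = X_MAX * tanh(s)
# satisfies the bound for every real s. The value X_MAX and the function
# names are assumptions for this toy example.

X_MAX = 2.0  # assumed state bound

def to_unconstrained(x):
    """Map a constrained state x in (-X_MAX, X_MAX) to an unconstrained s."""
    return np.arctanh(x / X_MAX)

def to_constrained(s):
    """Inverse map: any real s yields a state with |x| < X_MAX."""
    return X_MAX * np.tanh(s)

# Round trip recovers the original state
x = 1.5
s = to_unconstrained(x)
assert abs(to_constrained(s) - x) < 1e-12

# The transformed state may take any real value; the constraint still holds
for s in (-5.0, -1.0, 0.0, 1.0, 5.0):
    assert abs(to_constrained(s)) < X_MAX
```

An unconstrained OCP can then be posed in the transformed state `s`, which is the general motivation for such transformations; the paper's redesigned cost function addresses the optimality gap this introduces.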
From: International Journal of Robust and Nonlinear Control