Abstract
This paper investigates the finite-horizon optimal control problem for continuous-time uncertain nonlinear systems, where the uncertainty refers to partially unknown system dynamics. Unlike the infinite-horizon case, the finite-horizon problem requires solving a Hamilton–Jacobi–Bellman (HJB) equation that is time-varying and subject to a terminal boundary constraint, which poses a greater challenge; the partially unknown dynamics introduce additional difficulty. The main contribution of this paper is a cyclic fixed-finite-horizon-based reinforcement learning algorithm that approximately solves the time-varying HJB equation. The proposed algorithm consists of two phases: a data-collection phase over a fixed finite horizon and a parameter-update phase. A least-squares method links the two phases, and the optimal parameters are obtained by cycling between them. Finally, simulation results verify the effectiveness of the proposed cyclic fixed-finite-horizon-based reinforcement learning algorithm.
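To illustrate the two-phase structure described above, the following is a minimal sketch, not the paper's actual algorithm: for an assumed scalar linear system with a fixed policy, trajectory data are collected over a fixed finite horizon (phase 1), and value-function weights over a hypothetical basis in the state and time-to-go are then fitted by least squares (phase 2). All system parameters, the basis `features`, and the policy gain `k` are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only (assumed example system, not the paper's method).
rng = np.random.default_rng(0)
a, b = 0.5, 1.0          # scalar dynamics x' = a*x + b*u (assumed)
T, dt = 1.0, 0.01        # fixed finite horizon and integration step
N = int(T / dt)

def features(x, t):
    """Hypothetical basis for V(x, t): quadratic in x, polynomial in time-to-go."""
    tau = T - t
    return np.array([x * x, x * x * tau, x * x * tau * tau])

def rollout(k, x0):
    """Phase 1: collect states and empirical costs-to-go along one trajectory."""
    xs, costs = [], []
    x = x0
    for _ in range(N):
        u = -k * x                       # fixed linear policy (assumed)
        xs.append(x)
        costs.append((x * x + u * u) * dt)   # running cost x^2 + u^2
        x = x + (a * x + b * u) * dt     # Euler step of the dynamics
    # cost-to-go at each sample time (terminal cost taken as zero here)
    ctg = np.cumsum(costs[::-1])[::-1]
    return xs, ctg

def evaluate_policy(k, n_traj=20):
    """Phase 2: least-squares fit of the value weights from collected data."""
    Phi, y = [], []
    for _ in range(n_traj):
        xs, ctg = rollout(k, rng.standard_normal())
        for i, x in enumerate(xs):
            Phi.append(features(x, i * dt))
            y.append(ctg[i])
    w, *_ = np.linalg.lstsq(np.asarray(Phi), np.asarray(y), rcond=None)
    return w

# One cycle of collect-then-update; the paper repeats such cycles to convergence.
w = evaluate_policy(k=1.0)
```

In the full algorithm the two phases would be repeated cyclically, with the policy improved between cycles; the sketch shows only a single policy-evaluation pass.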