Abstract

Reinforcement learning (RL) can be used to obtain an approximate numerical solution to the Hamilton-Jacobi-Bellman (HJB) equation. Recent advances in the machine learning community enable the use of deep neural networks (DNNs) to accurately approximate high-dimensional nonlinear functions, such as those that arise in RL, without any domain knowledge. In the standard RL setting, both the system and cost structures are unknown, and the amount of data needed to obtain an accurate approximation can be impractically large. When these structures are known, however, they can be exploited to solve the HJB equation efficiently. Herein, model-based globalized dual heuristic programming (GDHP) is proposed, in which the HJB equation is separated into value, costate, and policy functions. The particular class of interest in this work is the finite-horizon optimal tracking control (FHOC) problem. Additional issues that arise in FHOC, such as time-varying functions, terminal constraints, and the delta-input formulation, are addressed. A DNN structure and training algorithm suitable for FHOC are presented, and a benchmark continuous-reactor example illustrates the proposed approach.
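The separation described above can be sketched as three independent function approximators: a value network V(x, k), a costate network approximating the value gradient dV/dx, and a policy network producing the control input. The following is a minimal, self-contained sketch of that structure, not the paper's implementation: the network sizes, the toy linear dynamics, the quadratic tracking cost, and the time-augmented input (state concatenated with normalized time, a common way to handle the time-varying functions of FHOC) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(sizes):
    # Small random weights for a tanh MLP (toy layer sizes, assumed).
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    # Forward pass: tanh hidden layers, linear output layer.
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

state_dim, input_dim = 2, 1

# GDHP keeps three separate approximators, all taking a
# time-augmented input (x, k/N) to capture time dependence:
value_net   = mlp_init([state_dim + 1, 16, 1])          # V(x, k)
costate_net = mlp_init([state_dim + 1, 16, state_dim])  # lambda(x, k) ~ dV/dx
policy_net  = mlp_init([state_dim + 1, 16, input_dim])  # u = pi(x, k)

def stage_cost(x, u, x_ref):
    # Quadratic tracking cost with assumed weights.
    e = x - x_ref
    return float(e @ e + 0.1 * (u @ u))

# One-step Bellman target for the value network on a sampled transition.
x_k   = np.array([0.5, -0.2])
x_ref = np.zeros(state_dim)
k, N  = 3, 10                        # current step and horizon length
z_k   = np.append(x_k, k / N)        # time-augmented input

u_k    = mlp_forward(policy_net, z_k)
# Toy stable dynamics standing in for the known model f(x, u):
x_next = x_k + 0.1 * (-x_k + np.append(u_k, 0.0))
z_next = np.append(x_next, (k + 1) / N)

# Value target = stage cost + predicted cost-to-go at the next step.
v_target = stage_cost(x_k, u_k, x_ref) + mlp_forward(value_net, z_next)[0]
```

In GDHP the costate network would be trained against the analytic gradient of this Bellman target (which is where the known system and cost structures enter), while the value network regresses on `v_target` itself; both signals then shape the policy update.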
