Abstract

Dynamic programming (DP) is an approach to computing an optimal control policy over time, under nonlinearity and uncertainty, by employing the principle of optimality introduced by Richard Bellman. Instead of enumerating all possible control sequences, dynamic programming searches only the admissible state and/or action values that satisfy the principle of optimality, so its computational complexity is much lower than that of direct enumeration. However, the computational effort and data storage requirements still grow exponentially with the dimensionality of the system, a difficulty reflected in the three curses of dimensionality: the state space, the observation space, and the action space. The traditional DP approach has therefore been limited to small-scale problems. This paper provides an overview of the latest developments in a class of approximate/adaptive dynamic programming (ADP) algorithms, including those applicable to continuous-state and continuous-control problems. In particular, the paper reviews direct heuristic dynamic programming (direct HDP), its design, and its applications to large, complex continuous-state and continuous-control problems. Beyond the basic principle of direct HDP, the paper includes two application studies: a nonlinear tracking problem, and a power grid coordination control problem based on the China Southern Power Grid.
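To make the actor-critic structure behind direct HDP concrete, below is a minimal sketch of a direct-HDP-style online update on a toy discrete-time double integrator. It is an illustration under stated assumptions, not the paper's implementation: the plant, the single-hidden-layer network sizes, the learning rates, and the use of a quadratic stage cost in place of a binary reinforcement signal are all hypothetical choices made for the example. The critic learns the cost-to-go J(t) from a temporal-difference error, and the actor is updated by backpropagating dJ/du through the critic toward a desired ultimate objective U_c = 0.

```python
# Minimal direct-HDP-style actor-critic sketch (illustrative only).
# Assumptions not taken from the paper: the toy double-integrator plant,
# single-hidden-layer network sizes, learning rates, and a quadratic
# stage cost in place of a binary reinforcement signal.
import numpy as np

rng = np.random.default_rng(0)
nx, nh = 2, 8                              # state and hidden sizes (illustrative)
alpha, lr_c, lr_a = 0.95, 0.01, 0.005      # discount and learning rates

# Toy plant: discrete-time double integrator x_{t+1} = A x_t + B u_t.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([0.0, 0.1])

# Critic J(x, u) and actor u(x), each with one tanh hidden layer.
Wc1 = rng.normal(0, 0.3, (nh, nx + 1)); Wc2 = rng.normal(0, 0.3, nh)
Wa1 = rng.normal(0, 0.3, (nh, nx));     Wa2 = rng.normal(0, 0.3, nh)

def critic(x, u):
    z = np.append(x, u)                    # critic sees state and action
    h = np.tanh(Wc1 @ z)
    return Wc2 @ h, h, z

def actor(x):
    h = np.tanh(Wa1 @ x)
    return Wa2 @ h, h

x = np.array([1.0, 0.0])
for t in range(5000):
    u, ha = actor(x)
    J, hc, z = critic(x, u)
    r = -(x @ x + 0.1 * u * u)             # negative quadratic stage cost
    x_next = A @ x + B * u
    J_next, _, _ = critic(x_next, actor(x_next)[0])

    # Critic: one gradient step on the temporal-difference form of the
    # critic error, e_c = J(t) - [r(t) + alpha * J(t+1)].
    e_c = J - (r + alpha * J_next)
    gWc2 = e_c * hc
    gWc1 = np.outer(e_c * Wc2 * (1 - hc**2), z)

    # Actor: drive J toward the desired ultimate objective U_c = 0 by
    # backpropagating dJ/du through the critic into the actor weights.
    dJdz = Wc1.T @ (Wc2 * (1 - hc**2))     # dJ/d[x; u]
    dLdu = (J - 0.0) * dJdz[-1]            # chain rule, with U_c = 0
    gWa2 = dLdu * ha
    gWa1 = np.outer(dLdu * Wa2 * (1 - ha**2), x)

    Wc1 -= lr_c * gWc1; Wc2 -= lr_c * gWc2
    Wa1 -= lr_a * gWa1; Wa2 -= lr_a * gWa2
    x = x_next
    if not np.all(np.abs(x) < 50):         # crude reset if learning diverges
        x = np.array([1.0, 0.0])

print("final state:", x)
```

The design point this sketch shares with direct HDP is that both networks are tuned online from the observed reinforcement signal alone: the plant model appears only to generate data, never inside the learning rules.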
