Abstract

The approximate dynamic programming (ADP) method based on the design and analysis of computer experiments (DACE) approach has been demonstrated in the literature to be an effective method for solving multistage decision-making problems. However, this method remains inefficient for infinite-horizon optimization because of the large volume of sampling required in the state space and the need for high-quality value function identification. Therefore, we propose a sequential sampling algorithm and embed it into a DACE-based ADP method to obtain a high-quality value function approximation. Given the limitations of the traditional stopping criterion (the Bellman error bound), we further propose a 45-degree line stopping criterion that terminates value iteration early by identifying an optimally equivalent value function. A comparison of the computational results with those of three other existing policies indicates that the proposed sampling algorithm and stopping criterion determine a high-quality ADP policy. Finally, we discuss the extrapolation issue of the value function approximated by multivariate adaptive regression splines; the results further demonstrate the quality of the ADP policy generated in this study.
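To illustrate the stopping criteria contrasted in the abstract, the following is a minimal sketch, not the authors' implementation: value iteration on a small discretized MDP terminated either by the classical Bellman error bound or by an illustrative "45-degree line" check, under the assumption that successive value functions differing only by a near-constant shift (slope close to 1 when V_{k+1} is plotted against V_k) already induce the same greedy policy. All function names, tolerances, and the regression-based test below are assumptions made for illustration.

import numpy as np

def value_iteration(P, R, gamma=0.95, eps=1e-4, slope_tol=1e-3, max_iter=1000):
    """P: (A, S, S) transition tensor, R: (A, S) expected one-step rewards."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    for k in range(max_iter):
        Q = R + gamma * (P @ V)            # (A, S) action values
        V_new = Q.max(axis=0)

        # (1) Traditional stopping rule: Bellman error bound for an eps-optimal policy.
        if np.max(np.abs(V_new - V)) < eps * (1 - gamma) / (2 * gamma):
            return V_new, Q.argmax(axis=0), k, "bellman"

        # (2) Illustrative 45-degree-line rule: if V_new vs. V lies on a line of
        # slope ~1, the two functions differ by a near-constant shift, so the
        # greedy policy is already optimal-equivalent and iteration can stop early.
        if k > 0:
            slope, _ = np.polyfit(V, V_new, deg=1)
            if abs(slope - 1.0) < slope_tol and np.std(V_new - V) < slope_tol:
                return V_new, Q.argmax(axis=0), k, "45-degree"

        V = V_new
    return V, Q.argmax(axis=0), max_iter, "max_iter"

The design point this sketch tries to capture is that the Bellman error bound waits for successive value functions to be numerically close, whereas a 45-degree-line-style test can stop as soon as the value function is equivalent up to a constant offset, which is sufficient for policy optimality in the discounted infinite-horizon setting.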
