Abstract

This paper explores the validity and usefulness of dynamic decision networks (DDNs) in approximating dynamic programming (DP). An approach for comparing the optimal policies of DDNs and DP was developed and utilised to determine how well DDNs perform under different conditions. Computation times were also compared to determine whether the time the DDN saves is worth any inaccuracy incurred. It was found that DDNs are exponentially faster than DP. However, increasing the values of some of the parameters investigated, such as the number of time slices and objectives, improved the DDN's computational time advantage but reduced its ability to approximate DP optimal policies. A significant finding of this research concerned how close the expected values of the DDN optimal policies were to those of DP in the cases examined. It is shown that, in the cases studied, when the DDN's optimal policies disagreed with the DP optimal policies, the expected values of the policies selected by the DDN were always quite close to those of DP. Thus, the DDN appears to be a very useful approximation technique for DP because of its accuracy and efficiency.
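The abstract does not give the authors' comparison procedure, so the following is only a minimal sketch of the kind of comparison it describes: exact finite-horizon DP by backward induction versus a depth-limited lookahead standing in for a DDN truncated to a given number of time slices, with policy agreement and expected-value gaps as the comparison metrics. The toy model, its sizes, and the metric names below are illustrative assumptions, not the paper's experimental setup.

# Sketch (Python): compare exact DP with a depth-limited "DDN-like" lookahead
# on a small, randomly generated finite-horizon MDP. Assumed model, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
S, A, H = 6, 3, 10                            # states, actions, horizon (assumed sizes)
P = rng.dirichlet(np.ones(S), size=(S, A))    # P[s, a, s'] transition probabilities
R = rng.uniform(size=(S, A))                  # R[s, a] immediate rewards

def backward_induction(horizon):
    """Exact DP: values V[t, s] and optimal policy pi[t, s] for t = 0..horizon-1."""
    V = np.zeros((horizon + 1, S))
    pi = np.zeros((horizon, S), dtype=int)
    for t in range(horizon - 1, -1, -1):
        Q = R + P @ V[t + 1]                  # Q[s, a] = R[s, a] + E[V[t+1, s']]
        pi[t] = Q.argmax(axis=1)
        V[t] = Q.max(axis=1)
    return V, pi

def lookahead_policy(depth):
    """DDN-like approximation: at each step, plan only `depth` time slices ahead
    with a zero terminal value, ignoring the rest of the horizon."""
    pi = np.zeros((H, S), dtype=int)
    for t in range(H):
        _, pi_d = backward_induction(min(depth, H - t))
        pi[t] = pi_d[0]                       # act on the first slice only
    return pi

def evaluate(pi):
    """Exact expected value of a (possibly suboptimal) non-stationary policy."""
    V = np.zeros((H + 1, S))
    s_idx = np.arange(S)
    for t in range(H - 1, -1, -1):
        a = pi[t]
        V[t] = R[s_idx, a] + P[s_idx, a] @ V[t + 1]
    return V

V_star, pi_star = backward_induction(H)
for depth in (1, 2, 4, H):                    # fewer time slices = coarser approximation
    pi_ddn = lookahead_policy(depth)
    V_ddn = evaluate(pi_ddn)
    agreement = np.mean(pi_ddn == pi_star)    # fraction of (t, s) pairs with the same action
    gap = (V_star[0] - V_ddn[0]).max()        # worst-case expected-value loss at t = 0
    print(f"slices={depth:2d}  policy agreement={agreement:.2f}  max value gap={gap:.4f}")

In this sketch the two metrics mirror the abstract's two comparisons: how often the approximate and exact policies choose the same action, and how far apart their expected values are even when they disagree.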
