Abstract

This article introduces a numerical method for finding optimal or approximately optimal decision rules, and the corresponding expected losses, in Bayesian sequential decision problems. Building on the classical backward induction method, the method constructs a grid approximation to the expected loss at each decision time, viewed as a function of certain statistics of the posterior distribution of the parameter of interest. In contrast with most existing techniques, its computation time is linear in the number of stages of the sequential problem. It can also be applied to problems in which the available statistics are not sufficient for the parameters of interest. Furthermore, it is well suited to implementation on parallel processors.
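To make the recursion concrete, the following is a minimal sketch of the gridding idea on a hypothetical example, not the paper's own setup or implementation: sequentially testing the sign of a Normal mean with a per-observation cost, where conjugacy makes the stage-t posterior depend only on its mean, so the expected loss at each stage can be tabulated on a one-dimensional grid. All parameter values, the loss function, and the helper names (`post_var`, `stop_loss`, the costs and grid limits) are illustrative assumptions.

```python
# A sketch of grid-based backward induction for a Bayesian sequential
# decision problem (assumed example, not the article's implementation).
import numpy as np
from scipy.stats import norm

sigma2 = 1.0   # known observation variance (assumed)
v0 = 1.0       # prior variance of the Normal mean theta (assumed)
c = 0.01       # cost per additional observation (assumed)
T = 20         # maximum number of stages
grid = np.linspace(-4.0, 4.0, 401)  # grid over the posterior mean

def post_var(t):
    """Posterior variance of theta after t observations (deterministic in t)."""
    return 1.0 / (1.0 / v0 + t / sigma2)

def stop_loss(m, v):
    """Expected loss of the better terminal decision when theta ~ N(m, v),
    testing H0: theta <= 0 vs H1: theta > 0 with loss |theta| if wrong.
    Choosing H0 loses E[theta^+] = m*Phi(m/s) + s*phi(m/s); choosing H1
    loses E[(-theta)^+] = E[theta^+] - m."""
    s = np.sqrt(v)
    pos = m * norm.cdf(m / s) + s * norm.pdf(m / s)
    return np.minimum(pos, pos - m)

# Gauss-Hermite nodes/weights for integrating over the predictive
# distribution of the next posterior mean (a Normal, by conjugacy).
nodes, weights = np.polynomial.hermite.hermgauss(31)

# Backward induction over stages: at the horizon we must stop; at earlier
# stages we compare stopping now with paying c and continuing, where the
# continuation value interpolates the next stage's tabulated loss.
L = stop_loss(grid, post_var(T))
for t in range(T - 1, -1, -1):
    spread = np.sqrt(post_var(t) - post_var(t + 1))  # sd of next posterior mean
    cont = np.empty_like(grid)
    for i, m in enumerate(grid):
        m_next = m + np.sqrt(2.0) * spread * nodes
        cont[i] = c + weights @ np.interp(m_next, grid, L) / np.sqrt(np.pi)
    L = np.minimum(stop_loss(grid, post_var(t)), cont)

print("approximate Bayes expected loss at the prior mean:",
      np.interp(0.0, grid, L))
```

In this sketch, each backward sweep visits the grid exactly once per stage, which is the source of the linear-in-stages cost, and the grid points within a stage are computed independently of one another, which is what makes the approach amenable to parallel processors.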
