Abstract

This chapter focuses on stochastic control and decision processes that occur in a variety of theoretical and applied contexts, including statistical decision problems, stochastic dynamic programming, gambling processes, optimal stopping problems, and stochastic adaptive control. It has long been recognized that these problems are mathematically closely related, so all of these decision processes can be viewed as variations on a single theoretical formulation. The chapter presents general conditions under which optimal policies are guaranteed to exist. The formulation is flexible enough to include most variants of these types of processes. In statistical problems, the distribution of the observed variables depends on the true value of a parameter. The parameter space carries no topological or other structure here; it is merely a set indexing the possible distributions. Hence the formulation is not restricted to those problems known in the statistical literature as parametric problems. In nonstatistical contexts, the distribution does not depend on an unknown parameter; all such problems may be included in the formulation by choosing the parameter space to consist of a single point, corresponding to the given distribution.
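Optimal stopping is one of the special cases the unified formulation covers. As a minimal illustrative sketch, not taken from the chapter itself, the following Python computes the optimal values of a finite-horizon stopping problem by backward induction: at each stage an offer is drawn i.i.d. from a hypothetical discrete distribution, and the decision maker either accepts the current offer or continues. The function name and the example distribution are assumptions made for illustration only.

```python
def optimal_stopping_values(offers, probs, horizon):
    """Backward induction for a finite-horizon stopping problem.

    At each of `horizon` stages an offer is drawn i.i.d. from the
    discrete distribution (offers[i] with probability probs[i]).
    Stopping yields the current offer; with no stages left the value
    is zero, so at the last stage any positive offer is accepted.
    Returns v, where v[t] is the expected reward of acting optimally
    from stage t onward, before that stage's offer is observed.
    """
    v = [0.0] * (horizon + 1)          # v[horizon] = 0: no stages remain
    for t in range(horizon - 1, -1, -1):
        # At stage t, accept the offer x only if it beats the
        # continuation value v[t + 1]; average over the offer law.
        v[t] = sum(p * max(x, v[t + 1]) for x, p in zip(offers, probs))
    return v
```

With two equally likely offers of 1 and 2 and a horizon of two stages, the value before the first offer is 1.75: the decision maker rejects an offer of 1 at the first stage because continuing is worth 1.5 in expectation.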
