Abstract

The first part of the report discusses a dynamic programming model in which all rewards obtained by the decision maker are assumed to be nonnegative. The decision maker's objective is to choose actions successively so as to maximize the expected reward earned over an infinite time span. It follows from known results that the decision maker's choice need only depend on the outcome of a randomization that depends on the model only through its current state and the time at which the choice is made. A counterexample shows that this is essentially the smallest class of decision rules that need be considered. Conditions under which a stationary policy is optimal are also presented. The second part of the report discusses the same model under a different criterion, namely the average cost incurred per unit time. An example is presented in which no epsilon-optimal randomized stationary policy exists.
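
The expected-total-reward criterion described above can be illustrated with a small sketch (not taken from the report): value iteration on a toy Markov decision process with nonnegative rewards and an absorbing zero-reward state, so the total expected reward is finite. The states, actions, transition probabilities, and rewards below are hypothetical.

```python
# Toy MDP with nonnegative rewards (all values are hypothetical).
# P[s][a] is a list of (next_state, probability); R[s][a] is the reward.
P = {
    0: {"stay": [(0, 1.0)], "go": [(1, 0.9), (2, 0.1)]},
    1: {"stay": [(1, 1.0)], "go": [(2, 1.0)]},
    2: {"stay": [(2, 1.0)]},          # absorbing state, no further reward
}
R = {
    0: {"stay": 0.0, "go": 1.0},
    1: {"stay": 0.0, "go": 5.0},
    2: {"stay": 0.0},
}

def value_iteration(P, R, iters=200):
    """Iterate V_{n+1}(s) = max_a [ R(s,a) + sum_{s'} P(s'|s,a) V_n(s') ].

    With nonnegative rewards and an absorbing zero-reward state, the
    iterates increase monotonically to the optimal expected total reward.
    """
    V = {s: 0.0 for s in P}
    for _ in range(iters):
        V = {s: max(R[s][a] + sum(p * V[t] for t, p in P[s][a])
                    for a in P[s])
             for s in P}
    return V

V = value_iteration(P, R)
# From state 0, choosing "go" twice yields reward 1, then with probability
# 0.9 reward 5, so V(0) = 1 + 0.9 * 5 = 5.5.
```

In this toy model a stationary policy (always choose "go") is optimal; the report's point is that in general one may need decision rules that also depend on time, and its counterexamples delimit how small the class of sufficient rules can be made.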
