Abstract
This paper presents a general model for the formulation and solution of the risk‐sensitive dynamic decision problem that maximizes the certain equivalent of the discounted rewards of a time‐varying Markov decision process. The problem is solved by applying the principle of optimality and stochastic dynamic programming to the immediate rewards and the certain equivalent associated with the remaining transitions of a time‐varying Markov process over a finite or infinite time horizon, under the assumptions of constant risk aversion and discounting of future cash flows. The solution provides transient and stationary optimal decision policies that depend on the presence or absence of discounting. The construction equipment replacement problem serves as an example application of the model to illustrate the solution methodology and the sensitivity of the optimal policy to the discount factor and the degree of risk aversion.
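The abstract does not state the recursion explicitly, but the setting it describes (constant risk aversion, i.e., exponential utility, with discounted rewards and stochastic dynamic programming) admits a standard value-iteration sketch. The following Python code is illustrative only: the function name, array shapes, and the exact placement of the discount factor inside the certain-equivalent operator are assumptions, not the paper's formulation.

import numpy as np

def risk_sensitive_value_iteration(P, R, gamma, beta, horizon):
    """
    Illustrative finite-horizon risk-sensitive dynamic programming sketch
    (assumed formulation, not necessarily the paper's exact recursion).

    P : array (A, S, S); P[a, i, j] = transition probability i -> j under action a.
    R : array (A, S, S); R[a, i, j] = immediate reward for transition i -> j under a.
    gamma : constant risk-aversion coefficient (> 0).
    beta : discount factor in (0, 1].
    horizon : number of remaining decision stages.

    Returns the certain-equivalent value vector and a per-stage policy.
    """
    A, S, _ = P.shape
    V = np.zeros(S)                          # terminal certain equivalents
    policy = np.zeros((horizon, S), dtype=int)

    for n in range(horizon - 1, -1, -1):
        Q = np.empty((A, S))
        for a in range(A):
            # Certain equivalent of (reward + discounted continuation value)
            # under exponential utility with constant risk aversion gamma:
            #   CE_i(a) = -(1/gamma) * ln( sum_j P[a,i,j] * exp(-gamma*(R[a,i,j] + beta*V[j])) )
            X = np.exp(-gamma * (R[a] + beta * V))        # shape (S, S)
            Q[a] = -np.log(np.einsum('ij,ij->i', P[a], X)) / gamma
        policy[n] = Q.argmax(axis=0)         # optimal action per state at stage n
        V = Q.max(axis=0)                    # certain-equivalent value of remaining stages

    return V, policy

For a time-varying process of the kind the paper treats, P and R would additionally be indexed by the stage n; the sketch above assumes stationary data purely to keep the example short.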