Abstract

In this work, a numerical estimate of the stability index is made for a controlled consumption-investment process under the discounted reward optimization criterion. Using explicit formulas for the optimal stationary policies and for the value functions, the stability index is calculated explicitly, and its asymptotic behavior as the discount coefficient approaches 1 is investigated through statistical techniques and numerical experiments. The results obtained define the conditions under which an approximate optimal stationary policy can be used to control the original process.

Highlights

  • In a standard way (see [1] [2] for definitions), let M be a discrete-time Markov control process with infinite horizon (also called a Markov decision process) and let M̃ be its approximation

  • The reward function used in this work, see Equation (16), is widely used in economics: it belongs to the family of consumer utility functions known as Cobb-Douglas utility functions, so the parameter p in Equation (16) must be selected carefully


Summary

Introduction

In a standard way (see [1] [2] for definitions), let M be a discrete-time Markov control process with infinite horizon (also called a Markov decision process) and let M̃ be its approximation. If for some reason (some of these causes are discussed later) it is not possible to find an optimal policy for the original process M, we could use the optimal policy f̃∗ of the approximating process M̃ to control M. The importance of the stability index Δ is that it allows us to calibrate the use of f̃∗ to control the original process M. If the stability index is very high ( Δ → ∞ ), it is not advisable to use f̃∗ to control the process M; on the other hand, if the stability index is low ( Δ → 0 ), the use of this approximation is valid.
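The stability index described above can be illustrated on a small synthetic example. The sketch below uses a toy finite Markov decision process, not the paper's consumption-investment model; the states, rewards, and perturbation scheme are all assumptions made for illustration. It solves both the true model M and a perturbed approximating model M̃ by value iteration, evaluates the approximating model's optimal policy f̃∗ inside the true model, and reports Δ as the resulting loss in discounted reward:

```python
import numpy as np

# Hypothetical toy setup (not the paper's model): a finite MDP with
# transition kernels P[a] and rewards R[a, s]. The stability index is
# Delta = max_s ( V*(s) - V_{f~}(s) ), where f~ is the optimal policy
# of a perturbed (approximating) model evaluated in the true model.

def value_iteration(P, R, alpha, tol=1e-10):
    """Return the optimal value function and a greedy optimal policy."""
    nA, nS, _ = P.shape
    V = np.zeros(nS)
    while True:
        Q = R + alpha * P @ V            # Q[a, s]: action values
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

def policy_value(P, R, policy, alpha):
    """Exact discounted value of a stationary policy in model (P, R)."""
    nS = P.shape[1]
    Pf = P[policy, np.arange(nS)]        # transition matrix under policy
    Rf = R[policy, np.arange(nS)]
    return np.linalg.solve(np.eye(nS) - alpha * Pf, Rf)

rng = np.random.default_rng(0)
nS, nA = 5, 2
P = rng.dirichlet(np.ones(nS), size=(nA, nS))   # true kernels (rows sum to 1)
R = rng.uniform(0.0, 1.0, size=(nA, nS))        # true one-step rewards
# Approximating model: a small mixture perturbation of the true kernels.
P_approx = 0.95 * P + 0.05 * rng.dirichlet(np.ones(nS), size=(nA, nS))

for alpha in (0.9, 0.99, 0.999):
    V_star, _ = value_iteration(P, R, alpha)            # optimum of M
    _, f_tilde = value_iteration(P_approx, R, alpha)    # policy of M~
    V_tilde = policy_value(P, R, f_tilde, alpha)        # run f~ in M
    delta = np.max(V_star - V_tilde)
    print(f"alpha={alpha}: stability index Delta = {delta:.4g}")
```

Since V∗ is optimal in the true model, Δ is nonnegative by construction; the interesting question, which the paper studies for its explicit consumption-investment solution, is how Δ behaves as the discount coefficient α approaches 1, where value functions themselves grow on the order of 1/(1 − α).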
