Abstract

In this paper, we study the total reward connected with a Markov reward process from time zero to time m. In particular, we determine the average reward within this time period, as well as its variance. Though the emphasis is on discrete-time Markov processes, continuous-time reward processes will also be considered. For the discrete-time reward process, the determination of the expected reward from 0 to m is of course trivial. It is of interest, however, that the deviation of this expectation from its steady-state equivalent can be obtained from equations which are identical to the equations for the equilibrium probabilities, except that a vector of constants is added. We also consider the variance, both for transient systems and for systems in equilibrium. It is shown that the variance per time unit in equilibrium can also be obtained from equations which differ from the equations for the equilibrium probabilities only by a constant vector. Since there are three different sets of variables which satisfy similar equations, the LU factorization suggests itself. Instead of the LU factorization, we use a UL factorization which reflects the probabilistic interpretation of the problem. This interpretation allows us to extend the factorization to systems with an infinite number of states, as will be demonstrated using the Wiener-Hopf factorization of the GI/G/1 queue as an example.

Keywords: Markov Chain, Probabilistic Interpretation, Reward Process, Average Reward, Equilibrium Probability
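The "trivial" computation the abstract alludes to can be sketched as follows. For a discrete-time Markov reward process with transition matrix P and per-step reward vector r, the expected total reward over steps 0..m-1, as a function of the starting state, satisfies the recursion v_t = r + P v_{t-1} with v_0 = 0. The matrix and reward values below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Hypothetical 3-state Markov reward process: transition matrix P
# (rows sum to 1) and per-step reward vector r. Values are
# illustrative only.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.7, 0.2],
              [0.2, 0.2, 0.6]])
r = np.array([1.0, 0.0, 2.0])

def expected_total_reward(P, r, m):
    """Expected reward accumulated over steps 0..m-1, per start state,
    computed via the recursion v_t = r + P @ v_{t-1}, with v_0 = 0."""
    v = np.zeros(len(r))
    for _ in range(m):
        v = r + P @ v
    return v

# Expected total reward over the first 10 steps, for each start state.
v10 = expected_total_reward(P, r, 10)
```

For large m, v_m grows linearly at the equilibrium gain rate; the paper's interest is in the deviation of v_m from that steady-state growth, which satisfies equilibrium-like equations with an added constant vector.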
