Abstract
Explicit conditions on the transition probabilities for lumping in a discrete-time Markov chain (DTMC) are well known and were given by Kemeny and Snell in 1960. They distinguish between “strong” lumpability, for which the process is lumpable for any initial probability distribution on the states, and “weak” lumpability, for which the process is lumpable only for some initial probability distributions. This chapter obtains conditions for lumping in a continuous-time Markov reward process. It introduces the notion of “proportional dynamics” and gives necessary and sufficient conditions for it to hold. The chapter shows that proportional dynamics for a given measure is sufficient for weak lumpability for the same measure; it also implies unlumpability. It discusses measures such as the transient probabilities, the distribution of accumulated reward, the expected accumulated reward, and the instantaneous reward.
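The Kemeny–Snell condition for strong lumpability in a DTMC can be sketched in code: a partition of the state space is strongly lumpable if and only if, for every pair of blocks (B, C), the probability of moving from a state of B into block C is the same for all states of B. The function name, matrices, and tolerance below are illustrative assumptions, not taken from the chapter.

```python
from math import isclose

def is_strongly_lumpable(P, partition, tol=1e-12):
    """Check the Kemeny-Snell strong-lumpability condition.

    P         : row-stochastic transition matrix, as a list of rows.
    partition : list of blocks, each block a list of state indices.
    Returns True iff, for every block B and every block C, the sum of
    transition probabilities into C is identical across all states of B.
    """
    for block in partition:
        for target in partition:
            # Probability of jumping into `target` from each state of `block`.
            sums = [sum(P[i][j] for j in target) for i in block]
            if any(not isclose(s, sums[0], abs_tol=tol) for s in sums[1:]):
                return False
    return True
```

For example, with the partition {0, 1}, {2, 3}, a matrix whose rows within each block assign equal total probability to each block passes the check, while perturbing a single row breaks it.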