Abstract

Markov chains are of great use in the study of inventory control, preventive maintenance, replacement problems, reliability theory, and many other problems. A new type of decision process, which is a marriage of Markov chain theory and mathematical programming, is the Markovian decision process. This decision process was first introduced in 1957 by Bellman [1]. Howard [2] developed this process by the dynamic programming approach, and Blackwell [3] discussed the method further. On the other hand, Manne [4] formulated the problem by the linear programming approach, and Wolfe and Dantzig [5], Derman [6], and many other authors have developed this method. Formerly, this decision process was based on either a discrete-time or a continuous-time Markov chain. Both discrete- and continuous-time Markov chains are generalized by semi-Markov processes. In this paper we treat the decision process based on a semi-Markov process; this decision process is called a semi-Markovian decision process. Jewell [7], De Cani [8], and Howard [9] have developed semi-Markovian decision processes by using dynamic programming. We shall show that the problems arising in the classification of the processes can be formulated directly as linear programming problems. Moreover, we shall show that the corresponding problems with discounting can be formulated similarly as linear programming problems. We shall also discuss the relation between linear programming and dynamic programming in this setting. In Section 2 we define semi-Markov processes and discuss the classification and limiting behavior of the processes. In Section 3 we introduce the concept of returns associated with semi-Markov processes and obtain the average return per unit time that the system will get in the steady state, or the total expected return before absorption, according to the classification of the process.
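As a minimal illustration of the steady-state average return per unit time mentioned above, the sketch below evaluates a fixed two-state semi-Markov process by the renewal-reward ratio: stationary-weighted expected reward per visit divided by stationary-weighted expected sojourn time. All transition probabilities, rewards, and sojourn times here are hypothetical, and the closed-form two-state stationary distribution is used only to keep the example self-contained.

```python
def stationary_two_state(p01, p10):
    """Stationary distribution of the embedded two-state Markov chain
    with P = [[1 - p01, p01], [p10, 1 - p10]]; pi solves pi P = pi."""
    total = p01 + p10
    return (p10 / total, p01 / total)

def average_return_per_unit_time(pi, rewards, sojourns):
    """Renewal-reward ratio: expected reward per transition divided by
    expected sojourn time per transition, both in the steady state."""
    num = sum(p * r for p, r in zip(pi, rewards))
    den = sum(p * t for p, t in zip(pi, sojourns))
    return num / den

# Hypothetical data: rewards earned per visit and mean sojourn times.
pi = stationary_two_state(p01=0.7, p10=0.6)
g = average_return_per_unit_time(pi, rewards=[5.0, 3.0], sojourns=[2.0, 1.0])
print(round(g, 4))  # steady-state average return per unit time
```

For these numbers the stationary distribution of the embedded chain is (6/13, 7/13), giving an average return per unit time of 51/19 ≈ 2.6842. The paper's linear programming formulation optimizes this same ratio over decision rules rather than evaluating a single fixed process.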
