Abstract

This chapter deals with Markov processes. It first defines the Markov property and shows that all the relevant information about a Markov process taking values in a finite set of cardinality n can be captured by a nonnegative n × n matrix, known as the state transition matrix, together with an n-dimensional probability distribution of the initial state. It then invokes the results of the previous chapter on nonnegative matrices to analyze the temporal evolution of Markov processes. It also discusses estimation of the state transition matrix and treats the dynamics of stationary Markov chains, recurrent and transient states, hitting probabilities and mean hitting times, and the ergodicity of Markov chains.
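The matrix-plus-initial-distribution description mentioned above can be sketched concretely. The following is a minimal illustration, not taken from the chapter: a hypothetical two-state transition matrix `P`, with the distribution evolving as pi_{t+1} = pi_t P and, for this example, converging to a stationary distribution.

```python
# Hypothetical two-state transition matrix (illustration only):
# P[i][j] = Pr(next state = j | current state = i); each row sums to 1.
P = [
    [0.9, 0.1],
    [0.5, 0.5],
]

def step(dist, P):
    """One step of the evolution: pi_{t+1} = pi_t P (row vector times matrix)."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def evolve(dist, P, t):
    """Distribution of the state after t steps, starting from dist."""
    for _ in range(t):
        dist = step(dist, P)
    return dist

pi0 = [1.0, 0.0]               # start in state 0 with probability 1
pi1 = step(pi0, P)             # distribution after one step: [0.9, 0.1]
pi_long = evolve(pi0, P, 100)  # approaches the stationary distribution
```

For this particular matrix the stationary distribution solving pi = pi P is (5/6, 1/6), and `pi_long` is numerically indistinguishable from it after 100 steps; the chapter's results on nonnegative matrices explain when such convergence holds.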
