Abstract

This chapter presents sequential and parallel algorithms for many common problems concerning discrete-time finite-state Markov chains and Markov decision processes (MDPs). It analyzes the complexity of these algorithms and, in some cases, establishes the intractability of the problems in order to convey the computational difficulty involved in solving them. In deriving efficient algorithms, the graph and matrix representations of Markov processes are used; both representations make it possible to draw on algorithmic results from graph theory and matrix computation. Although optimal sequential algorithms for some problems are presented in the chapter, there are many problems, especially in MDPs, for which more efficient algorithms can still be developed. On the parallel side, many Markov chain problems can be parallelized efficiently, but the speedups of many of the algorithms are not optimal. Although MDP problems have been shown to be P-complete, there is still a need to parallelize these algorithms as efficiently as possible, especially on architectures such as hypercubes. A similar complexity survey is needed for other problems in stochastic processes, such as stochastic games.
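To give a sense of how the two representations complement each other, the sketch below is a minimal, hypothetical illustration (not code from the chapter): the directed graph underlying a small row-stochastic matrix P is used to classify communicating classes as recurrent or transient via strongly connected components, and the matrix itself is used to compute a stationary distribution on a recurrent class. The example chain and all identifiers are assumptions made for illustration only.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Hypothetical row-stochastic transition matrix P of a 3-state chain:
# states 0 and 1 communicate and form a closed class; state 2 leaks into it.
P = np.array([
    [0.5, 0.5, 0.0],
    [0.2, 0.8, 0.0],
    [0.3, 0.3, 0.4],
])

# Graph view: edge i -> j whenever P[i, j] > 0; strongly connected
# components are exactly the communicating classes of the chain.
n_comp, labels = connected_components(csr_matrix((P > 0).astype(float)),
                                       directed=True, connection='strong')

# A communicating class is recurrent iff it is closed (no probability leaves it).
for c in range(n_comp):
    states = np.flatnonzero(labels == c)
    closed = np.isclose(P[np.ix_(states, states)].sum(axis=1), 1.0).all()
    print(f"class {states.tolist()}: {'recurrent' if closed else 'transient'}")

# Matrix view: on a recurrent class, the stationary distribution pi solves
# pi Q = pi with pi summing to 1; here the class of state 0 is recurrent.
rec = np.flatnonzero(labels == labels[0])
Q = P[np.ix_(rec, rec)]
eigvals, eigvecs = np.linalg.eig(Q.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
print("stationary distribution on", rec.tolist(), ":", pi)
```

In this toy example the graph step reports {0, 1} as recurrent and {2} as transient, and the matrix step returns the stationary distribution (2/7, 5/7) on the recurrent class, illustrating how structural and numerical questions are split between the two representations.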
