Abstract
I will survey a body of work, developed over the past decade or so, on algorithms for, and the computational complexity of, analyzing and model checking some important families of countably infinite-state Markov chains, Markov decision processes (MDPs), and stochastic games. These models arise by adding natural forms of recursion, branching, or a counter to finite-state models, and they correspond to probabilistic/control/game extensions of classic automata-theoretic models such as pushdown automata, context-free grammars, and one-counter automata. They subsume some classic stochastic processes, such as multi-type branching processes and quasi-birth-death processes, and they also provide a natural model for probabilistic procedural programs with recursion.

Some of the key algorithmic advances for analyzing these models have come from algorithms for computing the least fixed point (and greatest fixed point) solution of corresponding monotone systems of nonlinear (min/max)-polynomial equations. Such equations provide, for example, the Bellman optimality equations for optimal extinction and reachability probabilities of branching MDPs (BMDPs). A key role in these algorithms is played by Newton's method, and by a generalization of Newton's method that applies to the Bellman equations for BMDPs and uses linear programming in each iteration.

By now, polynomial-time algorithms have been developed for some of the key problems in this domain, while other problems have been shown to have high complexity or even to be undecidable. Yet many algorithmic questions about these models remain open. I will highlight some of the open questions.

(This talk partly describes joint work with Alistair Stewart and Mihalis Yannakakis.)
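To make the flavor of these fixed-point computations concrete, here is a minimal, self-contained sketch of Newton's method applied to the extinction-probability equations of a small multi-type branching process. The two-type process and its offspring probabilities are invented for illustration and are not taken from the talk; for BMDPs the corresponding generalization would replace the linear solve in each step by a linear program, as the abstract describes.

import numpy as np

# Illustrative two-type branching process (numbers invented for this sketch):
#   type 1: with prob. 0.6 it has two type-1 children, with prob. 0.4 none;
#   type 2: with prob. 0.5 it has one type-1 and one type-2 child, with prob. 0.5 none.
# The extinction probabilities (q1, q2) are the least fixed point of x = P(x), where
#   P1(x) = 0.6*x1^2 + 0.4
#   P2(x) = 0.5*x1*x2 + 0.5

def P(x):
    x1, x2 = x
    return np.array([0.6 * x1**2 + 0.4,
                     0.5 * x1 * x2 + 0.5])

def jac(x):
    # Jacobian of P at x.
    x1, x2 = x
    return np.array([[1.2 * x1, 0.0],
                     [0.5 * x2, 0.5 * x1]])

def newton_lfp(dim, iters=25):
    # Newton iteration for the least fixed point of x = P(x), started at 0.
    # For (suitably decomposed) monotone polynomial systems the iterates stay
    # below the least fixed point and converge to it monotonically.
    x = np.zeros(dim)
    for _ in range(iters):
        step = np.linalg.solve(np.eye(dim) - jac(x), P(x) - x)
        x = x + step
    return x

print(newton_lfp(2))   # converges to roughly (0.6667, 0.75)

Starting from the all-zero vector is what steers the iteration to the least fixed point rather than to another solution of the system (here x = (1, 1) is also a fixed point).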