Abstract

A finite-state Markov chain M can be regarded as a linear transform operating on the set of probability distributions over its node set. Iterative applications of M to an initial probability distribution μ_0 generate a trajectory of probability distributions. Thus, a set of initial distributions induces a set of trajectories. It is an interesting and useful task to analyze the dynamics of M as defined by this set of trajectories. The novel idea here is to carry out this task in a symbolic framework. Specifically, we discretize the probability value space [0,1] into a finite set of intervals I = {I_1, I_2, ..., I_m}. A concrete probability distribution μ over the node set {1, 2, ..., n} of M is then symbolically represented as D, a tuple of intervals drawn from I whose i-th component is the interval in which μ(i) falls. The set of discretized distributions D is a finite alphabet. Hence, the trajectory generated by repeated applications of M to an initial distribution induces an infinite string over this alphabet. Given a set of initial distributions, the symbolic dynamics of M then consists of a language L of infinite strings over the alphabet D. Our main goal is to verify whether L meets a specification given as a linear-time temporal logic formula φ. In our logic, an atomic proposition asserts that the current probability of a node falls in a specified interval from I. If L is an ω-regular language, one can hope to solve our model-checking problem (does L ⊧ φ?) using standard techniques. However, we show that, in general, this is not the case. Consequently, we develop the notion of an ϵ-approximation, based on the transient and long-term behaviors of the Markov chain M. Briefly, the symbolic trajectory ξ' is an ϵ-approximation of the symbolic trajectory ξ iff (1) ξ' agrees with ξ during its transient phase, and (2) ξ and ξ' are within an ϵ-neighborhood of each other at all times after the transient phase. Our main results are that one can effectively check whether (i) for each infinite word in L, at least one of its ϵ-approximations satisfies the given specification, and (ii) for each infinite word in L, all of its ϵ-approximations satisfy the specification. These verification results are strong in that they apply to all finite-state Markov chains.
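As a rough illustration of the symbolic framework described above (a minimal sketch, not the paper's algorithm), the code below discretizes a distribution into interval indices and generates a finite prefix of the induced symbolic trajectory. The function names (discretize, symbolic_trajectory), the two-state chain, and the two-interval discretization are illustrative assumptions; a row-vector convention μ_{t+1} = μ_t · M is assumed.

```python
import numpy as np

def discretize(mu, intervals):
    """Return the symbolic representation of mu: a tuple whose i-th component
    is the index of the interval (from the given list) in which mu[i] falls."""
    symbols = []
    for p in mu:
        for k, (lo, hi) in enumerate(intervals):
            if lo <= p <= hi:          # boundary values go to the first matching interval
                symbols.append(k)
                break
    return tuple(symbols)

def symbolic_trajectory(M, mu0, intervals, steps):
    """Generate a finite prefix of the symbolic trajectory induced by repeatedly
    applying the row-stochastic matrix M to the initial distribution mu0."""
    mu = np.asarray(mu0, dtype=float)
    prefix = [discretize(mu, intervals)]
    for _ in range(steps):
        mu = mu @ M                    # one application of the linear transform
        prefix.append(discretize(mu, intervals))
    return prefix

# Hypothetical 2-state chain and a discretization of [0,1] into two intervals I_1, I_2.
M = np.array([[0.9, 0.1],
              [0.2, 0.8]])
intervals = [(0.0, 0.5), (0.5, 1.0)]
print(symbolic_trajectory(M, [1.0, 0.0], intervals, steps=5))
```

Each element of the returned prefix is one symbol of the infinite word over the alphabet of discretized distributions; the full verification problem treated in the paper concerns properties of these infinite words, which the sketch does not address.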
