Abstract

The world of iterative numerical analysis techniques for continuous-time Markov chains (CTMCs) described by generalized stochastic Petri nets (GSPNs) is split in two. On the one side we find the classical technique, which performs three main steps: 1. state space exploration, 2. elimination of vanishing markings and generation of the stochastic generator matrix Q, and 3. application of an iteration scheme, e.g. Jacobi overrelaxation (JOR; see the first sketch below). This technique offers fast iterations if sufficient primary memory is available to store all non-zero matrix entries, but due to the well-known state space explosion it often collapses already in step 1 or 2 without producing any useful results.

On the other side, sophisticated iteration techniques based on tensor algebra, cf. [1], allow the application of an iterative scheme without explicitly generating the stochastic generator of the CTMC. S. Donatelli describes in [2] how this technique is employed for CTMCs given by superposed generalized stochastic Petri nets (SGSPNs). An SGSPN consists of several GSPNs which are synchronized by a set of synchronizing transitions. The tensor-based technique represents Q by a tensor product Q = ⊗_i Q_i of small matrices Q_i (see the second sketch below). Compared to the conventional technique, provided Q can be stored in primary memory at all, this representation drastically reduces memory requirements while only slightly increasing the computational effort for a single iteration step. In particular, it avoids representing zero entries in Q. Nevertheless, in a matrix-vector multiplication zero vector entries cause many redundant multiplications as well. Considering zero vector entries, the tensor-based approach reveals two drawbacks:

1. It regards the cartesian product of the tangible reachability sets TRS_i of the isolated GSPNs as the relevant state space. Due to the synchronization of transitions, this set is usually a superset of the tangible reachability set (TRS), which causes a (model-dependent) overhead; this overhead can be extremely large. For all unreachable states, the vector entries remain zero during the whole iteration process, hence all multiplications involving these states are a waste of time and their corresponding matrix columns are not relevant.

2. The tensor-based technique generates the TRS implicitly by iteration, since the matrix-vector multiplication performed with Q resembles a breadth-first search. This requires a rather unfortunate initial distribution, which assigns all probability mass to the initial marking M_0 of the SGSPN, i.e. P[M_0] = 1.0. Consequently, for large state spaces many iteration steps are necessary to distribute probability mass over the reachable tangible states. Hence many vector entries remain zero for a certain number of iteration steps, and their corresponding matrix columns are irrelevant in these steps (the third sketch below illustrates this growth).
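To make step 3 of the classical technique concrete, here is a minimal sketch of Jacobi overrelaxation for the steady-state equations pi Q = 0, sum(pi) = 1 of a CTMC. This is not code from the paper: the function name jor_steady_state, the dense NumPy representation of Q, and the relaxation factor omega = 0.8 are illustrative assumptions; a production solver would store only the non-zero entries of Q.

```python
import numpy as np

def jor_steady_state(Q, omega=0.8, tol=1e-10, max_iter=100_000):
    """JOR iteration for pi @ Q = 0 with sum(pi) = 1 (dense illustrative sketch)."""
    n = Q.shape[0]
    d = np.diag(Q)                # diagonal of the generator, strictly negative
    R = Q - np.diag(d)            # non-negative off-diagonal rates
    pi = np.full(n, 1.0 / n)      # start from the uniform distribution
    for _ in range(max_iter):
        # Jacobi fixed point pi = -(pi @ R) / d, damped by the factor omega.
        pi_new = (1.0 - omega) * pi - omega * (pi @ R) / d
        pi_new /= pi_new.sum()    # keep it a probability vector
        converged = np.abs(pi_new - pi).max() < tol
        pi = pi_new
        if converged:
            break
    return pi

# Tiny check on a 2-state CTMC whose stationary distribution is [1/3, 2/3].
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])
print(jor_steady_state(Q))
```

The iteration rewrites pi Q = 0 as pi = -(pi R) D^{-1}, where Q = D + R is split into its diagonal and off-diagonal parts; under-relaxation (0 < omega < 1) damps the oscillations a pure Jacobi step can exhibit.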
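The key point of the tensor-based technique is that Q = ⊗_i Q_i never has to be built explicitly. The sketch below is again an assumption-laden illustration, not the algorithm of [1] or [2]: the helper name kron_matvec and the dense factors are hypothetical, and the actual descriptor for an SGSPN is a sum of such tensor terms rather than a single product. It computes y = (Q_1 ⊗ ... ⊗ Q_K) x while storing only the small factors and vectors of product-space length.

```python
import numpy as np

def kron_matvec(factors, x):
    """y = (Q_1 (x) Q_2 (x) ... (x) Q_K) @ x without forming the product.

    factors: list of square (n_k, n_k) arrays; x: vector of length prod(n_k).
    """
    dims = [Q.shape[0] for Q in factors]
    y = x.reshape(dims)                       # view x as a K-way tensor
    for k, Q in enumerate(factors):
        # Contract Q_k with mode k of the tensor, then move the new axis back.
        y = np.moveaxis(np.tensordot(Q, y, axes=(1, k)), 0, k)
    return y.reshape(-1)

# Check against the explicit Kronecker product on a small example.
rng = np.random.default_rng(0)
Q1, Q2 = rng.random((3, 3)), rng.random((4, 4))
x = rng.random(3 * 4)
assert np.allclose(kron_matvec([Q1, Q2], x), np.kron(Q1, Q2) @ x)
```

Storage drops from (Π_k n_k)² dense entries to Σ_k n_k² for the factors, while one multiplication costs O((Σ_k n_k) Π_k n_k) operations instead of O((Π_k n_k)²), matching the trade-off described above. Note, however, that all Π_k n_k vector positions are touched, including those belonging to unreachable states, which is exactly drawback 1.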
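Drawback 2 can be seen in miniature: starting from P[M_0] = 1.0, each multiplication with Q can move probability mass only to direct successor states, so the set of non-zero vector entries grows like the levels of a breadth-first search. The toy matrices below are hypothetical stand-ins, not proper generators; they only model a sparsity pattern in which each component either stays put or advances by one local state.

```python
import numpy as np

def local_moves(n):
    """Hypothetical component: state i may stay at i or advance to i + 1."""
    M = np.eye(n)
    M[np.arange(n - 1), np.arange(1, n)] = 1.0
    return M

# Two components with 5 local states each -> product state space of 25 states.
A = np.kron(local_moves(5), local_moves(5))   # small enough to build explicitly

p = np.zeros(25)
p[0] = 1.0                                    # P[M_0] = 1.0: all mass on the initial marking
for step in range(1, 5):
    p = p @ A                                 # stand-in for one iteration step
    print(f"step {step}: {np.count_nonzero(p)} of 25 entries non-zero")
```

The printed counts grow as 4, 9, 16, 25: until the mass has flooded the state space, most multiplications involve zero vector entries and are wasted, which is the effect the abstract describes.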
