Abstract

We propose two numerical schemes for approximating quasi-stationary distributions (QSD) of finite state Markov chains with absorbing states. Both schemes are described in terms of certain interacting chains in which the interaction is given in terms of the total time occupation measure of all particles in the system and has the effect of reinforcing transitions, in an appropriate fashion, to states where the collection of particles has spent more time. The schemes can be viewed as combining the key features of the two basic simulation-based methods for approximating QSD, originating from the works of Fleming and Viot (1979) and Aldous, Flannery and Palacios (1998), respectively. The key difference between the two schemes studied here is that in the first method one starts with a(n) particles at time 0 and the number of particles stays constant over time, whereas in the second method we start with one particle and at most one particle is added at each time instant, in such a manner that there are a(n) particles at time n. We prove almost sure convergence to the unique QSD and establish Central Limit Theorems for the two schemes under the key assumption that a(n) = o(n). When a(n) ~ n, the fluctuation behavior is expected to be non-standard. Some exploratory numerical results are presented to illustrate the performance of the two approximation schemes.
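
To make the mechanism concrete, here is a minimal sketch, in the spirit of the first scheme described above, of an occupation-measure-reinforced interacting particle system: a fixed number of particles move under a sub-Markov kernel, and an absorbed particle restarts from the normalized total occupation measure of all particles. The function name, the kernel `Q`, and all parameters are illustrative assumptions, not the paper's exact Algorithm I.

```python
import numpy as np

def interacting_scheme(Q, num_particles=50, steps=10_000, seed=0):
    """Occupation-measure-reinforced interacting particles (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    d = Q.shape[0]
    absorb = 1.0 - Q.sum(axis=1)        # absorption probability from each state
    particles = rng.integers(0, d, size=num_particles)
    occupation = np.ones(d)             # pseudo-counts avoid an empty measure early on
    for _ in range(steps):
        np.add.at(occupation, particles, 1.0)  # total occupation over all particles
        for k in range(num_particles):
            i = particles[k]
            if rng.random() < absorb[i]:
                # reinforcement: restart from the normalized total occupation measure
                particles[k] = rng.choice(d, p=occupation / occupation.sum())
            else:
                # move within the non-absorbing states, conditioned on survival
                particles[k] = rng.choice(d, p=Q[i] / Q[i].sum())
    return occupation / occupation.sum()  # empirical estimate of the QSD
```

The classical Fleming-Viot scheme would instead restart an absorbed particle from the current position of another particle chosen uniformly at random; replacing that restart rule with a draw from the total occupation measure is what couples the two approaches.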

Highlights

  • Markov processes with absorbing states occur frequently in epidemiology [2], statistical physics [33], and population biology [29]

  • Just as a stationary distribution of an ergodic Markov process leaves the law of the process, initialized at that distribution, invariant at all times, a quasi-stationary distribution is a probability measure that leaves the conditional law of the process, on the event of non-absorption, invariant (a standard formulation is displayed after this list)

  • In Theorem 1.4 we prove a.s. convergence of the approximation to the quasi-stationary distribution (QSD), and in Theorem 1.5 we provide a central limit theorem for this approximation scheme
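
For reference, the defining property alluded to in the second highlight can be written in standard notation (this display is an added gloss, not quoted from the paper; here $\tau$ denotes the absorption time):

```latex
% A probability measure \nu on the non-absorbing states is a QSD if,
% for every set A of non-absorbing states and every n >= 1,
\[
  \mathbb{P}_{\nu}\bigl( X_n \in A \,\big|\, \tau > n \bigr) = \nu(A).
\]
% Equivalently, for a finite chain with sub-Markov transition kernel Q
% restricted to the non-absorbing states, \nu Q = \lambda \nu for some
% \lambda \in (0,1), i.e. \nu is a normalized left eigenvector of Q.
```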

Summary

Introduction

Markov processes with absorbing states occur frequently in epidemiology [2], statistical physics [33], and population biology [29]. Simulating a large number of interacting particles, as in Fleming-Viot type schemes, can be computationally expensive, and numerical experiments (see Section 7) suggest that, with an equivalent number of particle moves, a single-particle reinforced random walk scheme of Aldous et al. performs better than a Fleming-Viot type scheme. This trade-off between the exploration of the state space through multiple particles and the reinforcement of particle transition probabilities based on the time occupation measure motivates the present work, which studies two algorithms that combine desirable features of both approximation schemes. For the two algorithms studied in the current work, we find that, in comparison to the single-particle stochastic approximation (SA) schemes studied in [6, 10], one needs to place higher weights on particle states at later time instants in order to suitably counterbalance the variability due to the finite-time behavior of the chains. This point is discussed further in Remark 1.6; a precise understanding of the relationship between the size of the SA arrays and the time step sizes required for central limit results to hold remains to be fully developed.
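
For comparison, here is a minimal sketch of the single-particle reinforced random walk in the spirit of Aldous, Flannery and Palacios, again with illustrative names and parameters and assuming the same sub-Markov kernel convention as in the sketch after the abstract: the lone walker's own occupation measure both records the QSD estimate and supplies the restart distribution on absorption.

```python
import numpy as np

def single_particle_scheme(Q, steps=100_000, seed=0):
    """Single reinforced walk in the spirit of Aldous-Flannery-Palacios (sketch)."""
    rng = np.random.default_rng(seed)
    d = Q.shape[0]
    absorb = 1.0 - Q.sum(axis=1)              # absorption probability from each state
    x = int(rng.integers(d))
    occupation = np.ones(d)                   # smoothing pseudo-counts
    for _ in range(steps):
        occupation[x] += 1.0
        if rng.random() < absorb[x]:
            # on absorption, restart from the walk's own occupation measure
            x = int(rng.choice(d, p=occupation / occupation.sum()))
        else:
            # otherwise move according to Q, conditioned on non-absorption
            x = int(rng.choice(d, p=Q[x] / Q[x].sum()))
    return occupation / occupation.sum()      # QSD estimate
```

Per step this scheme costs one particle move, whereas the interacting sketch costs `num_particles` moves; the "equivalent number of particle moves" comparison in the text normalizes for exactly this difference.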

Statement of results
Decomposition and linearization
Notation
Organization
Convergence of Algorithm I
Analysis of the noise terms in Algorithm I
Covariance structure of the error terms
Analysis of the remainder terms
Central Limit Theorem for Algorithm I
Analysis of the discrepancy
Convergence of Algorithm II
Central Limit Theorem for Algorithm II
The linearized evolution sequence
Analysis of the discrepancy sequence
Numerical Experiments
Method
Interacting Scheme and Single Scheme
Interacting Scheme and Independent Scheme
Selecting the Number of Particles in the Interacting Scheme