Abstract

We consider continuous-time Markovian processes in which populations of individual agents interact stochastically according to kinetic rules. Despite the increasing prominence of such models in fields ranging from biology to smart cities, Bayesian inference for such systems remains challenging, as these are continuous-time, discrete-state systems with potentially infinite state space. Here we propose a novel, efficient algorithm for joint state/parameter posterior sampling in population Markov jump processes. We introduce a class of pseudo-marginal sampling algorithms based on a random truncation method, which enables a principled treatment of infinite state spaces. Extensive evaluation on a number of benchmark models shows that this approach achieves considerable savings compared to state-of-the-art methods, while retaining accuracy and fast convergence. We also present results on a synthetic biology data set, showing the potential practical usefulness of our work.

Highlights

  • Discrete state, continuous time stochastic processes such as Markov Jump Processes (MJP) (Gardiner 1985) are popular mathematical models used in a wide variety of scientific and technological domains, ranging from systems biology to computer networks

  • As a way of improving the behaviour of the sampler, we examined the use of the so-called Monte Carlo within Metropolis (MCWM) pseudo-marginal variant (Beaumont 2003), in which the estimate of the likelihood of the current state of the chain is recomputed at every step

  • Results on the LV model show that methods based on random truncations achieve very considerable improvements in performance compared to the Gibbs sampler
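The MCWM idea from the highlights can be sketched generically. The code below is an illustrative pseudo-marginal sampler, not the paper's exact algorithm: the names `log_lik_estimate`, `log_prior` and all parameters are placeholder assumptions. The distinguishing feature of MCWM is that the noisy likelihood estimate at the *current* parameters is refreshed at every iteration, rather than reusing the stored estimate as in the exact pseudo-marginal scheme.

```python
import numpy as np

def mcwm_sampler(log_lik_estimate, log_prior, theta0, n_iter,
                 proposal_sd=0.1, rng=None):
    """Monte Carlo within Metropolis (MCWM) sketch.

    log_lik_estimate(theta, rng) returns a noisy (ideally unbiased)
    estimate of the log-likelihood.  MCWM re-estimates the current
    state's likelihood at every step, which loses exactness of the
    invariant distribution but can improve mixing when the estimator
    is noisy.
    """
    rng = rng or np.random.default_rng()
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for _ in range(n_iter):
        theta_prop = theta + proposal_sd * rng.standard_normal(theta.shape)
        # Fresh noisy estimates for BOTH current and proposed parameters.
        log_post_cur = log_lik_estimate(theta, rng) + log_prior(theta)
        log_post_prop = log_lik_estimate(theta_prop, rng) + log_prior(theta_prop)
        # Standard Metropolis-Hastings accept/reject step (symmetric proposal).
        if np.log(rng.uniform()) < log_post_prop - log_post_cur:
            theta = theta_prop
        samples.append(theta.copy())
    return np.array(samples)
```

Swapping the two fresh estimates for a single stored estimate of the current state would recover the exact pseudo-marginal sampler of Andrieu and Roberts.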


Introduction

Continuous time stochastic processes such as Markov Jump Processes (MJP) (Gardiner 1985) are popular mathematical models used in a wide variety of scientific and technological domains, ranging from systems biology to computer networks. In response to these developments, researchers in the statistics, machine learning and systems biology communities have been addressing inverse problems for MJPs using a variety of methods, from variational techniques (Cohn et al 2010; Opper and Sanguinetti 2008) to particle-based (Hajiaghayi et al 2014; Zechner et al 2014) and auxiliary variable sampling methods (Rao and Teh 2013). Standard MCMC methods rely on likelihood computations, which are computationally or mathematically infeasible for pMJPs with a large or unbounded number of states. We conclude the paper with a discussion of our contribution in the light of existing research and possible future directions in systems biology.
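The random-truncation idea that motivates the paper's estimators can be illustrated with a generic "Russian roulette" construction for an infinite series; this is a standard device and not necessarily the exact scheme used here, and the function `roulette_estimate` and its `stop_prob` parameter are illustrative assumptions. Truncating the series at a random level and reweighting each surviving term by its inverse survival probability yields an unbiased estimate of the full infinite sum from a finite computation.

```python
import numpy as np

def roulette_estimate(terms, stop_prob=0.2, rng=None):
    """Unbiased random-truncation ('Russian roulette') estimate of an
    infinite sum S = sum_k a_k, where terms(k) returns the k-th term.

    After each term we stop with probability stop_prob; the k-th term is
    divided by P(K >= k), the probability it survives truncation, which
    makes the estimator unbiased for the full series.
    """
    rng = rng or np.random.default_rng()
    total, survival = 0.0, 1.0   # survival = P(K >= k), starting at 1
    k = 0
    while True:
        total += terms(k) / survival
        if rng.uniform() < stop_prob:     # truncate here: K = k
            return total
        survival *= (1.0 - stop_prob)     # update to P(K >= k + 1)
        k += 1
```

For example, averaging many such estimates of the geometric series with terms 0.5**k recovers its exact sum of 2, even though each individual run evaluates only finitely many terms.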

Population Markov jump processes
Uniformisation and inference
Efficient Gibbs sampling for finite state pMJPs
Unbounded state-spaces
Expanding the likelihood
Random truncations
Metropolis–Hastings sampling
Modified Gibbs sampling
Results
Variance of the estimator
Benchmark data sets
SIR epidemic model
Genetic toggle switch
Related work
Conclusions