Abstract
Fitting stochastic kinetic models represented by Markov jump processes within the Bayesian paradigm is complicated by the intractability of the observed-data likelihood. There has therefore been considerable attention given to the design of pseudo-marginal Markov chain Monte Carlo algorithms for such models. However, these methods are typically computationally intensive, often require careful tuning and must be restarted from scratch upon receipt of new observations. Sequential Monte Carlo (SMC) methods on the other hand aim to efficiently reuse posterior samples at each time point. Despite their appeal, applying SMC schemes in scenarios with both dynamic states and static parameters is made difficult by the problem of particle degeneracy. A principled approach for overcoming this problem is to move each parameter particle through a Metropolis-Hastings kernel that leaves the target invariant. This rejuvenation step is key to a recently proposed SMC^2 algorithm, which can be seen as the pseudo-marginal analogue of an idealised scheme known as iterated batch importance sampling. Computing the parameter weights in SMC^2 requires running a particle filter over dynamic states to unbiasedly estimate the intractable observed-data likelihood up to the current time point. In this paper, we propose to use an auxiliary particle filter inside the SMC^2 scheme. Our method uses two recently proposed constructs for sampling conditioned jump processes, and we find that the resulting inference schemes typically require fewer state particles than when using a simple bootstrap filter. Using two applications, we compare the performance of the proposed approach with various competing methods, including two global MCMC schemes.
Highlights
Markov jump processes (MJPs) are routinely used to describe the dynamics of discrete-valued processes evolving continuously in time
We focus on the MJP representation of a stochastic kinetic model (SKM), whereby transitions of species in a reaction network are described probabilistically via an instantaneous reaction rate or hazard, which depends on the current system state and a set of rate constants; the latter are typically the object of inference
We model the system with a Markov jump process (MJP), so that for an infinitesimal time increment dt, the probability of a type i reaction occurring in the time interval (t, t + dt] is h_i(X_t, c_i) dt
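An MJP with the hazard structure above can be simulated exactly with Gillespie's direct method: draw an exponential waiting time with rate equal to the total hazard, then pick the reaction type with probability proportional to its hazard. The sketch below illustrates this for a toy immigration-death network, which is an assumption for illustration and not one of the applications in the paper.

```python
import math
import random

def gillespie(x0, c, hazards, stoich, t_max, rng=None):
    """Exact simulation of an MJP via Gillespie's direct method.

    hazards(x, c) returns the vector of reaction hazards h_i(x, c);
    stoich[i] is the state change caused by a type-i reaction.
    """
    rng = rng or random.Random(1)
    t, x = 0.0, list(x0)
    path = [(t, tuple(x))]
    while True:
        h = hazards(x, c)
        h0 = sum(h)
        if h0 <= 0.0:                     # no reaction can fire
            break
        t += rng.expovariate(h0)          # time to next reaction
        if t >= t_max:
            break
        u, i, acc = rng.random() * h0, 0, h[0]
        while acc < u:                    # reaction i chosen w.p. h_i / h_0
            i += 1
            acc += h[i]
        x = [xj + sj for xj, sj in zip(x, stoich[i])]
        path.append((t, tuple(x)))
    return path

# Illustrative immigration-death model (hypothetical example):
# R1: 0 -> X with hazard c1;  R2: X -> 0 with hazard c2 * x
hazards = lambda x, c: [c[0], c[1] * x[0]]
stoich = [(1,), (-1,)]
path = gillespie((5,), (2.0, 0.5), hazards, stoich, t_max=10.0)
```

The same routine, run between observation times, is what a bootstrap filter uses to propagate state particles forward.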
Summary
Markov jump processes (MJPs) are routinely used to describe the dynamics of discrete-valued processes evolving continuously in time. The output of the algorithm can be used to estimate the model evidence at virtually no additional computational cost. This feature is useful in the context of model selection, for example, when choosing between competing reaction networks based on a given data set. The simplest implementation of SMC^2 uses a bootstrap filter over dynamic states in both the reweighting and move steps. This is likely to be inefficient unless the noise in the measurement error process dominates the intrinsic stochasticity in the MJP. In this case, highly variable estimates of the observed-data likelihood will lead to small effective sample sizes, increasing the rate at which the resample-move step is triggered.
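The bootstrap filter referred to above propagates state particles blindly from the transition dynamics and weights them by the observation density, yielding an unbiased estimate of the observed-data likelihood. A minimal sketch follows, using a generic propagate/weight interface; the toy Gaussian state-space model at the end is an assumption for illustration only, standing in for forward simulation of the MJP between observation times.

```python
import math
import random

def bootstrap_filter(y, n_part, sample_x0, propagate, obs_lpdf, rng=None):
    """Bootstrap particle filter returning an unbiased estimate of the
    log observed-data likelihood log p(y_{1:T} | theta)."""
    rng = rng or random.Random(0)
    xs = [sample_x0(rng) for _ in range(n_part)]
    log_like = 0.0
    for yt in y:
        xs = [propagate(x, rng) for x in xs]       # blind forward simulation
        lw = [obs_lpdf(yt, x) for x in xs]
        m = max(lw)
        w = [math.exp(l - m) for l in lw]          # stabilised weights
        sw = sum(w)
        log_like += m + math.log(sw / n_part)      # incremental likelihood
        xs = rng.choices(xs, weights=w, k=n_part)  # multinomial resampling
    return log_like

# Illustrative linear-Gaussian stand-in (hypothetical, not from the paper):
sample_x0 = lambda rng: rng.gauss(0.0, 1.0)
propagate = lambda x, rng: 0.9 * x + rng.gauss(0.0, 0.5)
obs_lpdf = lambda y, x: (-0.5 * ((y - x) / 0.3) ** 2
                         - math.log(0.3 * math.sqrt(2.0 * math.pi)))
ll_hat = bootstrap_filter([0.1, -0.2, 0.3], 200, sample_x0, propagate, obs_lpdf)
```

Because the propagation ignores the incoming observation, the weights (and hence the likelihood estimate) become highly variable when measurement noise is small relative to the intrinsic stochasticity of the latent process, which is precisely the inefficiency the summary describes.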