Abstract
Conditional particle filters (CPFs) are powerful smoothing algorithms for general nonlinear/non-Gaussian hidden Markov models. However, CPFs can be inefficient or difficult to apply with diffuse initial distributions, which are common in statistical applications. We propose a simple but generally applicable auxiliary variable method, which can be used together with the CPF to perform efficient inference with diffuse initial distributions. The method only requires simulatable Markov transitions that are reversible with respect to the initial distribution, which may be improper. We focus in particular on random walk type transitions that are reversible with respect to a uniform initial distribution (on some domain), and on autoregressive kernels for Gaussian initial distributions. We propose to use online adaptations within the methods. In the case of the random walk transition, our adaptations use the estimated covariance and acceptance rate adaptation, and we detail their theoretical validity. We tested our methods with a linear Gaussian random walk model, a stochastic volatility model, and a stochastic epidemic compartment model with a time-varying transmission rate. The experimental findings demonstrate that our method works reliably with little user specification and can mix substantially better than a direct particle Gibbs algorithm that treats the initial states as parameters.
Highlights
In statistical applications of general state space hidden Markov models (HMMs), commonly known as state space models, it is often desirable to initialise the latent state of the model with a diffuse initial distribution.
Our approach may be seen as an instance of the general ‘pseudo-observation’ framework of Fearnhead and Meligkotsidou (2016), but we are unaware of earlier work on the specific class of methods we focus on here.
We presented a simple general auxiliary variable method for the conditional particle filter (CPF) for HMMs with diffuse initial distributions and focused on two concrete instances of it: the FDI-CPF for a uniform initial density M1 and the DGI-CPF for a Gaussian M1.
Summary
In statistical applications of general state space hidden Markov models (HMMs), commonly known as state space models, it is often desirable to initialise the latent state of the model with a diffuse (uninformative) initial distribution (cf. Durbin and Koopman 2012). By ‘diffuse’ we mean the general scenario where the first marginal of the smoothing distribution is highly concentrated relative to the prior of the latent Markov chain, which may be improper. The conditional particle filter (CPF) (Andrieu et al. 2010), and in particular its backward sampling variants (Whiteley 2010; Lindsten et al 2014), has been found to provide efficient smoothing even with long data records, both empirically (e.g. Fearnhead and Künsch 2018) and theoretically.
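To illustrate the kind of building block the method requires, the following is a minimal sketch (not the paper's implementation) of an autoregressive Markov kernel that is reversible with respect to a Gaussian initial distribution N(m, s^2); the parameter names `m`, `s`, and `rho` are illustrative. Reversibility here means the kernel leaves the Gaussian invariant, which can be checked empirically:

```python
import numpy as np

# Autoregressive (Crank-Nicolson type) kernel:
#   x' = m + rho*(x - m) + sqrt(1 - rho^2) * s * eps,  eps ~ N(0, 1).
# This kernel is reversible with respect to N(m, s^2).
def ar_kernel(x, m, s, rho, rng):
    eps = rng.standard_normal(np.shape(x))
    return m + rho * (x - m) + np.sqrt(1.0 - rho**2) * s * eps

# Empirical invariance check: start from N(m, s^2) and apply the kernel;
# the marginal distribution should remain (approximately) N(m, s^2).
rng = np.random.default_rng(0)
m, s, rho = 1.0, 2.0, 0.7
x = m + s * rng.standard_normal(100_000)  # samples from the target
y = ar_kernel(x, m, s, rho, rng)
print(y.mean(), y.std())  # both remain close to m = 1.0 and s = 2.0
```

Because simulating such a transition is all the auxiliary variable method needs, the same template applies with a random walk kernel that is reversible with respect to a uniform distribution on a domain.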