Abstract

In this article we focus on maximum likelihood estimation (MLE) for the static model parameters of hidden Markov models (HMMs). We consider the case where one cannot, or does not want to, compute the conditional likelihood density of the observation given the hidden state, because of computational complexity or analytical intractability. Instead, we assume that one may obtain samples from this conditional likelihood and hence use approximate Bayesian computation (ABC) approximations of the original HMM. Although these ABC approximations induce a bias, it can be controlled to arbitrary precision via a positive parameter ϵ: the bias decreases as ϵ decreases. We first establish that, when an ABC approximation of the HMM is used for a fixed batch of data, the bias of the resulting log-marginal likelihood and its gradient is no worse than \(\mathcal{O}(n\epsilon)\), where n is the total number of data points. Therefore, when using gradient methods to perform MLE for the ABC approximation of the HMM, one may expect parameter estimates of reasonable accuracy. To compute an estimate of the unknown, fixed model parameters, we propose a gradient approach based on simultaneous perturbation stochastic approximation (SPSA) and sequential Monte Carlo (SMC) for the ABC approximation of the HMM. The performance of this method is illustrated using two numerical examples.
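To make the proposed combination concrete, the sketch below is a minimal illustration, not the authors' implementation: an ABC bootstrap particle filter estimates the ABC log-marginal likelihood of a toy scalar HMM (where we pretend the observation density can only be sampled), and SPSA uses two such estimates per iteration to form a finite-difference gradient. The model, the uniform ABC kernel, the gain sequences, and all function names are assumptions made for exposition.

```python
# A minimal sketch (assumed toy model, not the paper's implementation) of
# SPSA + an ABC particle filter for MLE in an HMM with tolerance eps.
import numpy as np

rng = np.random.default_rng(0)

def simulate_data(theta, n):
    """Toy HMM: x_t = theta * x_{t-1} + N(0,1); y_t sampled from g(y | x_t).
    We pretend g's density is unavailable and we can only sample from it."""
    x, ys = 0.0, np.empty(n)
    for t in range(n):
        x = theta * x + rng.normal()
        ys[t] = x + rng.normal()                          # sample from g(y | x)
    return ys

def abc_log_likelihood(theta, ys, eps, n_particles=500):
    """ABC bootstrap particle filter: instead of evaluating g(y_t | x_t),
    draw a pseudo-observation per particle and weight it with a uniform
    ABC kernel of width eps around the real observation y_t."""
    x, loglik = np.zeros(n_particles), 0.0
    for y in ys:
        x = theta * x + rng.normal(size=n_particles)      # propagate particles
        y_sim = x + rng.normal(size=n_particles)          # pseudo-observations
        w = (np.abs(y_sim - y) < eps).astype(float)       # ABC kernel weights
        if w.sum() == 0:                                  # all particles rejected
            return -np.inf
        loglik += np.log(w.mean() / (2 * eps))            # ABC likelihood factor
        x = rng.choice(x, size=n_particles, p=w / w.sum())  # multinomial resample
    return loglik

def spsa_mle(ys, eps, theta0=0.2, n_iter=200):
    """SPSA: one random +/- perturbation yields a two-sided finite-difference
    estimate of the gradient of the ABC log-marginal likelihood."""
    theta = theta0
    for k in range(1, n_iter + 1):
        a_k, c_k = 0.05 / k, 0.1 / k ** 0.101             # illustrative gains
        delta = rng.choice([-1.0, 1.0])                   # Rademacher perturbation
        g = (abc_log_likelihood(theta + c_k * delta, ys, eps)
             - abc_log_likelihood(theta - c_k * delta, ys, eps)) / (2 * c_k * delta)
        if np.isfinite(g):                                # skip failed estimates
            theta += a_k * g                              # stochastic gradient ascent
    return theta

ys = simulate_data(theta=0.8, n=100)
print("SPSA estimate of theta:", spsa_mle(ys, eps=0.5))
```

Smaller values of eps reduce the ABC bias, consistent with the \(\mathcal{O}(n\epsilon)\) bound, but also increase the rejection rate and hence the variance of the likelihood estimate, so in practice eps trades bias against Monte Carlo error.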

