Abstract

In this talk I consider sequential Monte Carlo (SMC) methods for hidden Markov models. In the scenario where the conditional density of the observations given the latent state is intractable, we give a simple approximate Bayesian computation (ABC) approximation of the model, along with some basic SMC algorithms for sampling from the associated filtering distribution. We then consider the problem of smoothing, given access to a batch data set. We present a simulation technique which combines forward-only smoothing (Del Moral et al., 2011) and particle Markov chain Monte Carlo (Andrieu et al., 2010), yielding an algorithm that scales linearly in the number of particles.
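As an illustration of the ABC filtering idea sketched above, the following is a minimal sketch of an ABC bootstrap particle filter. The model here is purely illustrative (a linear-Gaussian latent chain with an additive-noise observation channel, which we pretend is intractable): instead of evaluating the observation density, each particle simulates a pseudo-observation and is weighted by a uniform ABC kernel of radius `eps`. All names and the model are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def abc_bootstrap_filter(y, n_particles=1000, eps=0.5, rng=None):
    """ABC approximation of the bootstrap particle filter.

    Illustrative model: X_k = 0.9 X_{k-1} + N(0, 1); Y_k | X_k is
    treated as intractable, so we simulate a pseudo-observation
    U_k ~ g(. | X_k) and weight with the kernel 1{|U_k - y_k| < eps}.
    Returns the sequence of ABC filtering means E[X_k | y_{1:k}].
    """
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(size=n_particles)              # initial particle cloud
    filt_means = []
    for yk in y:
        x = 0.9 * x + rng.normal(size=n_particles)        # propagate latent state
        u = x + rng.normal(size=n_particles)              # pseudo-observation draw
        w = (np.abs(u - yk) < eps).astype(float)          # uniform ABC kernel weight
        if w.sum() == 0:                                  # all rejected: fall back to uniform
            w = np.ones(n_particles)
        w /= w.sum()
        filt_means.append(np.sum(w * x))
        idx = rng.choice(n_particles, size=n_particles, p=w)  # multinomial resampling
        x = x[idx]
    return np.array(filt_means)
```

As `eps` shrinks, the ABC filter targets the true filtering distribution more closely but rejects more pseudo-observations, so in practice `eps` trades off bias against weight degeneracy.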

Highlights

  • The hidden Markov model (HMM) is an important statistical model in many fields including bioinformatics (e.g. Durbin et al. (1998)), econometrics (e.g. Kim, Shephard and Chib (1998)) and population genetics (e.g. Felsenstein and Churchill (1996)); see Cappé, Rydén and Moulines (2005) for a recent overview

  • It follows from the previous section that performing approximate Bayesian computation (ABC) maximum likelihood estimation (MLE) is equivalent to estimating the parameter by taking a data set generated by one of the original HMMs {Xk, Yk}k≥0 and finding the value of θ which maximises the likelihood of that data set under the corresponding perturbed HMM {Xk, Yk}k≥0

  • Further, we provide an analysis of the performance of noisy ABC MLE relative to the standard MLE by comparing their asymptotic variances

Summary

Introduction

The hidden Markov model (HMM) is an important statistical model in many fields including bioinformatics (e.g. Durbin et al. (1998)), econometrics (e.g. Kim, Shephard and Chib (1998)) and population genetics (e.g. Felsenstein and Churchill (1996)); see Cappé, Rydén and Moulines (2005) for a recent overview. Often one has a range of HMMs parameterised by a parameter vector θ taking values in some compact subset Θ of Euclidean space. Given a sequence of observations Y1, …, Yn, the objective is to find the parameter vector θ∗ ∈ Θ that corresponds to the particular HMM from which the data were generated. A common approach to estimating θ∗ is maximum likelihood estimation (MLE). The parameter estimate, denoted θn, is obtained by maximising the likelihood of the observations.
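When the HMM's likelihood is not available in closed form, a standard approach is to estimate it with a particle filter and maximise the estimate over θ. The sketch below does this for an illustrative AR(1)-plus-noise HMM (the model, the grid search, and all names are assumptions for the example, not the paper's method); a fixed seed gives common random numbers across θ values, which keeps the estimated likelihood surface smooth enough to maximise.

```python
import numpy as np

def pf_loglik(theta, y, n_particles=500, rng=None):
    """Bootstrap particle filter estimate of log p_theta(y_{1:n})
    for the illustrative HMM:
        X_k = theta * X_{k-1} + N(0, 1),   Y_k | X_k ~ N(X_k, 1).
    A fixed default seed gives common random numbers across theta.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = rng.normal(size=n_particles)
    ll = 0.0
    for yk in y:
        x = theta * x + rng.normal(size=n_particles)      # propagate
        logw = -0.5 * (yk - x) ** 2 - 0.5 * np.log(2 * np.pi)
        m = logw.max()                                    # log-sum-exp trick
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())                        # likelihood increment
        x = x[rng.choice(n_particles, size=n_particles, p=w / w.sum())]
    return ll

# Crude MLE: maximise the estimated log-likelihood over a grid of theta,
# e.g. thetas = np.linspace(0.0, 0.95, 20) and pick argmax of pf_loglik.
```

In practice one would replace the grid search with gradient-free optimisation or, as in the talk, embed the likelihood estimate in a particle MCMC scheme.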

