Abstract

Hidden Markov models (HMMs) are a rich family of probabilistic time series models with a long and successful history of applications in natural language processing, speech recognition, computer vision, bioinformatics, and many other areas of engineering, statistics and computer science. A defining property of HMMs is that the time series is modelled in terms of a number of discrete hidden states. Usually, the number of such states is specified in advance by the modeller, but this limits the flexibility of HMMs. Recently, attention has turned to Bayesian methods which can automatically infer the number of states in an HMM from data. A particularly elegant and flexible approach is to assume a countable but unbounded number of hidden states; this is the nonparametric Bayesian approach to hidden Markov models first introduced by Beal et al. [4] and called the infinite HMM (iHMM). In this chapter, we review the literature on Bayesian inference in HMMs, focusing on nonparametric Bayesian models. We show the equivalence between the Polya urn interpretation of the infinite HMM and the hierarchical Dirichlet process interpretation of the iHMM in Teh et al. [35]. We describe efficient inference algorithms, including the beam sampler which uses dynamic programming. Finally, we illustrate how to use the iHMM on a simple sequence labelling task and discuss several extensions.
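As a rough illustration of the dynamic programming the abstract alludes to, the sketch below implements the scaled forward algorithm for an ordinary finite HMM in Python. This is not the chapter's beam sampler: the beam sampler runs a forward pass of this general shape, but uses auxiliary slice variables so that only a finite subset of the iHMM's unbounded state space is visited at each time step. The function name and all toy parameters here are hypothetical, chosen only for the example.

    # Scaled forward algorithm for a finite HMM: computes log p(x_1..x_T)
    # by dynamic programming over the hidden states. Toy sketch only; the
    # parameters below are hypothetical and not taken from the chapter.
    import numpy as np

    def forward_log_likelihood(pi, A, B, obs):
        # pi  : (K,)   initial state distribution
        # A   : (K, K) transitions, A[i, j] = p(s_t = j | s_{t-1} = i)
        # B   : (K, V) emissions,   B[k, v] = p(x_t = v | s_t = k)
        # obs : list of integer observations in {0, ..., V-1}
        alpha = pi * B[:, obs[0]]          # p(s_1, x_1)
        norm = alpha.sum()
        log_lik = np.log(norm)
        alpha /= norm                      # rescale to avoid underflow
        for x in obs[1:]:
            alpha = (alpha @ A) * B[:, x]  # propagate states, then emit
            norm = alpha.sum()
            log_lik += np.log(norm)
            alpha /= norm
        return log_lik

    # Toy example: two hidden states, two observation symbols.
    pi = np.array([0.6, 0.4])
    A  = np.array([[0.7, 0.3],
                   [0.2, 0.8]])
    B  = np.array([[0.9, 0.1],
                   [0.3, 0.7]])
    print(forward_log_likelihood(pi, A, B, [0, 1, 1, 0]))

Each iteration costs O(K^2) for K states, which is what makes conditioning on a finite slice of states essential before this machinery can be applied to the infinite model.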
