Abstract

This chapter elaborates on Hidden Markov Models (HMMs). In general, an HMM is a type of stochastic model appropriate for nonstationary stochastic sequences whose statistical properties undergo distinct random transitions among a set of, say, k different stationary processes. HMMs are thus used to model piecewise stationary processes, where a stationary process is one whose statistical properties do not change with time. It is assumed that a set of observations (feature vectors), x1, x2, ..., xN ∈ R^l, is given. Each observation is allowed to be generated (emitted) by a different source, and each source is described by different statistical properties. Assuming two sources (stationary processes), k = 2, one may generate data points sequentially according to either a Gaussian or a Chi-square distribution. Each observation may have been emitted by either of the two sources, but one does not have access to that information. A hidden Markov model is a way to model such a nonstationary process. During recognition, it is assumed that one has more than one HMM, each described by a different set of parameters and each modeling a different piecewise stationary process. Given an observation sequence and a number, M, of HMMs (each one modeling a different process), the goal of the recognition phase is to decide which of the M HMMs is most likely to have emitted the received sequence.
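The following is a minimal sketch of the setup described above, assuming a toy two-state HMM whose transition matrix, initial distribution, and emission parameters are illustrative choices rather than values from the chapter. State 0 emits Gaussian samples and state 1 emits Chi-square samples; a standard forward algorithm (in the log domain) is then used to score the observation sequence under competing models, mirroring the recognition step.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy 2-state HMM (k = 2): hypothetical parameters for illustration only.
A = np.array([[0.95, 0.05],
              [0.10, 0.90]])          # A[i, j] = P(state j at t+1 | state i at t)
pi = np.array([0.5, 0.5])             # initial state probabilities

emissions = [stats.norm(loc=0.0, scale=1.0),   # source 0: Gaussian
             stats.chi2(df=3)]                 # source 1: Chi-square

def sample_sequence(N):
    """Generate N observations; the state path stays hidden from the recognizer."""
    obs = []
    s = rng.choice(2, p=pi)
    for _ in range(N):
        obs.append(emissions[s].rvs(random_state=rng))
        s = rng.choice(2, p=A[s])
    return np.array(obs)

def log_likelihood(obs, A, pi, emissions):
    """Forward algorithm in the log domain: returns log P(x1, ..., xN | model)."""
    logB = np.stack([d.logpdf(obs) for d in emissions], axis=1)  # N x k emission log-probs
    log_alpha = np.log(pi) + logB[0]
    for t in range(1, len(obs)):
        log_alpha = logB[t] + np.logaddexp.reduce(
            log_alpha[:, None] + np.log(A), axis=0)
    return np.logaddexp.reduce(log_alpha)

# Recognition: score the same observation sequence under M competing HMMs
# (here M = 2, differing only in their transition matrices) and pick the best.
x = sample_sequence(200)
competing_A = [A, np.array([[0.5, 0.5], [0.5, 0.5]])]
scores = [log_likelihood(x, Am, pi, emissions) for Am in competing_A]
print("log-likelihoods:", scores, "-> chosen model:", int(np.argmax(scores)))
```

In this sketch the recognition decision reduces to evaluating the sequence likelihood under each candidate model and selecting the maximum; in practice the models would also differ in their emission distributions and would be trained from data rather than fixed by hand.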
