Abstract

Hidden Markov models (HMMs) are stochastic models that have been widely used in speech and image processing in recent years. The number of states in a classical HMM is usually predefined and fixed during training, and may differ considerably from the true number of hidden states of the signal source. Moreover, in pattern recognition applications, different signal sources may have different numbers of states, and thus cannot all be well modeled by HMMs with a fixed state number. This paper proposes a self-adaptive design method for HMMs to overcome this limitation. Under this design, an HMM automatically matches its state number to the true state number of the signal source being modeled. To obtain a practicable training algorithm for the new HMM, this paper first introduces an entropic definition of the a priori probability of the model and a corresponding maximum a posteriori (MAP) training strategy, and then derives a MAP training algorithm for the fixed-state-number case based on the deterministic annealing (DA) technique. Building on this MAP training, a complete training method, named the shrink algorithm, is finally proposed for the new HMM. Experimental results indicate that self-adaptive HMMs model stochastic signals more accurately and achieve better pattern recognition performance than classical models.
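The entropic MAP idea in the abstract can be illustrated on a single probability vector. Assuming a Brand-style entropic prior, P(θ) ∝ ∏ᵢ θᵢ^θᵢ (the paper's exact prior may differ), the MAP estimate given expected counts ωᵢ satisfies ωᵢ/θᵢ + log θᵢ = const, which has a closed form in the Lambert W function. The sketch below (function name, tolerances, and the bisection scheme are illustrative choices, not the paper's algorithm) solves for the Lagrange multiplier by bisection; note how weakly supported components are pushed below their maximum-likelihood values, which is the mechanism that lets a shrink-style algorithm prune under-used states.

```python
import numpy as np
from scipy.special import lambertw

def entropic_map(omega, tol=1e-10):
    """MAP estimate of a probability vector under the entropic prior
    P(theta) ~ prod_i theta_i**theta_i, given expected counts omega.

    Stationarity gives theta_i = -omega_i / W_{-1}(-omega_i * exp(1 + lam)),
    where lam is a Lagrange multiplier chosen so that sum(theta) == 1.
    """
    omega = np.asarray(omega, dtype=float)
    # Assumed preconditions for this sketch: positive expected counts,
    # with at least one count >= 1 so a normalizing lam exists.
    assert np.all(omega > 0) and omega.max() >= 1.0

    def theta(lam):
        x = -omega * np.exp(1.0 + lam)
        # W_{-1} branch: its values lie in (-inf, -1], so theta_i in (0, omega_i].
        return omega / (-lambertw(x, k=-1).real)

    # Largest lam keeping every Lambert-W argument >= -1/e (real branch).
    hi = -2.0 - np.log(omega.max()) - 1e-12
    lo, step = hi - 1.0, 1.0
    while theta(lo).sum() > 1.0:      # expand downward until sum(theta) < 1
        lo -= step
        step *= 2.0
    while hi - lo > tol:              # sum(theta) increases monotonically in lam
        mid = 0.5 * (lo + hi)
        if theta(mid).sum() > 1.0:
            hi = mid
        else:
            lo = mid
    return theta(0.5 * (lo + hi))
```

Compared with the maximum-likelihood estimate ωᵢ/Σω, this MAP estimate is sharper: large components gain probability mass while small ones lose it, so states that accumulate almost no expected count can be trimmed.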
