Abstract

A training procedure is proposed for improving the discriminative power of a maximum likelihood (ML) hidden Markov model (HMM) without sacrificing its classification capability. The proposed discriminative HMM consists of a conventionally trained ML model and a discriminative model. The training data are utilized in two different modes. In the first mode, conventional ML models, denoted as master models, are trained. In the second mode, discriminative models, denoted as slave models, are trained by aligning training tokens of a given word with the master models of all other words; the slave-model parameters are then estimated by maximizing the conditional likelihood of the training tokens given that they are aligned with incorrect-word master models. In recognition, the scores of the master models are reinforced by the scores obtained from comparing the input tokens with the corresponding slave models. A speaker-independent, 39-word alpha-digit database was used to evaluate the new training procedure. Experimental results indicate that the new training procedure can improve recognition performance by 1%–2%. However, the discriminative power of the slave models decreases gradually as more sophisticated models and features are used.
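As a rough illustration of the recognition rule described above, the sketch below combines each candidate word's master-model score with a score from its slave model. The weight `alpha`, the score values, and the exact form of the combination are illustrative assumptions; the abstract only states that the master-model scores are reinforced by the slave-model comparison, not how.

```python
def combined_score(master_loglik, slave_loglik, alpha=0.5):
    """Reinforce the master-model score with the slave-model score.

    master_loglik: log-likelihood of the input token under the word's
                   master (ML-trained) HMM.
    slave_loglik:  score of the input token under the word's slave
                   (discriminative) model; the weight `alpha` and the
                   additive combination are assumptions for illustration.
    """
    return master_loglik + alpha * slave_loglik


def recognize(token_scores):
    """Return the word whose combined master/slave score is highest.

    token_scores: dict mapping word -> (master_loglik, slave_loglik)
    """
    return max(token_scores, key=lambda w: combined_score(*token_scores[w]))


# Hypothetical scores for one input token over three candidate words.
scores = {
    "one":  (-120.3, -15.2),
    "two":  (-118.9, -40.7),
    "zero": (-125.1, -12.4),
}
print(recognize(scores))
```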
