Abstract

Conventional training of a hidden Markov model (HMM) is performed by an expectation-maximization (EM) algorithm under a maximum likelihood (ML) criterion. It has been reported that an incremental variant of maximum a posteriori (MAP) estimation can yield substantial speed improvements; however, that approach requires a prior distribution when training starts, and an appropriate prior is difficult to find in some cases. This paper presents a new approach to efficient training of HMM parameters under the standard ML criterion, with no prior distribution required. The algorithm repeatedly selects a subset of the training data, updates the parameters from that subset, and iterates until convergence. A solid theoretical foundation ensures a monotone improvement of the likelihood, so stable convergence is guaranteed. Experimental results indicate substantially faster convergence than the standard batch training algorithm while maintaining the same level of recognition performance.
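The following is a minimal sketch of the kind of incremental loop the abstract describes: the training data are partitioned into blocks, one block's sufficient statistics are refreshed per step, and the parameters are re-estimated from the running totals. It is written for a discrete-observation HMM with illustrative function names (`forward_backward`, `m_step`, `incremental_em`); the block schedule and statistics-pooling scheme shown here are assumptions for illustration, not the authors' exact algorithm.

```python
# Sketch of incremental EM for a discrete-observation HMM (illustrative only).
import numpy as np

def forward_backward(obs, pi, A, B):
    """E-step for one sequence: expected start, transition, and emission counts
    computed with scaled forward-backward recursions."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N)); beta = np.zeros((T, N)); c = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta                      # state posteriors per frame
    start = gamma[0]
    trans = np.zeros((N, N))
    for t in range(T - 1):
        trans += alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :] / c[t + 1]
    emit = np.zeros_like(B)
    for t in range(T):
        emit[:, obs[t]] += gamma[t]
    return start, trans, emit

def m_step(start, trans, emit):
    """Re-estimate parameters from pooled sufficient statistics."""
    pi = start / start.sum()
    A = trans / trans.sum(axis=1, keepdims=True)
    B = emit / emit.sum(axis=1, keepdims=True)
    return pi, A, B

def incremental_em(blocks, pi, A, B, n_passes=5):
    """blocks: list of blocks, each a list of integer observation sequences."""
    # Initialise per-block sufficient statistics with the starting model.
    stats = []
    for blk in blocks:
        per_seq = [forward_backward(o, pi, A, B) for o in blk]
        stats.append(tuple(sum(x) for x in zip(*per_seq)))
    for _ in range(n_passes):
        for k, blk in enumerate(blocks):
            # Partial E-step: refresh only block k's statistics.
            per_seq = [forward_backward(o, pi, A, B) for o in blk]
            stats[k] = tuple(sum(x) for x in zip(*per_seq))
            # M-step from the totals over all blocks.
            start, trans, emit = (sum(x) for x in zip(*stats))
            pi, A, B = m_step(start, trans, emit)
    return pi, A, B
```

Because every M-step uses the pooled statistics of all blocks while only one block's E-step is recomputed, each pass through the data triggers many parameter updates instead of one, which is the source of the speed-up the abstract reports.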
