Abstract

We show that noise can speed training in hidden Markov models (HMMs). The new Noisy Expectation-Maximization (NEM) algorithm shows how to inject noise when learning the maximum-likelihood estimate of the HMM parameters. Noise injection applies to HMM training because the underlying Baum-Welch training algorithm is a special case of the Expectation-Maximization (EM) algorithm. The NEM theorem gives a sufficient condition for such an average noise boost. The condition reduces to a simple quadratic constraint on the noise when the HMM uses a Gaussian mixture model at each state. Simulations show that a noisy HMM converges faster than a noiseless HMM on the TIMIT data set.
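
The abstract does not spell out the quadratic constraint, but the NEM literature on Gaussian mixtures uses a per-sample screen of the form ||y + n - mu_j||^2 <= ||y - mu_j||^2 for every mixture mean mu_j, i.e. n.(n - 2(mu_j - y)) <= 0. The following is a minimal Python sketch of that screen, assuming this form of the condition; the function name nem_noise, the rejection-sampling loop, and all parameter values are illustrative and not taken from the paper:

    import numpy as np

    def nem_noise(y, means, sigma, rng, max_tries=100):
        """Draw one noise vector that passes the quadratic NEM screen.

        y     : observation vector, shape (d,)
        means : Gaussian mixture means for the current state, shape (K, d)
        sigma : standard deviation of the candidate Gaussian noise

        Returns the zero vector if no candidate passes within max_tries,
        so the update degrades gracefully to an ordinary noiseless EM step.
        """
        for _ in range(max_tries):
            n = rng.normal(scale=sigma, size=y.shape)
            # Quadratic condition: ||y + n - mu_j||^2 <= ||y - mu_j||^2
            # for every mixture mean mu_j, i.e. n.(n - 2*(mu_j - y)) <= 0.
            if all(n @ (n - 2.0 * (mu - y)) <= 0.0 for mu in means):
                return n
        return np.zeros_like(y)

    # Toy usage: an observation outside the convex hull of the means,
    # so noise that nudges it toward both means can exist.
    rng = np.random.default_rng(0)
    means = np.array([[0.0, 0.0], [3.0, 3.0]])
    y = np.array([5.0, 5.0])
    y_noisy = y + nem_noise(y, means, sigma=0.1, rng=rng)

In practice NEM implementations also anneal the noise power toward zero over the training iterations so that the noisy estimate converges to the same fixed point as noiseless EM; the annealing schedule is a design choice not stated in this abstract.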
