Abstract

The parameters of the standard Hidden Markov Model (HMM) framework for speech recognition are typically trained via Maximum Likelihood. However, better recognition performance is achievable with discriminative training criteria like Maximum Mutual Information (MMI) or Minimum Phone Error (MPE). While it is generally accepted that these discriminative criteria are better suited to minimizing Word Error Rate, there is very little qualitative intuition for how the improvements are achieved. Through a series of “resampling” experiments, we show that discriminative training (MPE in particular) appears to be compensating for a specific incorrect assumption of the HMM—that speech frames are conditionally independent.
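The conditional independence assumption referred to above can be made concrete with a small sketch (not from the paper; the Gaussian emission model and all parameter values below are illustrative assumptions): given a state alignment, the HMM scores a frame sequence as a product of per-frame emission likelihoods, so the joint log-likelihood is a simple per-frame sum with no cross-frame terms.

```python
import numpy as np

# Illustrative sketch of the HMM conditional independence assumption:
# p(x_1..x_T | q_1..q_T) = prod_t p(x_t | q_t), so the log-likelihood
# of a frame sequence given a state alignment is a per-frame sum.
# Emission densities here are stand-in unit-variance Gaussians.

def frame_log_likelihood(frame, state_mean):
    # log N(frame; state_mean, I), dropping the constant term
    return -0.5 * np.sum((frame - state_mean) ** 2)

def sequence_log_likelihood(frames, state_means):
    # Conditional independence: no term couples frame t to frame t+1;
    # each frame is scored against its aligned state independently.
    return sum(frame_log_likelihood(x, m)
               for x, m in zip(frames, state_means))

rng = np.random.default_rng(0)
frames = rng.normal(size=(5, 2))    # 5 frames of 2-dim features (toy data)
state_means = np.zeros((5, 2))      # aligned state means (toy values)
total = sequence_log_likelihood(frames, state_means)
```

Real speech frames are strongly correlated across time, which is exactly the mismatch the resampling experiments probe: resampling frames so that they actually are conditionally independent removes much of the gap between ML and discriminatively trained models.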
