Abstract

Conventional speech recognition systems are based on hidden Markov models (HMMs) with Gaussian mixture emission models, commonly referred to as Gaussian HMMs (GHMMs). Discriminative log-linear models are an alternative modeling approach and have recently been investigated in speech recognition. GHMMs are directed models with constraints, e.g., positivity of variances and normalization of conditional probabilities, whereas log-linear models do not require such constraints. This paper compares the posterior form of typical generative models relevant to speech recognition with their log-linear counterparts. The key result is the derivation of the equivalence of these two different approaches under weak assumptions. In particular, we study Gaussian mixture models, part-of-speech bigram tagging models, and finally GHMMs. This result unifies two important but fundamentally different modeling paradigms in speech recognition at the functional level. Furthermore, the paper presents comparative experimental results for speech tasks of varying complexity, including a digit string recognition task and large-vocabulary continuous speech recognition tasks.
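
As a minimal illustration of the stated equivalence (a sketch under the simplest assumptions, not the paper's general derivation): for a single-Gaussian classifier with class priors p(c) and class-conditional densities sharing one covariance matrix Sigma (an assumption made here for illustration), the quadratic term in x cancels between numerator and denominator, and the class posterior takes exact log-linear (softmax) form:

% Sketch only: single Gaussians per class with a shared covariance
% matrix \Sigma (illustrative assumption; the paper treats mixtures,
% tagging models, and GHMMs in general).
\begin{align*}
p(c \mid x)
  &= \frac{p(c)\,\mathcal{N}(x \mid \mu_c, \Sigma)}
          {\sum_{c'} p(c')\,\mathcal{N}(x \mid \mu_{c'}, \Sigma)}
   = \frac{\exp\!\left(\lambda_c^{\top} x + \alpha_c\right)}
          {\sum_{c'} \exp\!\left(\lambda_{c'}^{\top} x + \alpha_{c'}\right)}, \\
\lambda_c &= \Sigma^{-1}\mu_c,
\qquad
\alpha_c = -\tfrac{1}{2}\,\mu_c^{\top}\Sigma^{-1}\mu_c + \log p(c).
\end{align*}

Class-dependent covariances can be absorbed in the same way by adding second-order features of x to the log-linear model, in the spirit of the weak assumptions mentioned above.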
