Abstract

One of the major reasons for using language models in speech recognition is to reduce the search space. Context-free grammars or finite-state grammars are suitable for this purpose; however, such models ignore the stochastic characteristics of a language. In this paper, three stochastic language models are investigated: 1) a trigram model of Japanese syllables, 2) a stochastic shift/reduce model in LR parsing, and 3) a trigram model of context-free rewriting rules. These stochastic language models are incorporated into a syntax-directed HMM-based speech recognition system and evaluated in phrase recognition experiments. The phrase recognition rate is improved from 88.2% to 93.2%.
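The syllable trigram model (model 1) can be illustrated with a minimal maximum-likelihood sketch: count syllable trigrams and their bigram contexts over training sequences, then estimate P(w3 | w1, w2) as a count ratio. The function names and start/end markers below are illustrative assumptions, not details taken from the paper.

```python
from collections import defaultdict

def train_trigram(sequences):
    """Count trigrams and their bigram contexts over token sequences,
    padding each sequence with start and end markers."""
    tri = defaultdict(int)
    bi = defaultdict(int)
    for seq in sequences:
        toks = ["<s>", "<s>"] + list(seq) + ["</s>"]
        for i in range(2, len(toks)):
            bi[(toks[i - 2], toks[i - 1])] += 1
            tri[(toks[i - 2], toks[i - 1], toks[i])] += 1
    return tri, bi

def trigram_prob(tri, bi, w1, w2, w3):
    """Maximum-likelihood estimate of P(w3 | w1, w2);
    returns 0.0 for unseen contexts (no smoothing in this sketch)."""
    ctx = bi.get((w1, w2), 0)
    return tri.get((w1, w2, w3), 0) / ctx if ctx else 0.0
```

In a recognizer, such probabilities would rescore or prune hypotheses during the search; a practical system would also smooth the estimates for unseen trigrams.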
