Abstract
One of the major reasons for using language models in speech recognition is to reduce the search space. Context-free grammars or finite-state grammars are suitable for this purpose; however, such models ignore the stochastic characteristics of a language. In this paper, three stochastic language models are investigated: 1) a trigram model of Japanese syllables, 2) a stochastic shift/reduce model in LR parsing, and 3) a trigram model of context-free rewriting rules. These stochastic language models are incorporated into a syntax-directed, HMM-based speech recognition system and evaluated in phrase recognition experiments. The phrase recognition rate improves from 88.2% to 93.2%.
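The first of the three models, a syllable trigram, can be illustrated with a generic maximum-likelihood sketch: the probability of a syllable given its two predecessors is estimated from trigram and bigram counts over a training corpus. The function names, the `<s>`/`</s>` padding symbols, and the toy romanized corpus below are all illustrative assumptions, not the paper's actual data or formulation.

```python
from collections import defaultdict

def train_trigram(syllable_sequences):
    """Count trigrams and bigrams to estimate P(s3 | s1, s2) by MLE.

    Hypothetical helper: pads each sequence with start/end symbols so
    sentence-initial syllables also receive trigram contexts.
    """
    tri = defaultdict(int)
    bi = defaultdict(int)
    for seq in syllable_sequences:
        padded = ["<s>", "<s>"] + seq + ["</s>"]
        for i in range(len(padded) - 2):
            bi[(padded[i], padded[i + 1])] += 1
            tri[(padded[i], padded[i + 1], padded[i + 2])] += 1
    return tri, bi

def trigram_prob(tri, bi, s1, s2, s3):
    """MLE estimate P(s3 | s1, s2) = count(s1,s2,s3) / count(s1,s2)."""
    denom = bi.get((s1, s2), 0)
    return tri.get((s1, s2, s3), 0) / denom if denom else 0.0

# Toy corpus of romanized syllable sequences (illustrative only).
corpus = [["to", "o", "kyo", "o"], ["to", "o", "ka"]]
tri, bi = train_trigram(corpus)
p = trigram_prob(tri, bi, "to", "o", "kyo")  # 1/2: "to o" seen twice, once followed by "kyo"
```

In a recognizer, such probabilities would be combined with acoustic scores to rank competing syllable hypotheses; the paper's models additionally smooth and integrate these scores with LR parsing, which this sketch omits.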