Abstract

This paper deals with the use of a stochastic context‐free grammar (SCFG) for large vocabulary continuous speech recognition; in particular, an SCFG with phrase‐level dependency rules is built. Unlike n‐gram models, the SCFG can describe not only local constraints but also global constraints pertaining to the sentence as a whole, making language models with great expressive power possible. However, the SCFG parameters must be estimated with the inside‐outside algorithm, whose computational cost is proportional to the cube of the number of nonterminal symbols and the cube of the input string length. Because of this difficulty in handling large text corpora, the SCFG has rarely been applied as a language model for very large vocabulary continuous speech recognition. The proposed phrase‐level dependency SCFG allows a significant reduction of the computational load. In experiments with the EDR corpus, the proposed method proved effective. In experiments with the Mainichi corpus, a large‐scale phrase‐level dependency SCFG was built for a very large vocabulary continuous speech recognition system. Speech recognition tests with a vocabulary of about 5000 words showed that the proposed method alone did not match the trigram model in performance; however, when used in combination with a trigram model, it reduced the error rate by 14% compared to the trigram model alone. © 2002 Wiley Periodicals, Inc. Syst Comp Jpn, 33(13): 48–59, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.1172
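To make the cubic cost cited above concrete, the sketch below shows the inside pass of the inside‐outside algorithm for a toy SCFG in Chomsky normal form. The grammar, rule probabilities, and sample sentence are purely illustrative assumptions, not taken from the paper; the point is to exhibit where the cubic factors in string length and nonterminal count arise.

```python
from collections import defaultdict

# Toy grammar (illustrative, not from the paper), in Chomsky normal form.
# Binary rules: (A, B, C) -> P(A -> B C)
binary_rules = {
    ("S", "NP", "VP"): 1.0,
    ("VP", "V", "NP"): 1.0,
}
# Lexical rules: (A, word) -> P(A -> word)
lexical_rules = {
    ("NP", "she"): 0.5,
    ("NP", "fish"): 0.5,
    ("V", "eats"): 1.0,
}

def inside_probabilities(words):
    """Compute inside probabilities beta[(i, j, A)] = P(A =>* w_i ... w_j).

    The nested loops over span length, start position, and split point
    give the O(L^3) factor in the input length L; iterating over all
    binary rules contributes the factor cubic in the nonterminal count.
    """
    n = len(words)
    beta = defaultdict(float)
    # Base case: spans of length 1 come from lexical rules.
    for i, w in enumerate(words):
        for (A, word), p in lexical_rules.items():
            if word == w:
                beta[(i, i, A)] += p
    # Recursive case: longer spans built from binary rules A -> B C.
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):  # split point between B and C
                for (A, B, C), p in binary_rules.items():
                    beta[(i, j, A)] += p * beta[(i, k, B)] * beta[(k + 1, j, C)]
    return beta

beta = inside_probabilities(["she", "eats", "fish"])
print(beta[(0, 2, "S")])  # sentence probability: 0.5 * 1.0 * 0.5 = 0.25
```

The full inside‐outside procedure would pair this with an analogous outside pass and use the two quantities to reestimate rule probabilities; the paper's phrase‐level dependency rules are aimed at shrinking exactly this dynamic‐programming workload.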
