Abstract

Topic language models, most of which revolve around discovering "word-document" co-occurrence dependence, have attracted significant attention and shown good performance in a wide variety of speech recognition tasks over the years. In this paper, a new topic language model, named the word vicinity model (WVM), is proposed to explore the co-occurrence relationship between words, as well as long-span latent topical information, for language model adaptation. A search history is modeled as a composite WVM for predicting a decoded word. The underlying characteristics and several model structures are extensively investigated, and the performance of WVM is analyzed and verified by comparison with several existing topic language models. Moreover, we present a new modeling approach for our recently proposed word topic model (WTM) and design an efficient way to simultaneously extract "word-document" and "word-word" co-occurrence characteristics through a shared set of latent topics. Experiments on broadcast news transcription demonstrate the utility of the presented models.
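The abstract describes composing a search history into a topic mixture that scores the next decoded word. As a minimal sketch of this general idea (not the paper's exact WVM formulation), one can average the topic posteriors of the history words and mix topic-conditional word distributions; all distributions and numbers below are illustrative assumptions:

```python
# Hypothetical sketch of a composite topic-mixture prediction:
# P(w | history) = sum_t P(w | t) * P(t | history),
# where P(t | history) is averaged over the history words' topic mixtures.
# The vocabularies, topics, and probabilities here are invented for illustration.

# Topic-conditional word probabilities P(w | t); each row sums to 1.
p_word_given_topic = {
    "finance": {"stock": 0.40, "market": 0.35, "game": 0.05, "team": 0.20},
    "sports":  {"stock": 0.05, "market": 0.05, "game": 0.45, "team": 0.45},
}

# Per-word topic posteriors P(t | w), as a "word-word" co-occurrence
# model might supply them; each row sums to 1.
p_topic_given_word = {
    "stock": {"finance": 0.9, "sports": 0.1},
    "team":  {"finance": 0.1, "sports": 0.9},
}

def predict(word, history):
    """Score P(word | history) by mixing topic-conditional word models,
    weighting each topic by its average posterior over the history."""
    topics = p_word_given_topic.keys()
    p_topic_hist = {
        t: sum(p_topic_given_word[h][t] for h in history) / len(history)
        for t in topics
    }
    return sum(p_word_given_topic[t][word] * p_topic_hist[t] for t in topics)
```

Because each topic's word distribution is normalized, the mixture itself is a proper distribution over the vocabulary for any history, which is the property that lets such a composite model interpolate with (or adapt) a background n-gram language model.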
