Abstract

This paper considers the problem of language modeling for automatic speech recognition in loose word order languages. In such languages, classical n-gram language models are less effective, because the ordered word sequences encountered in the training corpus are less specific than in strict word order languages. Since a word set appearing in a phrase is likely to reappear in other permutations, all permutations of word sequences encountered in the corpus should be given additional likelihood in the language model. We propose a method of n-gram language model construction that assigns additional probability to word tuples that are permutations of word sequences found in the training corpus. The backoff bigram language model paradigm is adopted. The modification of the typical model construction method consists in increasing the backed-off probability of bigrams that never appeared in the corpus but whose elements appeared in the same phrases separated by other words. The proposed modification can be applied to any method of language model construction based on discounting of maximum likelihood (ML) probabilities. The performance of various LM creation methods adapted in the proposed way was compared in an application to Polish speech recognition.
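As a rough illustration of the modified backoff scheme described above, the sketch below builds a bigram model with absolute discounting and, for unseen bigrams whose words co-occurred in a training phrase, multiplies the backed-off unigram weight by a boost factor before renormalizing. The discounting scheme, the boost factor `gamma`, and all names are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch, assuming absolute discounting and a multiplicative
# boost `gamma` for unseen bigrams whose words co-occurred in the same
# phrase. These choices are assumptions for illustration only.
from collections import Counter, defaultdict
from itertools import combinations


def build_lm(phrases, d=0.5, gamma=2.0):
    unigrams, bigrams = Counter(), Counter()
    cooccur = set()  # unordered word pairs seen together in a phrase

    for phrase in phrases:
        unigrams.update(phrase)
        bigrams.update(zip(phrase, phrase[1:]))
        for pair in combinations(set(phrase), 2):
            cooccur.add(frozenset(pair))

    total = sum(unigrams.values())
    p_uni = {w: c / total for w, c in unigrams.items()}

    # Discounted ML bigram estimates P*(w|u); the discount d frees
    # probability mass that is redistributed over unseen bigrams.
    p_big = {(u, w): max(c - d, 0.0) / unigrams[u]
             for (u, w), c in bigrams.items()}

    # Probability mass left for backoff, per history word u.
    left_mass = defaultdict(lambda: 1.0)
    for (u, _), p in p_big.items():
        left_mass[u] -= p

    def prob(u, w):
        """P(w|u): seen bigrams use the discounted estimate; unseen
        bigrams back off to the unigram model, with pairs that shared
        a phrase boosted before renormalization."""
        if (u, w) in p_big:
            return p_big[(u, w)]

        def weight(v):
            boost = gamma if frozenset((u, v)) in cooccur else 1.0
            return boost * p_uni[v]

        # Renormalize so probabilities over all unseen continuations
        # still sum exactly to the freed backoff mass.
        denom = sum(weight(v) for v in p_uni if (u, v) not in p_big)
        return left_mass[u] * weight(w) / denom

    return prob


# Toy example: the same word set appears in two different orders.
phrases = [["ala", "ma", "kota"],
           ["kota", "ma", "ala"],
           ["ala", "lubi", "psa"]]
lm = build_lm(phrases)
# "psa lubi" was never observed as a bigram, but "psa" and "lubi"
# co-occurred in a phrase, so its backed-off probability is boosted
# relative to a pair of words that never shared a phrase.
print(lm("psa", "lubi"))
```

In this sketch the boost is a single constant; the distances between the co-occurring words and the choice of discounting method (e.g. Good-Turing instead of absolute discounting) could be varied, since the modification only changes how the freed backoff mass is distributed.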
