Abstract

Statistical language models, trained on large text corpora, are an integral component of many speech and natural language processing systems, such as speech recognition and machine translation. A language model is a probabilistic model that describes the distribution of natural language. Over the last few decades, the n-gram language model (LM) has been the most popular technique because it is simple and effective. However, n-gram language models are trained under the maximum likelihood criterion, which is suboptimal for speech recognition systems. In this paper, a discriminatively trained language model (DLM), which directly targets the minimization of speech recognition word error rate (WER), was employed to improve the performance of a speech recognition system. In particular, part-of-speech (POS) features were used to train the DLM alongside the n-gram features. Experimental results showed that the DLM with n-gram features gave a 1% absolute reduction in WER; combining n-gram features with POS features, the DLM obtained a further 0.4% absolute reduction in WER.
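
As a rough illustration of the kind of discriminative training the abstract describes, the sketch below implements a generic perceptron-style reranker over recognition n-best lists, where each hypothesis carries feature counts (e.g., n-gram and POS-tag counts) and a word error count against the reference. The class and function names are hypothetical, and the structured-perceptron update shown is a standard choice for DLM training, not necessarily the exact criterion used in the paper.

```python
# Minimal sketch of perceptron-style discriminative LM training over
# n-best lists. Hypothesis features and error counts are assumed to be
# precomputed; all names here are illustrative, not from the paper.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Hypothesis:
    features: dict  # e.g. {("bigram", "w1 w2"): 2, ("pos", "NN VB"): 1}
    errors: int     # word errors vs. the reference transcript


def train_dlm(nbest_lists, epochs=5):
    """Learn feature weights that rerank n-best lists toward low WER."""
    weights = defaultdict(float)
    for _ in range(epochs):
        for nbest in nbest_lists:
            # Oracle: the hypothesis with the fewest word errors.
            oracle = min(nbest, key=lambda h: h.errors)
            # Model's current pick: the highest-scoring hypothesis.
            score = lambda h: sum(weights[f] * v for f, v in h.features.items())
            best = max(nbest, key=score)
            if best.errors > oracle.errors:
                # Perceptron update: move weights toward the oracle's
                # features and away from the wrongly preferred ones.
                for f, v in oracle.features.items():
                    weights[f] += v
                for f, v in best.features.items():
                    weights[f] -= v
    return weights
```

In the setting the paper describes, each hypothesis's feature dictionary would contain both n-gram counts and POS-based counts, so the learned weights directly trade off lexical and syntactic evidence when reranking recognition output.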
