Abstract

This article describes investigations into the use of phonologically-constrained morphological analysis (PCMA) in language modelling for continuous speech recognition. PCMA provides a means of modelling text as a sequence of morphemes in a way that retains compatibility with the linear concatenative model of pronunciation used in conventional decoders. Experiments were performed in English using the 100-million-word British National Corpus as source material. We show that PCMA leads to smaller but more generative pronunciation lexicons, and that it does not weaken the quality of the acoustic decoding as measured in terms of recognition lattices. For trigram language models, perplexity figures are poorer for PCMA than for words, as might be expected given the reduction in sentence span. However, recognition results show small improvements in accuracy under some conditions, particularly when morph lattices are decoded with word-trigram models. We explore the capabilities of PCMA across vocabulary size, language-model training-set size, and post-processing strategy. The best results show a 16% relative reduction in word error rate.
