Abstract

We propose a new language model that represents long-term dependencies between word sequences using a multilevel hierarchy. We call this model $MC_n^\nu$, where $n$ is the maximum number of words in a sequence and $\nu$ is the maximum number of levels. The originality of this model, which extends the multigram model, is its ability to capture long-distance dependencies between dependent variable-length sequences. To discover the variable-length sequences and to build the hierarchy, we use a set of 233 syntactic classes derived from eight elementary grammatical classes of French. The $MC_n^\nu$ model learns hierarchical word patterns and uses them to reevaluate and filter the n-best utterance hypotheses output by our speech recognizer, MAUD. The model has been trained on a corpus of 43 million words extracted from the French newspaper "Le Monde" and uses a vocabulary of 20,000 words. Tests were conducted on 300 sentences. Compared to the class trigram and the baseline multigram approach, we report perplexity reductions of 17% and 20%, respectively. Rescoring the original n-best hypotheses improved the word error rate by 7% and 2% relative to the class trigram and multigram models, respectively.
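
To make the rescoring step concrete: in n-best rescoring, each hypothesis from the recognizer carries an acoustic score, and a language model's log-probability is added with an interpolation weight before re-ranking. The sketch below is a minimal illustration of that generic idea, not the authors' $MC_n^\nu$ implementation; the hypothesis strings, score values, and the `LM_WEIGHT` constant are all hypothetical.

```python
# Hypothetical n-best list: (hypothesis text, acoustic log-score,
# language-model log-probability). All values are illustrative only.
nbest = [
    ("le monde entier regarde", -120.4, -18.2),
    ("le monde en tiers regarde", -119.8, -24.7),
    ("les mondes entiers regardent", -123.1, -21.5),
]

LM_WEIGHT = 10.0  # assumed interpolation weight between acoustic and LM scores


def rescore(hypotheses, lm_weight):
    """Rank hypotheses by combined acoustic + weighted LM log-score."""
    return sorted(
        hypotheses,
        key=lambda h: h[1] + lm_weight * h[2],  # acoustic + weighted LM
        reverse=True,  # highest combined log-score first
    )


best = rescore(nbest, LM_WEIGHT)[0]
print(best[0])  # hypothesis with the highest combined score
```

A stronger language model shifts the combined score toward linguistically plausible hypotheses, which is how the reported word error rate improvements over the class trigram and multigram baselines arise.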
