Abstract

We present a new approach to dynamically creating and managing different language models for use in a spoken dialogue system. We apply an interpolation-based approach, using several measures obtained by the Dialogue Manager both to decide which LMs the system will interpolate and to estimate the interpolation weights. We propose to use not only semantic information (the concepts extracted from each recognized utterance), but also information obtained by the dialogue manager module (DM), namely the objectives or goals the user wants to fulfill, and the classification of those concepts according to the inferred goals. The experiments we have carried out show improvements in word error rate when the parsed concepts and the inferred goals from a speech utterance are used to rescore the same utterance.

Index Terms: spoken dialogue systems, dynamic language modeling, automatic speech recognition
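The interpolation idea described above can be sketched as a weighted mixture of goal-specific language models, where the mixture weights would come from the dialogue manager's inferred goals. The sketch below is illustrative only: the LMs, vocabulary, and weights are hypothetical toy values, not the paper's actual models or estimation method.

```python
def interpolate(lms, weights, word):
    """Interpolated probability of `word` under a linear mixture of LMs."""
    assert abs(sum(weights) - 1.0) < 1e-9, "interpolation weights must sum to 1"
    return sum(w * lm.get(word, 0.0) for lm, w in zip(lms, weights))

# Two toy goal-specific unigram LMs (hypothetical probabilities).
lm_flight_goal = {"flight": 0.5, "ticket": 0.3, "hotel": 0.2}
lm_hotel_goal  = {"flight": 0.1, "ticket": 0.2, "hotel": 0.7}

# Hypothetical DM-estimated weights after inferring the user's current goal.
weights = [0.8, 0.2]
p = interpolate([lm_flight_goal, lm_hotel_goal], weights, "flight")
# p = 0.8 * 0.5 + 0.2 * 0.1 = 0.42
```

In a rescoring setup, such interpolated probabilities would replace the static LM scores when re-ranking recognition hypotheses for the utterance.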

