Abstract

In this paper, we describe a semantic interpreter and a cooperative response generator for a multimodal dialogue system that combines speech input, touch-screen input, speech output, and graphical output. The system understands spontaneous speech, which exhibits many ambiguous phenomena such as interjections, ellipses, inversions, repairs, and unknown words, and responds to the user's utterance. Some utterances, however, fail to be analyzed, owing to misrecognition by the speech recognizer, incompleteness in the semantic interpreter, and gaps in the response generator's database. We therefore improved the semantic interpreter to make it more robust. If a user's query does not provide enough conditions for the system to answer, the dialogue manager should ask the user for the missing conditions or have the user select among candidates. Furthermore, if the system cannot retrieve any information related to the user's question, the generator should propose an alternative plan. Based on these considerations, we developed a cooperative response generator for the dialogue system. We report evaluation results for both the semantic interpreter and the cooperative response generator.
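The cooperative response strategy outlined above can be sketched in minimal form. All names here (the slot list, the toy database, `generate_response`) are illustrative assumptions for exposition, not the authors' actual implementation:

```python
# Hypothetical sketch of the cooperative response strategy: (1) ask for
# missing conditions, (2) answer from the database when possible,
# (3) propose an alternative plan when retrieval fails.
# REQUIRED_SLOTS and DATABASE are invented placeholders.

REQUIRED_SLOTS = ["destination", "date"]  # conditions needed to answer

DATABASE = {
    ("kyoto", "monday"): "Train A departs at 9:00.",
}

def generate_response(slots):
    """Return a cooperative response for a (possibly incomplete) query."""
    # 1. Missing conditions -> query the user for the first one.
    missing = [s for s in REQUIRED_SLOTS if s not in slots]
    if missing:
        return f"Could you tell me the {missing[0]}?"
    # 2. Try to retrieve an answer from the database.
    key = (slots["destination"], slots["date"])
    if key in DATABASE:
        return DATABASE[key]
    # 3. No matching information -> propose an alternative plan.
    alternatives = [k for k in DATABASE if k[0] == slots["destination"]]
    if alternatives:
        return f"No result for {slots['date']}; how about {alternatives[0][1]}?"
    return "I found nothing; would another destination work?"

print(generate_response({"destination": "kyoto"}))
print(generate_response({"destination": "kyoto", "date": "monday"}))
```

The point of the sketch is the ordering of the three cases: clarification questions take priority over retrieval, and an alternative proposal is offered only after retrieval with the full set of conditions fails.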
