Abstract

Explanations are important in many areas of human-computer interaction. In help systems, tutoring systems and expert systems, lengthy explanations of a topic or justifications of a reasoning process may be required. When a long explanation is given, there is a good chance that at some point the user will ‘lose track’ and fail to grasp its main content. There has therefore been recent emphasis on generating explanations and textual descriptions tailored to the knowledge and goals of the particular user. However, there is no guarantee that such a user model will be accurate. By allowing interactions with the user within the explanation, the accuracy of the model becomes less crucial: if users are confused in the middle of an explanation they can interrupt and seek clarification, and the system can make explicit checks on their understanding. This paper therefore presents an approach to explanation generation based on the assumption that explanations must both use and track a model of what the user knows, and must also involve interactions with the user. The framework is based on sociolinguistic studies of human-human interaction as well as artificial intelligence work on explanation, text planning, tutoring and user modelling. It has been implemented and used for generating tutorial explanatory dialogues in electronics.
