Abstract

Human verbal explanations are essentially interactive. If someone is giving a complex explanation, the hearer will be given the opportunity to indicate whether they are following as the explanation proceeds, and if necessary interrupt with clarification questions. These interactions allow the speaker to both clear up the hearer's immediate difficulties as they arise, and to update assumptions about their level of understanding. Better models of the hearer's level of understanding in turn allow the speaker to continue the explanation in a more appropriate manner, lessening the risk of continuing confusion. Despite its apparent importance, existing explanation and text generation systems fail to allow for this sort of interaction. Although some systems allow follow-up questions at the end of an explanation, they assume that a complete explanation has been planned and generated before such interactions are allowed. However, for complex explanations interactions with the user should take place as the explanation progresses, and should influence how that explanation continues. This paper describes the EDGE system, which is able to plan complex, extended explanations which allow such interactions with the user. The system can update assumptions about the user's knowledge on the basis of these interactions, and uses this information to influence the detailed further planning of the explanation. When the user appears confused, the system can attempt to fill in missing knowledge or to explain things another way.
