Abstract

An intelligent advisory system should be able to provide explanatory responses that correct mistaken user beliefs. This task requires the system to form a model of the user's relevant beliefs and to understand and address feedback from users who are not satisfied with the advice they receive. This paper presents a method by which a detailed model of the user's relevant domain-specific, plan-oriented beliefs can be formed gradually by trying to understand user feedback in an ongoing advisory dialog. In particular, we consider the problem of constructing an automated advisor capable of participating in a dialog about which UNIX command should be used to perform a particular task. We show how to construct a model of a UNIX user's beliefs about UNIX commands from several different classes of user feedback. Unlike other approaches to inferring user beliefs, ours focuses on inferring only the small set of beliefs likely to be contributing to the user's misconception. And unlike other approaches to providing advice, we focus on the task of understanding the user's descriptions of perceived problems with that advice.
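
The paper's inference mechanism is not reproduced here, but the loop the abstract describes, attributing a small set of plan-oriented beliefs to the user and revising them as feedback arrives, can be sketched briefly. In the hypothetical Python sketch below, `BeliefModel`, `process_feedback`, and the three feedback kinds are illustrative assumptions standing in for the paper's actual belief representation and feedback classes, not its method.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class BeliefModel:
    """Tracks which effects the user appears to believe each command has."""
    expected_effects: dict[str, set[str]] = field(default_factory=dict)

    def assume(self, command: str, effect: str) -> None:
        # Record an apparent user belief that `command` achieves `effect`.
        self.expected_effects.setdefault(command, set()).add(effect)

    def retract(self, command: str, effect: str) -> None:
        # Drop a belief the user's feedback has contradicted.
        self.expected_effects.get(command, set()).discard(effect)


def process_feedback(model: BeliefModel, kind: str,
                     command: str, effect: str) -> None:
    """Revise the belief model from one piece of user feedback.

    The feedback kinds handled here are invented stand-ins for the
    paper's classes of user feedback.
    """
    if kind == "advice_rejected":
        # "That command won't do it": retract the belief we attributed.
        model.retract(command, effect)
    elif kind in ("alternative_proposed", "goal_restated"):
        # "Why not use X?" / "Actually, I want to ...": attribute a
        # belief that this command achieves the (possibly new) goal.
        model.assume(command, effect)


if __name__ == "__main__":
    model = BeliefModel()
    # The advisor assumed the user believes `rm` deletes a directory;
    # the user rejects that advice and proposes `rmdir` instead.
    model.assume("rm", "delete a directory")
    process_feedback(model, "advice_rejected", "rm", "delete a directory")
    process_feedback(model, "alternative_proposed", "rmdir", "delete a directory")
    print(model.expected_effects)
    # -> {'rm': set(), 'rmdir': {'delete a directory'}}
```

The key property this sketch tries to mirror is the one the abstract emphasizes: only beliefs implicated by the user's own feedback are added or retracted, rather than maintaining a complete model of everything the user might know.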
