Abstract

The demand for intelligent virtual advisors is rising rapidly, and with it the need to understand the reasoning behind why a particular piece of advice is given to the user. Personalized explanation is regarded as a reliable way to improve the user's understanding of, and trust in, the virtual advisor. So far, cognitive explainable agents have used reason explanations that refer to their own mental state (beliefs and goals) to explain their own behaviour. However, when the explainable agent plays the role of a virtual advisor and recommends a behaviour for the human to perform, it is better to refer to the user's mental state, rather than the agent's, to form a reason explanation. In this paper, we develop an explainable virtual advisor (XVA) that communicates with the user to elicit the user's beliefs and goals and then tailors its advice and explanation to the user's mental state. We tested the proposed XVA with university students, where it provided tips to reduce study stress. We measured the impact of receiving three different patterns of tailored explanations (belief-based, goal-based, and belief-and-goal-based) on the students' intentions to change their behaviour. The results showed that the intention to change is related not only to the explanation pattern but also to the user's context, the relationship built with the agent, the type of behaviour recommended, and the user's current intention to perform that behaviour.
