Abstract

Dialogue-based conversational recommender systems (DCRSs) have become a new trend in recommender systems (RSs), allowing users to communicate with the system in natural language to facilitate feedback provision and product exploration. However, little work has been done to empirically study how users perceive and interact with such systems and, more importantly, how to best support users in providing feedback on the recommendations they receive. In this article, we aim to develop effective critiquing mechanisms for DCRSs to improve their feedback elicitation process (i.e., allowing users to critique the current recommendation during the dialogue). Specifically, we have implemented three prototype systems, each featuring a different critiquing technique: user-initiated critiquing, progressive system-suggested critiquing, and cascading system-suggested critiquing. We then conducted two task-oriented user studies involving 292 subjects to evaluate the three prototypes. In particular, we consider two typical types of user tasks in RSs: the basic recommendation task (BRT, i.e., looking for items according to the user's preferences) and the exploration-oriented task (EOT, i.e., exploring different types of items). Results show that EOT stimulates more user interaction, whereas BRT results in higher user satisfaction. Moreover, when users perform EOT, the type of critiquing technique is more likely to influence user perception and to moderate the relationships between certain interaction metrics and users' perceived serendipity. The findings suggest effective critiquing techniques for enhancing the interaction between users and the recommendation chatbot when the system makes recommendations for different purposes.
