Abstract

In conversational search, agents can interact with users by asking clarifying questions to increase their chance of finding better results. Many recent works and shared tasks in both the natural language processing and information retrieval communities have focused on identifying when to ask clarifying questions and on methods for generating them. These works assume that asking a clarifying question is a safe alternative to retrieving results. Because existing conversational search models are far from perfect, it is both possible and common for them to retrieve or generate poor clarifying questions. Asking too many clarifying questions can also exhaust a user's patience when the user values search efficiency over correctness. Because of these risks, asking clarifying questions can backfire and harm the user's search experience. In this work, we propose a simulation framework that models the risk of asking questions in conversational search, and we further revise a risk-aware conversational search model to control this risk. We demonstrate the model's robustness and effectiveness through extensive experiments on three conversational datasets, MSDialog, Ubuntu Dialog Corpus, and Opendialkg, comparing it with multiple baselines. We show that the risk-control module can work with two different re-ranker models and outperforms all baselines in most of our experiments.
