Abstract

By providing a high degree of freedom to explore information, QA (question-and-answer) agents in museums are expected to help visitors gain knowledge about a range of exhibits. Since information exploration with a QA agent often involves a series of interactions, proper guidance is required to support users as they find out what they want to know and broaden their knowledge. In this paper, we validate topic recommendation strategies for system-initiative QA agents that suggest multiple topics in different ways to influence users’ information exploration: helping users proceed to deeper levels within topics on the same subject, offering them topics on various subjects, or providing them with selections at random. To examine how these different recommendations influence the user experience, we conducted a user study with 50 participants, which showed that providing recommendations on various subjects expands users’ interest in subjects, supports longer conversations, and increases willingness to use QA agents in the future.
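The three recommendation strategies named above can be sketched in a few lines. This is a minimal, hypothetical illustration: the subject names, depth levels, and function signature are assumptions for the example, not details from the paper.

```python
import random

# Illustrative topic store: each subject maps to an ordered list of
# increasingly deep topic levels (names are hypothetical, not from the paper).
TOPICS = {
    "Moonlight Jar": ["origin", "glaze technique", "symbolism"],
    "Royal Seal":    ["origin", "materials", "historical use"],
    "Celadon Vase":  ["origin", "firing process", "trade routes"],
}

def recommend(strategy, current_subject, depth, k=3):
    """Return up to k (subject, topic) suggestions under one strategy."""
    if strategy == "deeper":
        # Proceed to deeper levels in topics on the same subject.
        return [(current_subject, t)
                for t in TOPICS[current_subject][depth:depth + k]]
    if strategy == "various":
        # Offer entry-level topics on other subjects.
        return [(s, levels[0])
                for s, levels in TOPICS.items() if s != current_subject][:k]
    if strategy == "random":
        # Provide selections at random across all subjects and levels.
        pool = [(s, t) for s, levels in TOPICS.items() for t in levels]
        return random.sample(pool, k)
    raise ValueError(f"unknown strategy: {strategy}")
```

For instance, after a user asks about the "Moonlight Jar" at depth 1, the `deeper` strategy would suggest the remaining levels on that subject, while `various` would point to the other subjects instead.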

Highlights

  • With the growing expectations of a high degree of freedom in questioning, QA agents have been applied to various domains, such as education, financial services, and counseling [1,2,3]

  • To evaluate how given recommendations affect the process of information exploration with QA agents, we designed context-based topic recommendation strategies

  • Selections help users explore topics with focused themes during information exploration


Summary

Introduction

With the growing expectations of a high degree of freedom in questioning, QA agents have been applied to various domains, such as education, financial services, and counseling [1,2,3]. Due to the enormity of the cultural heritage digitally archived in museum collections, people find the exploration of such a huge repository of information intimidating and difficult. For this reason, QA agents are considered a reasonable solution for guiding people in navigating these knowledge spaces effectively [6]. QA agents in many contexts receive natural text input from users, and prior studies have mainly focused on improving the accuracy with which the intention of users’ queries is understood [8,9]. Such user-initiative interaction is appropriate when the user knows which questions to ask, but common users may find it burdensome to type proper questions that the system can interpret [6,10]. We conducted a user test with 50 participants to investigate the effect of level- and topic-based recommendations on expanding users’ interest across topics, engaging them in information exploration, and their willingness to use QA agents in the future.

Agent Initiative Strategies for Helping Users
Context-Based Database Construction
Show Me the Way
Methodology
Questionnaire Analysis
Results
Participants
Exploration on the Same Subject
Analysis of Topic Precision
Analysis of Topic Exploration
Conclusions
