Abstract

Studies of information exploration based on the Bayesian information gain (BIG) framework have generally focused on explicit interactions, such as user browsing via keyboard and mouse. However, if implicit information, such as eye-gaze data, is incorporated into the BIG framework alongside explicit interactions, the system can respond more accurately and proactively provide the target information to the user. We therefore propose the BIGaze system, which uses real-time eye-gaze data to predict and recommend target information by analyzing the browsing and gazing patterns that accompany the user's information search. To validate the performance of the proposed system in the information exploration process, we conducted comparative user experiments with BIGaze, BIGbase (a BIG system without gaze data), and a non-BIG system. The experimental results reveal that BIGaze proactively captures user exploration targets, confidently generates user-intended recommendations, and supports a wide range of user exploration patterns.

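The abstract does not include an implementation, but the core loop of a BIG-style system can be illustrated briefly: the system keeps a belief distribution over candidate targets, updates it with Bayes' rule when an observation (e.g., a gaze fixation) arrives, and scores candidate actions by their expected reduction in uncertainty. The sketch below is a minimal illustration under assumed names and a hypothetical gaze-likelihood model P(gaze | target); it is not the authors' implementation, and the numbers in the usage example are invented for demonstration only.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def bayes_update(prior, likelihood):
    """Posterior over candidate targets after one observation.

    likelihood[i] = P(observation | target = i); both arrays have length N.
    """
    posterior = prior * likelihood
    return posterior / posterior.sum()

def expected_information_gain(prior, likelihoods):
    """Expected entropy reduction over targets for one candidate observation channel.

    likelihoods[o, i] = P(observation o | target = i).
    """
    h_prior = entropy(prior)
    eig = 0.0
    for obs_lik in likelihoods:              # loop over possible observations
        p_obs = np.dot(obs_lik, prior)       # marginal probability of this observation
        if p_obs > 0:
            posterior = bayes_update(prior, obs_lik)
            eig += p_obs * (h_prior - entropy(posterior))
    return eig

# Illustrative only: three candidate documents, a uniform prior, and a noisy
# gaze model in which the gazed-at item is the true target 70% of the time.
prior = np.ones(3) / 3
gaze_likelihoods = np.array([
    [0.70, 0.15, 0.15],   # P(gaze on item 0 | target = 0, 1, 2)
    [0.15, 0.70, 0.15],
    [0.15, 0.15, 0.70],
])
print(expected_information_gain(prior, gaze_likelihoods))  # how informative a gaze sample is
print(bayes_update(prior, gaze_likelihoods[0]))            # belief after a gaze on item 0
```

In this reading, implicit gaze observations simply contribute additional likelihood terms to the same Bayesian update used for explicit browsing actions, which is how a gaze-aware BIG system could sharpen its belief about the user's target without extra clicks.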