Abstract

Search engines are widely used in our daily lives. Batch evaluation of how well search systems serve their users has long been an essential issue in the field of information retrieval. However, batch evaluation, which usually compares different search systems on offline collections, cannot directly take users' perception of the systems into consideration. Recently, substantial studies have focused on proposing effective evaluation metrics that model user behavior, bringing human factors into the loop of Web search evaluation. In this survey, we comprehensively review the development of user behavior modeling for Web search evaluation and related work on different model-based evaluation metrics. This overview of metrics shows how assumptions about, and modeling methods for, user behavior have evolved over time. We also describe methods for comparing model-based evaluation metrics in terms of how well they model user behavior and measure user satisfaction. Finally, we briefly discuss potential future research directions in this field.
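As one illustration of what such a model-based metric looks like (an example chosen here for concreteness, not quoted from the survey), Rank-Biased Precision (Moffat and Zobel, 2008) assumes a user who, after inspecting the result at rank k, continues to rank k+1 with a fixed persistence probability p:

RBP = (1 - p) \sum_{k=1}^{\infty} p^{k-1} \, r_k

where r_k is the relevance of the document at rank k. Different choices of p encode different assumptions about how deeply users scan a result list, which is exactly the kind of user behavior assumption the surveyed metrics formalize.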
