Abstract

Pre-retrieval Query Performance Prediction (QPP) methods are oblivious to the performance of the retrieval model, as they predict query difficulty prior to observing the set of documents retrieved for the query. Among pre-retrieval query performance predictors, specificity-based metrics investigate how corpus, query, and corpus-query level statistics can be used to predict the performance of a query. In this thesis, we explore how neural embeddings can be utilized to define corpus-independent and semantics-aware specificity metrics. Our metrics are based on the intuition that a term closely surrounded by other terms in the embedding space is more likely to be specific, while a term surrounded by less closely related terms is more likely to be generic. On this basis, we leverage geometric properties of embedded terms to define four groups of metrics: (1) neighborhood-based, (2) graph-based, (3) cluster-based, and (4) vector-based metrics. Moreover, we employ learning-to-rank techniques to analyze the importance of individual specificity metrics. To evaluate the proposed metrics, we have curated and publicly shared a test collection of term specificity measurements based on the Wikipedia category hierarchy and the DMOZ taxonomy. We report extensive experiments on the effectiveness of our metrics through metric comparison, an ablation study, and comparison against state-of-the-art baselines. We show that our proposed set of pre-retrieval QPP metrics, based on the properties of pre-trained neural embeddings, is more effective for performance prediction than state-of-the-art methods. We report our findings on the Robust04, ClueWeb09, and Gov2 corpora and their associated TREC topics.
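As a minimal, hypothetical sketch of the neighborhood-based intuition described above (not the exact metrics defined in the thesis): a term whose nearest neighbors in a pre-trained embedding space lie very close to it is treated as more specific than a term whose neighbors are spread out. The toy vectors and the function name `neighborhood_specificity` below are illustrative placeholders, not trained embeddings or the thesis's formulation.

```python
import numpy as np

# Toy, hand-made vectors standing in for pre-trained neural embeddings
# (purely illustrative; real metrics would use e.g. word2vec or GloVe vectors).
embeddings = {
    "retrieval": np.array([0.9, 0.1, 0.2]),
    "search":    np.array([0.85, 0.15, 0.25]),
    "query":     np.array([0.8, 0.2, 0.1]),
    "thing":     np.array([0.3, 0.4, 0.3]),
    "object":    np.array([0.1, 0.5, 0.6]),
    "item":      np.array([0.4, 0.1, 0.7]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def neighborhood_specificity(term, k=3):
    """Mean cosine similarity of a term to its k nearest neighbors.

    Higher values indicate a tightly packed neighborhood, which the
    neighborhood-based intuition associates with a more specific term.
    """
    target = embeddings[term]
    sims = sorted(
        (cosine(target, vec) for word, vec in embeddings.items() if word != term),
        reverse=True,
    )
    return sum(sims[:k]) / k

for term in ("retrieval", "thing"):
    print(term, round(neighborhood_specificity(term), 3))
```

The graph-, cluster-, and vector-based families mentioned in the abstract would replace the simple k-nearest-neighbor aggregation here with other geometric properties of the same embedding space.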

Highlights

  • Information Retrieval (IR) is the task of obtaining material relevant to an information need, which is represented by a query, from within comprehensive collections [1]

  • Empirical studies on pre-retrieval Query Performance Prediction (QPP) metrics have shown that while some metrics perform better on certain corpora and topic sets, no single metric or set of metrics outperforms the others across all topics and corpora [2]

  • We explored how the geometric properties of neural embeddings can be used to estimate term specificity in order to predict query performance


Summary

Introduction

1.1 Background and Problem Statement

Information Retrieval (IR) is the task of obtaining material relevant to an information need, which is represented by a query, from within comprehensive collections [1]. The problem of predicting the performance of an information retrieval system for a given query is called Query Performance Prediction (QPP) [2]. Predicting retrieval performance can be beneficial in several ways: for example, we can select a suitable retrieval system for a given query, or expand the query so that the information need is represented more effectively. Post-retrieval QPP methods are outside the scope of this thesis; we focus only on pre-retrieval methods, and among these, specificity-based query performance predictors. The objective of this thesis is to investigate the possibility of estimating term specificity by utilizing neural embedding-based representations of terms. Such specificity metrics can be used to predict query performance, based on the idea that the more specific a query is, the higher its expected performance.
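To connect term-level specificity to query-level performance prediction, a pre-retrieval predictor can aggregate per-term specificity scores over the query terms, for example by averaging or taking the maximum. The sketch below is a hypothetical aggregation layer: the placeholder scores, the `TERM_SPECIFICITY` table, and the choice of aggregator are assumptions for illustration, standing in for an embedding-based term metric such as the neighborhood sketch shown after the abstract, not the thesis's exact formulation.

```python
from statistics import mean

# Placeholder per-term specificity scores (in practice these would come
# from an embedding-based metric such as the neighborhood-based sketch above).
TERM_SPECIFICITY = {
    "hubble": 0.82,
    "telescope": 0.74,
    "achievements": 0.41,
    "information": 0.22,
    "about": 0.05,
}

def query_specificity(query, aggregator=mean, default=0.0):
    """Aggregate term-level specificity into a query-level QPP score.

    A higher score predicts better retrieval performance, following the
    idea that more specific queries are expected to perform better.
    """
    scores = [TERM_SPECIFICITY.get(t.lower(), default) for t in query.split()]
    return aggregator(scores) if scores else default

print(query_specificity("hubble telescope achievements"))  # relatively specific
print(query_specificity("information about"))              # relatively generic
```

In this setting, the predicted score is computed before any documents are retrieved, which is what distinguishes pre-retrieval QPP from post-retrieval approaches.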

