Abstract

Pseudo relevance feedback (PRF) is commonly used to boost the performance of traditional information retrieval (IR) models by using top-ranked documents to identify and weight new query terms, thereby reducing the effect of query-document vocabulary mismatches. While neural retrieval models have recently demonstrated promising results for ad-hoc document retrieval, combining them with PRF is not straightforward due to incompatibilities between existing PRF approaches and neural architectures. To bridge this gap, we propose an end-to-end neural PRF framework, coined NPRF, that enriches the representation of a user's information need from a single query to multiple PRF documents. NPRF can be combined with existing neural IR models by embedding them as building blocks, and we instantiate the framework with three state-of-the-art neural retrieval models: the unigram DRMM and KNRM models and the position-aware PACRR model. Extensive experiments on two standard test collections, TREC 1-3 and Robust04, confirm the effectiveness of NPRF in improving the performance of these three neural IR models. Further analysis shows that integrating the existing neural IR models within the NPRF framework reduces training and validation losses and, consequently, yields more effective learned ranking functions.
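The abstract describes NPRF as enriching the query representation with multiple PRF documents and embedding an existing neural IR model as a building block. The sketch below illustrates one plausible reading of such a combination: the target document is scored against each feedback document by an embedded neural model, and the contributions are weighted by how relevant each feedback document is to the original query. The names (`neural_rel`, `doc_weight`, `nprf_score`) and the exact weighting scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np


def nprf_score(query, target_doc, prf_docs, neural_rel, doc_weight):
    """Illustrative NPRF-style scoring (assumed names and combination scheme).

    query      -- the original user query
    target_doc -- the document to be (re-)scored
    prf_docs   -- top-ranked feedback documents from an initial retrieval run
    neural_rel -- an embedded neural IR model (e.g. a DRMM/KNRM/PACRR-style
                  scorer) that scores target_doc against a feedback document
                  used as an expanded representation of the information need
    doc_weight -- a function assigning each feedback document a weight,
                  e.g. derived from its initial retrieval score for the query
    """
    # Score the target document against each PRF document.
    per_doc_scores = np.array([neural_rel(d, target_doc) for d in prf_docs])

    # Weight each contribution by the feedback document's own relevance to
    # the query, then aggregate into a single relevance score.
    weights = np.array([doc_weight(query, d) for d in prf_docs])
    return float(np.dot(weights, per_doc_scores))
```

In an end-to-end setting such as the one the abstract describes, both the embedded neural scorer and the weighting would be differentiable, so the entire pipeline can be trained jointly from relevance labels.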
