Abstract

Pseudo-relevance feedback (PRF) has clear potential for enriching the representation of short queries. Traditional PRF methods treat top-ranked documents as feedback, since they are assumed to be relevant to the query. However, some of these feedback documents may in fact drift away from the query topic for a range of reasons and consequently degrade PRF system performance. Such documents constitute negative examples (negative feedback), yet they can also be valuable for retrieval. In this paper, a novel framework for query language model construction is proposed to improve retrieval performance by integrating both positive and negative feedback. First, an improvement-based method is proposed to automatically identify the type of each feedback document (i.e., positive or negative) according to whether the document enhances retrieval effectiveness. Next, based on the learned positive and negative examples, the positive and negative feedback models are estimated using an Expectation-Maximization algorithm under two assumptions: the positive term distribution is affected by the context term distribution, and the negative term distribution is affected by both the positive term distribution and the context term distribution (so that the positive feedback model promotes relevant documents in the ranking while the negative feedback model filters out irrelevant ones). Finally, a content-based representativeness criterion is proposed to select representative negative feedback documents. Experiments conducted on TREC collections demonstrate that the proposed approach achieves better retrieval accuracy and robustness than baseline methods.
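To make the EM-based estimation step more concrete, the sketch below shows the standard two-component mixture formulation commonly used for model-based feedback: each term occurrence in the feedback documents is assumed to come either from an unknown feedback topic model or from a background collection model, and EM re-estimates the topic model. This is a minimal illustration under those assumptions only; the function name `estimate_feedback_model`, the interpolation weight `lam`, and the whitespace tokenization are hypothetical, and the sketch does not reproduce the paper's full positive/negative formulation or its representativeness criterion.

```python
from collections import Counter

def estimate_feedback_model(feedback_docs, collection_model, lam=0.5, iters=30):
    """EM estimation of a single feedback language model theta_F.

    Assumption: each term occurrence in the feedback documents is drawn from
    a mixture of theta_F (with weight 1 - lam) and the background collection
    model (with weight lam). EM alternates between assigning each term to the
    two components (E-step) and re-estimating theta_F (M-step).
    """
    # Pool term counts over all feedback documents.
    counts = Counter()
    for doc in feedback_docs:
        counts.update(doc.split())

    vocab = list(counts)
    # Uniform initialization of the feedback model.
    theta = {w: 1.0 / len(vocab) for w in vocab}

    for _ in range(iters):
        # E-step: probability that each occurrence of w was generated by theta_F.
        z = {}
        for w in vocab:
            p_topic = (1.0 - lam) * theta[w]
            p_bg = lam * collection_model.get(w, 1e-12)
            z[w] = p_topic / (p_topic + p_bg)
        # M-step: re-estimate theta_F from the expected topic-generated counts.
        norm = sum(counts[w] * z[w] for w in vocab)
        theta = {w: counts[w] * z[w] / norm for w in vocab}

    return theta

# Illustrative usage with a toy (hypothetical) background model; in practice the
# background model is estimated from the whole collection.
if __name__ == "__main__":
    docs = [
        "query expansion enriches short queries",
        "pseudo relevance feedback expands the query with feedback terms",
    ]
    background = {"the": 0.06, "query": 0.002, "feedback": 0.001}
    theta_f = estimate_feedback_model(docs, background, lam=0.5)
    print(sorted(theta_f, key=theta_f.get, reverse=True)[:5])
```

In a full implementation along the lines described in the abstract, two such models (positive and negative) would be estimated jointly, with the negative model conditioned on the positive and context distributions, and then combined with the original query model; the sketch above covers only the single-model mixture step.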
