Abstract

Pseudo relevance feedback (PRF) via query expansion assumes that the top-ranked documents from a first-pass retrieval are relevant. The most informative terms in these pseudo-relevant documents are then used to update the original query representation in order to improve retrieval performance. Most current PRF approaches estimate the importance of candidate expansion terms from document-level statistics, ignoring the context in which the terms occur. As a result, off-topic terms may be selected, which can degrade retrieval performance. In this paper, we propose a context-based feedback framework built on a Bayesian network, in which multiple kinds of context information can be taken into account. To demonstrate the effectiveness of the framework, we explore two different kinds of context in our experiments. The experimental results show that our proposed algorithm performs significantly better than a strong PRF baseline.
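The basic PRF pipeline described above can be illustrated with a minimal sketch. This is a generic tf-idf style term-weighting scheme over the pseudo-relevant set, not the Bayesian-network model proposed in the paper; the function name `prf_expand` and the toy document collection are illustrative assumptions.

```python
import math
from collections import Counter

def prf_expand(query_terms, top_docs, all_docs, k=5):
    """Select expansion terms from pseudo-relevant documents.

    Candidate terms are weighted by their frequency in the top-ranked
    (pseudo-relevant) documents, discounted by their document frequency
    in the whole collection -- a simple tf-idf style weighting, used
    here only to illustrate document-level PRF term selection.
    """
    n = len(all_docs)
    # Document frequency of each term over the whole collection.
    df = Counter()
    for doc in all_docs:
        df.update(set(doc))
    # Term frequency over the pseudo-relevant (top-ranked) set.
    tf = Counter()
    for doc in top_docs:
        tf.update(doc)
    # Score candidates, excluding terms already in the query.
    scores = {
        t: tf[t] * math.log(n / df[t])
        for t in tf
        if t not in query_terms
    }
    expansion = sorted(scores, key=scores.get, reverse=True)[:k]
    return list(query_terms) + expansion

# Toy collection: the top two documents act as the pseudo-relevant set.
docs = [
    ["apple", "fruit", "vitamin", "health"],
    ["apple", "iphone", "device", "battery"],
    ["fruit", "banana", "vitamin"],
    ["car", "engine", "battery"],
]
expanded = prf_expand(["apple"], docs[:2], docs, k=2)
print(expanded)
```

Note that the second pseudo-relevant document is about a different sense of "apple"; with purely document-level statistics, its terms compete equally for selection, which is exactly the off-topic-term problem that motivates incorporating context.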
