Abstract

Several questions remain unanswered by the existing literature concerning the deployment of query-dependent features within learning to rank. In this work, we investigate three research questions in order to empirically ascertain best practices for learning-to-rank deployments. (i) Previous work in data fusion, pre-dating learning to rank, showed that while different retrieval systems could be effectively combined, combining multiple weighting models within the same system was not as effective. In contrast, existing learning-to-rank datasets (e.g., LETOR) often deploy multiple weighting models as query-dependent features within a single system, raising the question of whether such a combination is needed. (ii) Next, we investigate whether the training of weighting model parameters, traditionally required for effective retrieval, is necessary within a learning-to-rank context. (iii) Finally, we note that existing learning-to-rank datasets include weighting model features calculated separately on different fields (e.g., title, content, or anchor text), even though this per-field application of weighting models has been criticized in the literature. Experiments addressing these three questions are conducted on Web search datasets, using various weighting models as query-dependent features alongside typical query-independent features, combined using three learning-to-rank techniques. In particular, we show and explain why multiple weighting models should be deployed as features. Moreover, we unexpectedly find that training the weighting models' parameters degrades the effectiveness of the learned models. Finally, we show that computing a weighting model separately for each field is less effective than using more theoretically sound field-based weighting models.
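To make the setup concrete, the sketch below (not taken from the paper) illustrates how scores from multiple weighting models (e.g., BM25, PL2, a language model) and a query-independent feature (e.g., PageRank) might be assembled into per-document feature vectors and combined by a learned ranking model. Scikit-learn's GradientBoostingRegressor is used here purely as a pointwise stand-in for the three learning-to-rank techniques evaluated in the paper, and all feature values are toy numbers.

```python
# Minimal sketch: several weighting models as query-dependent features plus one
# query-independent feature, combined by a pointwise learning-to-rank model.
# Feature values below are illustrative toy numbers, not real retrieval scores.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Each row is one (query, document) pair; columns are features:
# [BM25 score, PL2 score, language-model log-likelihood, PageRank]
X_train = np.array([
    [12.3,  9.8, -4.1, 0.71],
    [ 8.1,  7.2, -5.0, 0.22],
    [15.6, 11.4, -3.2, 0.55],
    [ 3.4,  2.9, -7.8, 0.10],
])
# Graded relevance labels for the same (query, document) pairs.
y_train = np.array([2, 1, 3, 0])

# Pointwise learning to rank: regress the relevance labels on the feature vectors.
ranker = GradientBoostingRegressor(n_estimators=50, random_state=0)
ranker.fit(X_train, y_train)

# At retrieval time, score the candidate documents for a query and sort by the
# predicted relevance to obtain the final ranking.
X_candidates = np.array([
    [10.0,  8.5, -4.5, 0.30],
    [14.2, 10.9, -3.6, 0.60],
])
scores = ranker.predict(X_candidates)
ranking = np.argsort(-scores)
print("ranked candidate indices:", ranking)
```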
