Abstract

A brief review of the history of laboratory testing of information retrieval systems focuses on the idea of a general‐purpose test collection of documents, queries and relevance judgements. The TREC programme is introduced in this context, and an overview is given of the methods used in TREC. The Okapi team’s participation in TREC is then discussed. The team has made use of TREC to improve some of the automatic techniques used in Okapi, specifically the term weighting function and the algorithms for term selection for query expansion. The consequence of this process has been a very good showing for Okapi in terms of the TREC evaluation results. Some of the issues around the much more difficult problem of interactive evaluation in TREC are then discussed. Although some interesting interactive experiments have been performed at TREC, the problems of reconciling the requirements of the laboratory context with the concerns of interactive retrieval are still largely unresolved.
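For readers unfamiliar with the term weighting function mentioned above, the following minimal Python sketch illustrates the BM25 weight that emerged from the Okapi team's TREC work. It is offered only as an illustration of the general form of the weight; the parameter names and the default values of the tuning constants k1 and b are assumptions for the example, not values taken from the paper.

```python
import math

def bm25_term_weight(tf, doc_len, avg_doc_len, n_docs, doc_freq, k1=1.2, b=0.75):
    """Okapi BM25 weight for one query term in one document (illustrative sketch).

    tf          -- frequency of the term in the document
    doc_len     -- document length (number of term occurrences)
    avg_doc_len -- average document length in the collection
    n_docs      -- total number of documents in the collection
    doc_freq    -- number of documents containing the term
    k1, b       -- tuning constants; the defaults here are common choices,
                   not values reported in the abstract
    """
    # Inverse document frequency component (Robertson-Sparck Jones form,
    # without relevance information).
    idf = math.log((n_docs - doc_freq + 0.5) / (doc_freq + 0.5))
    # Document-length normalisation of the within-document frequency.
    norm = k1 * ((1 - b) + b * doc_len / avg_doc_len)
    return idf * tf * (k1 + 1) / (tf + norm)
```

A document's score for a query is then the sum of these weights over the query terms it contains; in query expansion, candidate terms drawn from top-ranked documents are ranked by a related selection value before being added to the query.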
