Abstract

This report describes the work done at Océ Research for the Cross-Language Evaluation Forum (CLEF) 2003. This year we participated in seven monolingual tasks (all languages except Russian). We developed a generic probabilistic model that ranks documents without using global statistics from the document collection. The relevance of a document to a given query is calculated from the term frequencies of the query terms in the document and the length of the document. We used the BM25 model, our new probabilistic model, and (for Dutch only) a statistical model to rank documents. Our main goals were to compare the BM25 model with our probabilistic model, and to evaluate the performance of a statistical model that uses 'knowledge' from relevance assessments of previous years. Furthermore, we comment on the standard performance measures used in CLEF.
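To make the contrast concrete: BM25, unlike the collection-independent model described above, relies on a global statistic (document frequency) in its IDF component. The sketch below shows the standard Okapi BM25 scoring formula in Python; the parameter values k1 and b are the commonly used defaults, not necessarily the configuration used in the experiments reported here.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each document for a query with the standard Okapi BM25 formula.

    docs: list of token lists; query_terms: list of tokens.
    k1 and b are the usual free parameters (typical defaults shown).
    """
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # Document frequency per query term: the global collection statistic
    # that BM25 needs but the collection-independent model avoids.
    df = {t: sum(1 for d in docs if t in d) for t in set(query_terms)}

    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1.0)
            # Term-frequency component with document-length normalization.
            norm = tf[t] + k1 * (1.0 - b + b * len(d) / avgdl)
            score += idf * tf[t] * (k1 + 1.0) / norm
        scores.append(score)
    return scores

# Example: rank two toy documents for a two-term query.
docs = [["cross", "language", "evaluation", "forum"],
        ["probabilistic", "ranking", "of", "documents"]]
print(bm25_scores(["probabilistic", "ranking"], docs))
```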
