Abstract

We present the results of an extensive study of the combination of multiple Information Retrieval (IR) systems, and introduce a new fusion model (the Adaptive Combination of Evidence, or ACE, model) that determines which IR systems to listen to based on the content of the document being scored. We compare the results of using the ACE model on a standard data set from the Text REtrieval Conference (TREC) against two baseline models (a simple sum and a weighted sum) across a variety of tasks and settings. These settings are chosen to reflect a cross-product of various dimensions of both experimental inquiry and real-world IR environments, providing a comprehensive view of the role of fusion in IR. We verify that one baseline system does, on average, improve performance. Although the ACE model outperforms the better of the two baselines in only one setting (tying or slightly underperforming in the others), our analysis shows that it exhibits interesting and desirable behaviour that could be exploited given enough training data.
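To make the two baseline fusion schemes concrete, here is a minimal sketch of how a simple sum and a weighted sum combine per-document scores from several IR systems. This is our own illustration under assumed names and inputs, not the paper's implementation; the example scores and weights are hypothetical.

```python
# Illustrative sketch of the two baseline fusion models from the abstract.
# All function names, scores, and weights here are assumptions for
# illustration, not taken from the paper.

def comb_sum(scores):
    """Simple sum: the fused score is the unweighted sum of each
    system's score for the document."""
    return sum(scores)

def weighted_sum(scores, weights):
    """Weighted sum: each system's score is scaled by a per-system
    weight (e.g. tuned on held-out training queries) before summing."""
    return sum(w * s for w, s in zip(weights, scores))

# Scores for one document from three hypothetical IR systems.
scores = [0.8, 0.4, 0.6]
weights = [0.5, 0.2, 0.3]

fused_simple = comb_sum(scores)
fused_weighted = weighted_sum(scores, weights)
```

The ACE model described in the abstract goes a step further than the weighted sum: rather than using fixed per-system weights, it chooses which systems to trust based on the content of the document being scored.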
