Abstract

This poster describes a framework for investigating the effectiveness of query expansion term sets and reports the results of an investigation into the quality of query expansion terms drawn from different sources: pseudo‐relevance feedback, web‐based expansion, interactive elicitation from human searchers, and expansion approaches based on query clarity. The conclusion regarding the experimental framework is that several different evaluation approaches show a substantial level of correlation, and can therefore be used interchangeably according to convenience. With regard to the actual comparison of different sources of expansion terms, the conclusion is that machines are better than humans at performing statistical calculations and at estimating which query terms are more likely to discriminate documents relevant to a given topic. One consequence is a recommendation for research into implicit relevance feedback approaches and novel interaction models based on ostension or mediation, which have shown great potential.
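As a rough illustration of the kind of statistical calculation referred to above, the sketch below ranks candidate expansion terms from pseudo-relevant documents using a Kullback-Leibler-style divergence score, one common way a machine can estimate which terms best discriminate relevant documents. The function name, scoring choice, and toy data are illustrative assumptions, not the poster's actual method.

```python
import math
from collections import Counter

def score_expansion_terms(feedback_docs, collection_docs, top_n=5):
    """Rank candidate expansion terms from pseudo-relevant documents.

    feedback_docs: list of tokenized top-ranked documents (pseudo-relevant).
    collection_docs: list of tokenized documents approximating the collection.
    Returns the top_n (term, score) pairs by a KL-divergence-style score.
    """
    fb_counts = Counter(t for doc in feedback_docs for t in doc)
    coll_counts = Counter(t for doc in collection_docs for t in doc)
    fb_total = sum(fb_counts.values())
    coll_total = sum(coll_counts.values())

    scored = []
    for term, count in fb_counts.items():
        p_fb = count / fb_total                  # P(term | feedback docs)
        p_coll = coll_counts[term] / coll_total  # P(term | collection)
        # Terms much more frequent in the feedback set than in the
        # collection as a whole receive high scores.
        scored.append((term, p_fb * math.log(p_fb / p_coll)))
    return sorted(scored, key=lambda ts: ts[1], reverse=True)[:top_n]

if __name__ == "__main__":
    # Toy data (hypothetical): two pseudo-relevant documents plus two
    # unrelated ones standing in for the rest of the collection.
    feedback = [["solar", "panel", "energy", "cost"],
                ["solar", "energy", "grid", "panel"]]
    collection = feedback + [["stock", "market", "cost"],
                             ["weather", "grid", "report"]]
    print(score_expansion_terms(feedback, collection))
```

In this sketch, terms such as "solar" and "panel" outrank terms like "cost" that are common across the whole collection, mirroring the discriminative-term estimation the poster attributes to machine-generated expansion.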
