Abstract

The paper introduces the research area of interactive information retrieval (IIR) from a historical point of view. The focus is on evaluation, because much IR research deals with evaluation methodology, reflecting the core research interest in IR performance, system interaction, and satisfaction with retrieved information. In order to position IIR evaluation, the Cranfield model and the series of tests that led to it are outlined. Three iconic user-oriented studies and projects that have all contributed to how IIR is perceived and understood today are presented: the MEDLARS test, the Book House fiction retrieval system, and the OKAPI project. On this basis, the call for alternative IIR evaluation approaches, motivated by the three revolutions (the cognitive, the relevance, and the interactive revolutions) put forward by Robertson & Hancock-Beaulieu (1992), is presented. As a response to this call, the ‘IIR evaluation model’ by Borlund (e.g., 2003a) is introduced. The objective of the IIR evaluation model is to facilitate IIR evaluation as close as possible to actual information searching and IR processes, though still in a relatively controlled evaluation environment, in which the test instrument of a simulated work task situation plays a central part.

Highlights

  • Interactive information retrieval (IIR), also known as human-computer information retrieval (HCIR) (Marchionini, 2006), concerns the study and evaluation of users’ interaction with IR systems and their satisfaction with retrieved information

© Pia Borlund, 2013. All JISTaP content is Open Access, meaning it is accessible online to everyone, without fee and authors’ permission.

  • The IIR evaluation model meets the requirements of the three revolutions put forward by Robertson and Hancock-Beaulieu (1992) in that the model builds on three basic components: (1) the involvement of potential users as test participants; (2) the application of dynamic and individual information needs; and (3) the employment of multidimensional and dynamic relevance judgements

  • The co-existing user-oriented IR research is exemplified by the MEDLARS test, and the development of end-user IR systems (OPACs) emphasised the need for alternative approaches to IR evaluation

Summary

INTRODUCTION

Interactive information retrieval (IIR), also known as human-computer information retrieval (HCIR) (Marchionini, 2006), concerns the study and evaluation of users’ interaction with IR systems and their satisfaction with retrieved information. ‘Interactive’ implies the involvement of human users, in contrast to the notion of information retrieval (IR) only, which points to the system-oriented approach to IR signified by the Cranfield model (Cleverdon & Keen, 1966; Cleverdon, Mills & Keen, 1966). This approach is referred to as TREC-style evaluation (e.g., Belkin, 2008). Robertson and Hancock-Beaulieu (1992) explain the change and shift in focus that led to the establishment of the research area of IIR with their presentation of the three revolutions: the cognitive revolution, the relevance revolution, and the interactive revolution.

THE CRANFIELD MODEL
USER-ORIENTED IR
THE THREE REVOLUTIONS
IIR EVALUATION AS OF TODAY
CONCLUDING REMARKS AND FURTHER READINGS