Abstract

The evaluation and tuning of information retrieval (IR) systems based on the Cranfield paradigm requires purpose-built test collections, which include sets of human-contributed relevance labels indicating the relevance of search results to a set of user queries. Traditional methods of collecting relevance labels rely on a fixed group of hired expert judges, who are trained to interpret user queries as accurately as possible and to label documents accordingly. Human judges and the relevance labels they produce thus provide a critical link within the Cranfield-style IR evaluation framework, where disagreement among judges and the impact of variable judgment sets on the final outcome of an evaluation are well-studied issues. There is also reported evidence that experiment outcomes can be affected by changes to the judging guidelines or changes in the judge population.

Recently, the growing volume and diversity of the topics and documents to be judged have driven the increased adoption of crowdsourcing methods in IR evaluation, offering a viable alternative that scales at modest cost. In this model, relevance judgments are distributed online over a large population of humans, a crowd, facilitated, for example, by a crowdsourcing platform such as Amazon's Mechanical Turk or Clickworker. Such platforms allow millions of anonymous crowd workers to be hired temporarily for micro-payments to complete so-called human intelligence tasks (HITs), such as labeling images or documents. Studies have shown that workers come from diverse backgrounds, work in a variety of different environments, and have different motivations. For example, workers may turn to crowdsourcing as a way to make a living, to serve an altruistic or social purpose, or simply to fill their time. They may become loyal crowd workers on one or more platforms, or they may leave after their first couple of encounters. Clearly, such a model is in stark contrast to the highly controlled methods that characterize the work of trained judges. For example, in a micro-task-based crowdsourcing setup, worker training is usually minimal or non-existent. Furthermore, it is widely reported that labels provided by crowd workers can vary in quality, leading to noisy labels. Crowdsourcing can also suffer from undesirable worker behaviour and practices, e.g., dishonest behaviour or lack of expertise, that result in low-quality contributions. While a range of quality assurance and control techniques have been developed to reduce noise during or after task completion, little is known about the workers themselves and the possible relationships between workers' characteristics, behaviour and the quality of their work.

In this talk, I will review the findings of recent research that examines and compares trained judges and crowd workers hired to complete relevance assessment tasks of varying difficulty. The investigations cover a range of aspects, from HIT design and judging instructions to worker demographics and characteristics, and how these may impact work quality. The main focus of the talk will be on experiments aimed at uncovering characteristics of the crowd by monitoring their behaviour during different relevance assessment tasks and comparing it to professional judges' behaviour on the same tasks. Throughout the talk, I will highlight challenges of quality assurance and control in crowdsourcing and propose a possible direction for solving the issue without relying on gold-standard data sets, which are expensive to create and have limited application.
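To make the quality-control ideas touched on above concrete, the following minimal Python sketch shows one common (and deliberately simple) approach: aggregating redundant crowd labels by majority vote and measuring chance-corrected agreement with a trained judge via Cohen's kappa. The document IDs and labels are purely hypothetical illustrations, not data from the studies discussed in the talk, and the talk itself does not prescribe this particular technique.

```python
from collections import Counter

def majority_vote(labels_per_doc):
    """Aggregate multiple crowd labels per document by simple majority vote."""
    return {doc: Counter(labels).most_common(1)[0][0]
            for doc, labels in labels_per_doc.items()}

def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two equal-length label sequences."""
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    # Observed agreement: fraction of documents where both sources agree.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if the two sources labeled independently at random.
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# Hypothetical example: three crowd labels per document vs. one trained judge.
crowd = {"d1": [1, 1, 0], "d2": [0, 0, 0], "d3": [1, 0, 1], "d4": [0, 1, 1]}
trained = {"d1": 1, "d2": 0, "d3": 1, "d4": 0}

aggregated = majority_vote(crowd)
docs = sorted(trained)
kappa = cohen_kappa([aggregated[d] for d in docs], [trained[d] for d in docs])
print("Majority-vote labels:", aggregated)
print(f"Cohen's kappa vs. trained judge: {kappa:.2f}")
```

Majority voting assumes redundant labels per item and honest, reasonably competent workers; the talk's point is precisely that such assumptions are fragile, motivating quality-control approaches that do not depend on expensive gold-standard data.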
