Abstract

Crowdsourcing is a common means of collecting image segmentation training data for use in a variety of computer vision applications. However, designing accurate crowd-powered image segmentation systems is challenging because defining object boundaries in an image requires significant fine motor skills and hand-eye coordination, which makes these tasks error-prone. Typically, special segmentation tools are created and then answers from multiple workers are aggregated to generate more accurate results. However, individual tool designs can bias how and where people make mistakes, resulting in shared errors that remain even after aggregation. In this paper, we introduce a novel crowdsourcing workflow that leverages multiple tools for the same task to increase output accuracy by reducing systematic error biases introduced by the tools themselves. When a task can no longer be broken down into more-tractable subtasks (the conventional approach taken by microtask crowdsourcing), our multi-tool approach can be used to further improve accuracy by assigning different tools to different workers. We present a series of studies that evaluate our multi-tool approach and show that it can significantly improve aggregate accuracy in semantic image segmentation.
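The aggregation step described above can be illustrated with a minimal sketch. The abstract does not specify which aggregation method the authors use; per-pixel majority voting over binary masks is assumed here purely as a common baseline, with one mask per worker (each worker possibly having used a different tool). The function name `aggregate_masks` and the toy 2x3 masks are hypothetical.

```python
# Hypothetical sketch: per-pixel majority-vote aggregation of binary
# segmentation masks, one mask per worker (workers may have used
# different tools). Majority voting is assumed as a baseline here;
# the paper's actual aggregation method is not given in the abstract.

def aggregate_masks(masks):
    """Combine binary masks (lists of lists of 0/1) by per-pixel majority vote."""
    n = len(masks)
    rows, cols = len(masks[0]), len(masks[0][0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            votes = sum(m[r][c] for m in masks)
            # A pixel is foreground if a strict majority of workers marked it.
            out[r][c] = 1 if votes * 2 > n else 0
    return out

# Three workers annotate a 2x3 image; disagreements resolve by majority.
worker_masks = [
    [[1, 1, 0], [0, 1, 0]],
    [[1, 0, 0], [0, 1, 1]],
    [[1, 1, 0], [0, 0, 1]],
]
print(aggregate_masks(worker_masks))  # → [[1, 1, 0], [0, 1, 1]]
```

The intuition from the paper carries over directly: if two tools bias errors in different directions, their disagreements tend to fall on different pixels, so a per-pixel vote cancels more error than aggregating workers who all used the same tool.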
