Abstract

We propose a method for obtaining and ranking paraphrased questions from crowds for use as part of the instructions in microtask-based crowdsourcing. With our method, we obtain questions that differ in expression yet share the same semantics with respect to the crowdsourcing task. This is achieved by generating tasks that give hints and elicit instructions from workers. We conducted experiments using a real set of gold-standard questions submitted to a commercial crowdsourcing platform and compared the results with those of a direct-rewrite method.
