Abstract
Clear instructions are a necessity for obtaining accurate results from crowd workers. Even small ambiguities can force workers to choose an interpretation arbitrarily, resulting in errors and inconsistency. Crisp instructions require significant time to design, test, and iterate. Recent approaches have engaged workers to detect and correct ambiguities. However, this process increases the time and money required to obtain accurate, consistent results. We present TaskLint, a system to automatically detect problems with task instructions. Leveraging a diverse set of existing NLP tools, TaskLint identifies words and sentences that might foretell worker confusion. This is analogous to static analysis tools for code ("linters"), which flag features in code that may indicate the presence of bugs. Our evaluation of TaskLint using task instructions created by novices confirms the potential for static tools to improve task clarity and the accuracy of results, while also highlighting several challenges.
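The abstract does not describe TaskLint's internals, but the linter analogy can be illustrated with a minimal sketch: a checker that scans instruction text and emits warnings for patterns that might confuse workers. The rule names, the vague-term list, and the sentence-length threshold below are all hypothetical, not TaskLint's actual checks (which use a diverse set of NLP tools).

```python
import re

# Hypothetical rules illustrating the linter analogy only;
# TaskLint's real checks are NLP-based and differ from these.
VAGUE_TERMS = {"appropriate", "relevant", "some", "several", "etc", "good"}
MAX_SENTENCE_WORDS = 30  # assumed threshold for an overly long sentence

def lint_instructions(text):
    """Return a list of (rule, detail) warnings for task instruction text."""
    warnings = []
    # Naive sentence split on terminal punctuation.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    for sent in sentences:
        words = sent.split()
        if len(words) > MAX_SENTENCE_WORDS:
            warnings.append(("long-sentence", sent[:40] + "..."))
        for w in words:
            if w.lower().strip(",;:") in VAGUE_TERMS:
                warnings.append(("vague-term", w))
    return warnings

# A vague instruction triggers two warnings:
print(lint_instructions("Select the appropriate label for some images."))
# → [('vague-term', 'appropriate'), ('vague-term', 'some')]
```

Like a code linter, such a tool offers only heuristic signals: a flagged term is not proof of ambiguity, and unflagged text may still confuse workers.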
Published in: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing