Abstract

Can crowd workers be trusted to judge whether news-like articles circulating on the Internet are misleading, or do partisanship and inexperience get in the way? And can the task be structured in a way that reduces partisanship? We assembled pools of both liberal and conservative crowd raters and tested three ways of asking them to make judgments about 374 articles. In a no research condition, they were simply asked to view the article and render a judgment. In an individual research condition, they were also asked to search for corroborating evidence and provide a link to the best evidence they found. In a collective research condition, they were not asked to search, but instead to review links collected from workers in the individual research condition. Both research conditions reduced partisan disagreement in judgments. The individual research condition was most effective at producing alignment with journalists' assessments. In this condition, the judgments of a panel of sixteen or more crowd workers were better than those of a panel of three expert journalists, as measured by alignment with a held-out journalist's ratings.
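To make the panel-versus-expert comparison concrete, the following is a minimal sketch, not taken from the paper, of one way such an evaluation could be computed. It assumes article-by-rater matrices of numeric ratings, Pearson correlation as the "alignment" measure, and resampling of random panels; all variable names, the rating scale, and the simulated data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def panel_alignment(ratings, panel_size, held_out, n_resamples=500):
    """Mean correlation between a random panel's average rating and a held-out rater.

    ratings:  (n_articles, n_raters) array of numeric judgments
    held_out: (n_articles,) ratings from the held-out journalist
    """
    n_articles, n_raters = ratings.shape
    corrs = []
    for _ in range(n_resamples):
        # Draw a random panel of the requested size and average its ratings per article.
        panel = rng.choice(n_raters, size=panel_size, replace=False)
        panel_mean = ratings[:, panel].mean(axis=1)
        corrs.append(np.corrcoef(panel_mean, held_out)[0, 1])
    return float(np.mean(corrs))

# Simulated stand-in data: 374 articles (as in the study), hypothetical 1-7 scale.
crowd = rng.integers(1, 8, size=(374, 30)).astype(float)
journalists = rng.integers(1, 8, size=(374, 3)).astype(float)
held_out = rng.integers(1, 8, size=374).astype(float)

crowd_score = panel_alignment(crowd, panel_size=16, held_out=held_out)
expert_score = panel_alignment(journalists, panel_size=3, held_out=held_out)
print(f"crowd panel (k=16): {crowd_score:.3f}  vs  expert panel (k=3): {expert_score:.3f}")
```

With real data in place of the simulated arrays, the study's claim would correspond to the crowd-panel score matching or exceeding the expert-panel score once the panel reaches roughly sixteen workers.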
