Abstract

Academic scholars have leveraged crowd work platforms such as MTurk to conduct research and collect data, but data quality on these platforms has deteriorated into an alarming crisis in recent years. Although prior studies have examined data quality and validity issues in crowd work through surveys and experiments, they have largely neglected the ethical concerns of scholars and, in particular, of IRBs, as well as how ethical guidelines for crowd work-based research address these concerns. To fill these gaps, we interviewed 17 scholars from six disciplines and 15 IRB directors and analysts in the U.S. and analyzed 28 research guidance documents. We identified common themes across our interviewees and documents, but also uncovered distinctive and even opposing views regarding approval rates, rejection, and internal/external research validity. Based on these findings, we discuss a potential Tragedy of the Commons in the deterioration of data quality and the disciplinary differences in how validity is understood in crowd work-based research. We further trace the origins of the data quality and validity issues in crowd work-based research. We advocate that IRBs' ethical concerns about crowd work-based research be heard, respected, and reflected in ethical guidance for such research. Finally, we present our research implications, limitations, and future work.
