Abstract

Organizations today can use both crowds and experts to produce knowledge. While prior work compares the accuracy of crowd-produced and expert-produced knowledge, we compare bias in these two models in the context of contested knowledge, which involves subjective, unverifiable, or controversial information. Using data from Encyclopedia Britannica, authored by experts, and Wikipedia, an encyclopedia produced by an online community, we compare the slant and bias of pairs of articles on identical topics of U.S. politics. Our slant measure is less (more) than zero when an article leans toward Democratic (Republican) viewpoints, while bias is the absolute value of the slant. We find that Wikipedia articles are more slanted toward Democratic views than are Britannica articles, as well as more biased. The difference in bias between a pair of articles decreases with more revisions. The bias on a per word basis hardly differs between the sources because Wikipedia articles tend to be longer than Britannica articles. These results highlight the pros and cons of each knowledge production model, help identify the scope of the empirical generalization of prior studies comparing the information quality of the two production models, and offer implications for organizations managing crowd-based knowledge production.

Highlights

  • Recent technological advances have made it significantly easier for organizations to harness the collective intelligence of online communities (e.g., Afuah and Tucci 2012; Gu et al. 2007; Kane and Fichman 2009; Zhang and Zhu 2011; Gorbatai 2014; Gallus forthcoming)

  • We find that Wikipedia articles are more slanted toward Democratic views than are Britannica articles, as well as more biased

  • We examine the differences in slant and bias via a regression framework that controls for shared unobservable factors


Introduction

Recent technological advances have made it significantly easier for organizations to harness the collective intelligence of online communities (e.g., Afuah and Tucci 2012; Gu et al. 2007; Kane and Fichman 2009; Zhang and Zhu 2011; Gorbatai 2014; Gallus forthcoming). Yet little is known about how well collective decision making performs along other dimensions of information quality, such as objectivity, consistency, relevance, and timeliness. We address this gap by examining two broad and related questions in the context of contested knowledge, defined loosely as a debate between viewpoints in which there is no single “right answer”: (1) Is knowledge produced by crowd-based organizations more or less biased than the knowledge produced by experts? (2) Does that bias decline as the number of contributions grows? Statistical identification requires controlling for unobserved factors that shape the slant and bias of each topic, especially when those unobserved factors are shared by the two sources. We address these challenges by developing a matched sample of paired articles appearing at the same time in both sources and covering identical or nearly identical topics in U.S. politics. Matching controls for otherwise unobserved factors shared by the two articles on the same topic.
