Abstract

Disputes over transactions on two-sided platforms are common and are usually arbitrated through platforms’ customer service departments or third-party service providers. In this paper, we study crowd-judging, a novel crowdsourcing mechanism whereby users (buyers and sellers) volunteer as jurors to decide disputes arising on the platform. To understand this phenomenon, we use a rich dataset from the dispute resolution center at Taobao, a leading Chinese e-commerce platform. While this mechanism enhances resolution speed, there are concerns that crowd jurors may exhibit a form of in-group bias (buyers favoring the buyer and sellers favoring the seller in a dispute) and that such bias may systematically sway case outcomes, given that the majority of users on such platforms are buyers. We find evidence consistent with this concern: on average, a seller juror is approximately 10% more likely to vote for a seller. The bias is 70% higher among cases that are less clear-cut and decided by a thin margin. Conversely, the bias diminishes dramatically as users gain crowd-judging experience: in-group bias among jurors with the sample-median level of experience is 95% lower than among completely inexperienced jurors. This suggests that learning-by-doing may mitigate biases associated with socioeconomic identification. Partly because of this learning effect, our simulations show that in-group bias influences the outcomes of no more than 2% of cases under the current randomized case-allocation process, and that its impact can be further reduced under dynamic policies that better allocate experienced jurors. These findings offer promising evidence that crowdsourcing can be an effective dispute resolution mechanism for governing online platforms, and that properly designed operating policies can further improve its efficacy.
