Abstract

Platform governance helps align the activities of participating actors to deliver value within platforms. These platforms can operate in environments where governance is intentionally or conventionally weak in favor of open access, frictionless transactions, or free speech. Such low- or no-governance environments leave room for illegitimate actors to penetrate platforms with illegitimate content or transactions. We propose that an external observer can employ transparency mechanisms to establish “soft” governance that allows participants in a low-governance environment to distinguish between sources of legitimate and illegitimate content. We examine how this might work in the context of disinformation Internet domains by training a machine learning classifier to distinguish low-legitimacy from high-legitimacy content providers based on website registration data. The results suggest that an independent observer can employ such a classifier to provide an early, although imperfect, signal of whether a website is intended to host illegitimate content. We show that the independent observer can be effective at serving multiple platforms by providing intermediate prediction results that platforms can align with their unique governance priorities. We expand our analysis with a signaling game model to ascertain whether such a soft governance structure can be resilient to adversarial responses. Funding: Funding for this research was provided by UCL School of Management and Emory University. Supplemental Material: The online appendix is available at https://doi.org/10.1287/stsc.2023.0006.
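The abstract's core mechanism can be illustrated with a minimal sketch: a classifier scores a domain's registration data, and the observer publishes the intermediate probability so each platform can apply its own decision threshold. The feature names, weights, and platform thresholds below are purely illustrative assumptions, not values from the paper; a hand-set logistic model stands in for the trained classifier.

```python
import math

# Hypothetical registration features and weights (illustrative only,
# not from the paper): a toy stand-in for the trained classifier.
WEIGHTS = {
    "log_domain_age_days": -1.2,  # older domains look more legitimate
    "uses_privacy_proxy": 0.9,    # hidden registrant raises suspicion
    "bulk_registration": 1.5,     # registered alongside many other domains
}
BIAS = -0.5

def illegitimacy_score(features):
    """Return P(illegitimate) from a toy logistic model."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# The observer publishes the intermediate score; each platform then
# applies a threshold reflecting its own governance priorities
# (threshold values are assumptions for illustration).
PLATFORM_THRESHOLDS = {"ad_network": 0.4, "social_feed": 0.6, "registrar": 0.8}

def decisions(features):
    p = illegitimacy_score(features)
    return {name: p >= t for name, t in PLATFORM_THRESHOLDS.items()}

# A suspicious newly registered domain: 3 days old, privacy-proxied,
# part of a bulk registration batch.
suspect = {
    "log_domain_age_days": math.log(3 + 1),
    "uses_privacy_proxy": 1.0,
    "bulk_registration": 1.0,
}
print(illegitimacy_score(suspect))
print(decisions(suspect))
```

The design point is the split of labor: the observer emits one calibrated probability, and the ad network (low tolerance for risk) flags the domain while the registrar (high evidentiary bar) does not, so one signal serves multiple governance regimes.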
