Abstract

Online intermediaries have long been regulated, locked in heated battles over intermediary liability for copyright or privacy reasons (Tusikov, 2016; Gorwa, 2019). A notable recent trend, however, is the rapidly growing use of policy to govern user-generated content associated with a host of other perceived social or individual harms, such as disinformation, hate speech, and terrorist propaganda (Kaye, 2019; York, 2019; Suzor, 2019). Even as academic and policy attention to the global ‘techlash’ increases, and leading voices outline the various ways in which online expression is currently under threat, our understanding of the overall policy landscape remains ad hoc and incomplete. The goal of this paper is thus to present some initial observations on the state of harmful-content regulation around the world, drawing upon a new original dataset that seeks to capture the global universe of regulatory initiatives targeting harmful user-generated content online. The first part of the paper presents descriptive results, showing the evolution (and notable increase) of policy development over the past two decades. The second half of the paper provides insight into which specific issue areas have attracted the most formal and informal regulatory arrangements, and assesses the scope of these regulations (which kinds of actors are treated as a ‘platform,’ and how that term is defined), their key policy mechanisms (takedown regimes, transparency rules, technical standards), and their sanctioning procedures (fines, criminal liability).
