Abstract

Modern-day online services are plagued by various kinds of harmful content, from hate speech to terrorist propaganda to depictions of the sexual abuse of children, to name just a few. In pressuring online service providers to better police harmful content on their services, regulators tend to focus on trust and safety techniques, such as automated systems for scanning or filtering content, that depend on the provider’s ability to access the contents of users’ files and communications at will. We call these techniques content-dependent. This focus on content analysis overlooks the prevalence and utility of what this article calls content-oblivious techniques: those that do not rely on guaranteed at-will access to content, such as metadata-based tools and user reports flagging abuse that the provider did not (or could not) detect on its own. This article presents the results of a survey of the trust and safety techniques employed by a group of online service providers, most of them communications services or services driven primarily by user-generated content. Our data include information from Facebook, Wikimedia, a global software company, and others that collectively serve billions of users. The survey finds that abuse-reporting features are used by more providers than other techniques such as metadata-based abuse detection or automated content scanning, but that providers’ abuse-reporting tools do not consistently cover the various types of abuse users may encounter on their services, a gap we recommend they rectify. Finally, despite strong consensus among participating providers that automated content scanning is the most useful means of detecting child sexual abuse imagery, they do not consider it nearly so useful for other kinds of abuse. These results indicate that content-dependent techniques are not a silver bullet against abuse. They also indicate that the marginal impact on providers’ anti-abuse efforts of end-to-end encryption, which, controversially, stymies providers’ ability to access user content at will, can be expected to vary by abuse type. These findings have implications for policy debates over the regulation of online service providers’ anti-abuse obligations and their use of end-to-end encryption.