Abstract

Commercial content moderation removes harassment, abuse, hate speech, and other material deemed harmful or offensive from user-generated content platforms. A platform’s content policy and related government regulations are forms of explicit language policy: they define classifications of harmful language and seek to change users’ language practices through enforcement. The de facto language policy, however, is the actual practice of language moderation by algorithms and human moderators. Together, these enforce which words (and thereby, which content) can be shared, revealing normative judgments about what counts as hateful, offensive, or free speech and shaping how users adapt and create new language practices in response. This paper introduces the process and challenges of commercial content moderation, as well as Canada’s proposed Bill C-36 and its complementary regulatory framework, and briefly discusses the implications for language practices.
