Abstract

After a period of self-regulation, countries around the world began to implement regulations requiring the removal of terrorist content from tech platforms. However, much of this regulation has been criticised for a variety of reasons, most prominently over concerns that it infringes free speech and creates unfair burdens for smaller platforms. In addition, the regulation is heavily centred on content moderation yet fails to consider or address the psychosocial risks that moderation poses to human content moderators. This paper argues that where regulation has been heavily criticised yet continues to inspire similar regulation, a new regulatory approach is required. The aim of this paper is to undertake an introductory examination of the social regulation approach used in three other industries (environmental protection, consumer protection and occupational health and safety) in order to identify new regulatory avenues that could be applied to the development of regulation that seeks to counter terrorist content on tech platforms while also attending to the safety of content moderators.
