Abstract

After a period of self-regulation, countries around the world began to implement regulations requiring the removal of terrorist content from tech platforms. However, much of this regulation has been criticised for a variety of reasons, most prominently for infringing free speech and for creating unfair burdens for smaller platforms. In addition, the regulation is heavily centred on content moderation yet fails to consider or address the psychosocial risks that moderation poses to human content moderators. This paper argues that where regulation has been heavily criticised yet continues to inspire similar regulation elsewhere, a new regulatory approach is required. The aim of this paper is to undertake an introductory examination of the social regulation approach used in three other industries (environmental protection, consumer protection, and occupational health and safety) in order to identify regulatory avenues that could inform the development of new regulation that counters terrorist content on tech platforms while also safeguarding the safety of content moderators.
