Abstract

The full-scale invasion of Ukraine and the crimes against humanity accompanying it have been fuelled by the mass spread of fakes and incitements to hatred, forcing the largest online platforms to review and strengthen their content moderation policies. However, the approaches taken by platforms have not been perfect, and some of them could even exacerbate the situation. All in all, this is further evidence of the need to develop mechanisms able to cope with the challenges to online speech and safety caused by dramatic social events. The basic approaches to content moderation developed so far are self- (co-) and state regulation, on the one hand, and contract and human rights law, on the other. However, none of these taken separately can ensure the needed level of protection of human rights online. This article therefore proposes ways to combine and improve these approaches. On the one hand, there is a need to develop private law mechanisms that ensure the protection of human rights through judgements in private disputes. On the other hand, state regulation should be improved by international instruments that provide a uniform approach to regulation at a global scale.
