ABSTRACT

In contemporary society, the rise of social media has dramatically transformed how information is shared, bypassing traditional editorial and governmental controls. This shift has enabled rapid global information sharing but has also raised concerns about the influence of social media platforms, even in democratic societies. Legislative responses, such as Germany's Network Enforcement Act of 2017, mandated the swift removal of illegal content and influenced more than twenty other nations to enact similar laws. These regulations often target hate speech but risk suppressing political opposition, particularly in authoritarian regimes. The European Union's Digital Services Act, which came into force in 2024, imposes stringent removal obligations on platforms, pushing companies towards automated content moderation to meet those obligations. A 2024 report found that a substantial majority (87.5% to 99.7%) of deleted comments on Facebook and YouTube in France, Germany, and Sweden were legally permissible, suggesting that platforms, pages, or channels may be over-removing content to avoid regulatory penalties. Against the backdrop of this recent data, the paper examines the possible impact of stringent platform liability legislation on free speech in the context of hate speech, focusing on the delicate balance between regulation and freedom of expression.