The adoption of the long-awaited Digital Services Act (DSA) is undoubtedly one of the more significant successes of the implementation of the ambitious EU Digital Strategy. Beyond the prominent announcements that the new law will help transform the coming years into Europe’s digital decade, the update of the liability framework for digital service providers also provided an opportunity for a broader reflection on the principles of governance in cyberspace. Indeed, the notice and takedown model, in place for more than two decades, had become progressively eroded, leading service providers to implement proactive content filtering mechanisms ever more widely in an effort to reduce their business risk. The aim of this article is to explore those changes introduced by the DSA which affect the regulatory environment for the preventive blocking of unlawful online content. In this respect, relevant conclusions from the jurisprudence of the ECtHR and the CJEU will also be presented, together with reflections on the possibility of, and need for, a more coherent EU strategy on online content filtering. The analysis will focus mainly on filtering mechanisms concerning what is referred to as clearly illegal content, as combating the dissemination of this type of speech, often qualified under the general heading of “hate speech”, is one of the priority tasks for public authorities in building trust in digital services in the EU.