ABSTRACT The dissemination of hate speech online necessitates forceful content moderation to protect individuals and democratic values, without unduly infringing on freedom of expression and the right to access information. In Europe, recent regulatory measures such as the Digital Services Act (DSA) address the amplification of harmful content on social media and place responsibilities on Very Large Online Platforms to counter societal risks such as hate speech. The DSA requires platforms to balance their commercial interests with protecting users' rights and safety, which calls for nuanced moderation strategies. However, both automated and human moderation face challenges in accurately identifying and countering hate speech. The European Convention on Human Rights (ECHR), as interpreted by the European Court of Human Rights (ECtHR), provides essential guidance on how to counter hate speech while also protecting freedom of expression. This article aims to contribute to the understanding of the emerging landscape of platform regulation and to show how the DSA's demands regarding the moderation of hate speech can be better understood and operationalized in light of ECtHR case law.