ABSTRACT Democracies are under attack from various sides. In recent years, AI-powered techniques such as profiling, targeting, election manipulation, and massive disinformation campaigns via social bots and troll farms have challenged the very foundations of democratic systems. Against this background, demands for regulating AI have grown louder. In this paper, we focus on the European Union (EU) as the actor that has gone furthest in regulating AI. We therefore ask: What kinds of instruments does the EU envision in its binding and non-binding documents to prevent AI from harming democracy? And what critique can be formulated regarding these instruments? To address these questions, the article makes two contributions. First, building on a systematic understanding of deliberative democracy, we introduce a distinction between two types of harm that can arise from the widespread use of AI: rights-based harm and systemic harm. Second, by analysing a number of EU documents, including the GDPR, the AI Act, the TTAP, and the DSA, we argue that the EU envisions four primary instruments for safeguarding democracy from the harmful use of AI: prohibition, transparency, risk management, and digital education. While these instruments provide a relatively high level of protection against rights-based AI harm, they leave ample space for these technologies to produce systemic harm to democracy.