Abstract

Electoral cybersecurity harm pertains to the use, or misuse, of information to sway the vote, sow confusion among the electorate, or undermine the electorate's confidence in the integrity of an election. Digital replicas, meaning content generated or manipulated by digital technologies (such as deepfakes, virtual reality, or augmented reality) to produce convincing but simulated audio-visual depictions of human likenesses, are emerging as a powerful set of disinformation tools that can inflict cybersecurity harms on elections. Although the dissemination of disinformation has long been an electoral concern, that concern has recently been exacerbated by the technologically facilitated ability to produce highly convincing audio-visual disinformation online, including simulated images of politicians, and to circulate it through online platforms to maximize its viral effects. Beyond the harms that digital replicas can inflict on politicians in their personal capacity (for example, reputational harm or breach of privacy), disinformation tactics employing digital replicas raise wider public harms, because digital replicas can deceive voters and pose a cybersecurity threat to elections and democratic processes. These harms are exacerbated in the election context, where democratic processes are at stake, and the risks may intensify as election day approaches. Yet digital replicas pose a difficult problem for regulators: how to be comprehensive enough to mitigate the harmful effects of disinformation on voters' access to information, while avoiding undue censorship or over-regulation that could stifle political communication and voters' participation in democratic processes. This paper examines the types of digital replicas that can distort online political discourse and explains their cybersecurity implications. We canvass salient legal measures in election laws, as well as laws pertaining to expression, including intellectual property, privacy, and defamation, that could apply to regulate election-related digital replicas. We then turn to the self-regulatory content moderation practices of digital platforms, which range from policies that favor strong protections for political speech to policies that favor removing content that could constitute electoral disinformation. We explain why these hard-law and self-regulatory systems insufficiently redress the election-based harms arising from digital replicas, and we propose a digital right of reply to fill this gap. We provide a brief legal history of the international right of reply, which was formulated to address wartime propaganda and which, we argue, provides a salient analogy for electoral disinformation. In our recommendations, we set forth a moderate proposal for a digital right of reply to regulate digital replicas during the election period, including details for practical implementation and enforcement. While the paper focuses on Canada, we conclude with general lessons that may apply to other jurisdictions facing similar problems arising from digital replicas in the elections context, drawing on general principles for the regulation of technology and disinformation.
