ABSTRACT This article focuses on the legal contextualization of fake audiovisual sexual content generated by replacing or shuffling facial or voice data with artificial intelligence (AI) technology without prior consent, commonly known as 'non-consensual sexualized deepfakes'. The legal analysis is limited to an examination of the impact of non-consensual sexualized deepfakes on gender equality and the context of gender-based violence. For this reason, the analysis is built upon the European AI Act – the legal framework applicable when AI software is used – the European Directive on combating violence against women and domestic violence (GBV Directive) – which criminalizes the generation of sexualized deepfakes – and the Digital Services Act (DSA) – the legislative framework regulating the online platforms on which sexualized deepfakes are mostly distributed. To better understand how sexualized deepfakes are created, we examine their generation and detection techniques in a short overview from a technical lens. The methodology applies a feminist critical approach – drawing on the 'King Kong' theory and abolitionist feminism – to the doctrinal analysis of the relevant European legislation. After capturing the harms of deepfake technology, legal solutions are proposed through this cross-disciplinary perspective.