Abstract
This article focuses on the legal contextualization of the generation of fake audiovisual sexual content created by replacing or manipulating facial or voice data using artificial intelligence (AI) technology without prior consent, commonly known as ‘non-consensual sexualized deepfakes’. The legal analysis is limited to an examination of the impact of non-consensual sexualized deepfakes on gender equality and the context of gender-based violence. For this reason, the analysis is built upon the European AI Act – the applicable legal framework when AI software is used – the European Directive on combating violence against women and domestic violence (GBV Directive) – which criminalizes the generation of sexualized deepfakes – and the Digital Services Act (DSA) – the legislative framework regulating online platforms, where sexualized deepfakes are mostly distributed. To better understand how sexualized deepfakes are produced, we provide a short technical overview of their generation and detection techniques. The methodology applies a feminist critical approach – based on the ‘King Kong’ theory and abolitionist feminism – to the doctrinal analysis of the relevant European legislation. After capturing the harms of deepfake technology, legal solutions are proposed through this cross-disciplinary perspective.