Abstract

Advancements in artificial intelligence (AI) have drastically simplified the creation of synthetic media. While concerns often focus on potential misinformation harms, ‘non-consensual intimate deepfakes’ (NCID) – a form of image-based sexual abuse – pose a current, severe, and growing threat, disproportionately impacting women and girls. This article examines the measures introduced by the recently adopted Online Safety Act 2023 (OSA) and argues that the new criminal offences and the ‘systems and processes’ approach the law adopts are insufficient to counter NCID in the UK. This is because the OSA relies on platform policies that often lack consistency regarding synthetic media, and on platforms’ content removal mechanisms, which offer limited redress to victim-survivors after the harm has already occurred. The article argues that stronger prevention mechanisms are necessary and proposes that the law should require all AI-powered deepfake creation tools to prohibit the generation of intimate synthetic content and to implement comprehensive and enforceable content moderation systems.
