Abstract
Advancements in artificial intelligence (AI) have drastically simplified the creation of synthetic media. While concerns often focus on potential misinformation harms, ‘non-consensual intimate deepfakes’ (NCID) – a form of image-based sexual abuse – pose a current, severe, and growing threat, disproportionately impacting women and girls. This article examines the measures introduced by the recently adopted Online Safety Act 2023 (OSA) and argues that the new criminal offences and the ‘systems and processes’ approach the law adopts are insufficient to counter NCID in the UK. This is because the OSA relies on platform policies that often lack consistency regarding synthetic media, and on platforms’ content removal mechanisms, which offer victim-survivors only limited redress after the harm has already occurred. The article argues that stronger prevention mechanisms are necessary and proposes that the law should require all AI-powered deepfake creation tools to prohibit the generation of intimate synthetic content and to implement comprehensive and enforceable content moderation systems.