Abstract
Since 2018, deep fake technology has been one of the areas in which artificial intelligence has developed most dramatically, and governments increasingly view deep fakes as an emerging threat. Regulators are particularly concerned by the development and application of this technology in two main areas: image-based sexual abuse and disinformation. Despite its increasing prominence, defining what deep fakes are, and what ought to be regulated within the deep fake phenomenon, remains challenging. This article analyzes the EU-level regulatory approach to deep fakes in relation to AI regulation. This choice is motivated by the inclusion of deep fakes in the proposed EU Artificial Intelligence Act and by the nature of the provisions that apply to deep fake technology within the Act. The first part analyzes the issues and challenges of adopting a legal definition of deep fakes, highlighting areas of consensus and disagreement among scholars and industry players. Getting the scope of the definition right is essential to appropriately address the distinct harm profile of deep fake technology, specifically in relation to image-based sexual abuse and disinformation. A survey of different views shows consensus on two defining elements: the use of AI-based technology and the intent of the creator. However, this seemingly consensual definition raises practical challenges, particularly in drawing boundaries between deep fakes and lower-grade audiovisual (AV) manipulation (i.e., "cheap fakes"), and in preventing the term from being co-opted to discredit genuine AV content or to cast doubt on the veracity of AV material presented as evidence. The second part focuses on the transparency requirements for deep fakes under the proposed EU Artificial Intelligence Act. This obligation is examined in light of disclosure and labelling obligations already tested in disinformation strategies, particularly in the implementation of the 2018 EU Code of Practice on Disinformation, whose new iteration, expected in spring 2022, will likely cover deep fakes. Among the main lessons from the application of the Code of Practice is that labels alone are not an effective measure to counter disinformation or to deter its creation and dissemination. Moreover, if users are to rely on labels to judge whether they are interacting with manipulated media, more research into effective design is needed, since newer ways of enhancing transparency are available but not necessarily implemented by companies. This is particularly relevant in the context of the proposed Artificial Intelligence Act, since scholars have raised serious concerns about its enforcement architecture. Finally, the third part of the article offers a brief comparison of the regulatory responses to deep fakes in the United States and the United Kingdom, in order to further assess the current EU response to this phenomenon. In contrast to the EU response, which so far rests on minimal transparency requirements, the trend in other jurisdictions has primarily been to criminalize the malicious use of deep fakes, often assimilating it to revenge pornography even though the two are distinct phenomena. For all of these reasons, deep fakes sit at the intersection of several possible regulatory frameworks, providing an interesting case for exploring the regulatory challenges of AI in the context of the European Union.