ABSTRACT

The EU Artificial Intelligence Act is the world's first attempt to regulate artificial intelligence holistically. It presents an extensive, multi-faceted definition of deepfakes and introduces specific safeguards against their misuse. These guardrails have the potential to become a global role model. The AI Act uses concepts that leave room for interpretation, which is important given the constant development of the technology and the resulting need for adjustment. However, some of its solutions raise the problem of vagueness, which in turn may invite a narrow, purely linguistic interpretation and reduce the scope of legally permissible countermeasures. The aim of this study is to critically evaluate the definition of deepfakes contained in the AI Act, focusing on its use of the word "existing". A narrow interpretation could exclude some synthetic media from the scope of the transparency obligations because such media would not be classified as deepfakes. A teleological interpretation of the provisions, reinforced with systemic elements, is therefore proposed, so that the safeguards built by the AI Act also cover deepfakes that do not depict any identifiable pre-existing persons, objects, places, entities or events, thereby better reflecting the goals of the regulation and complementing the value-based system of the AI Act.