Abstract

The Artificial Intelligence Act (AI Act) may prove a milestone in the regulation of artificial intelligence by the European Union. The regulatory framework proposed by the European Commission has the potential to serve as a benchmark worldwide and to strengthen the position of the EU as one of the leading players in the technology market. Among the components of the regulation are the provisions on deep fakes, which include a definition, a classification as a “specific risk” AI system, and transparency obligations. Deep fakes rightly arouse controversy and are assessed as a complex phenomenon whose malicious use significantly increases the risk of political manipulation while also contributing to disinformation and undermining trust in information and in the media. The AI Act may strengthen the protection of citizens against some of the negative consequences of misusing deep fakes, although the impact of the regulatory framework in its current form will be limited owing to the specific ways in which deep fakes are created and disseminated. The effectiveness of the provisions will depend not only on enforcement capabilities but also on the precision of their phrasing, so as to prevent misinterpretation and deliberate abuse of exceptions. At the same time, the AI Act will not cover a significant share of deep fakes which, because of the malicious intentions of their creators, will not benefit from the protection afforded by transparency obligations. This study analyses the provisions relating to deep fakes in the AI Act and proposes improvements that take the specificity of this phenomenon into account to a greater extent.

