Abstract

The emergence of increasingly capable artificial intelligence (AI) systems has raised concerns about the potential extreme risks associated with them. The issue has drawn substantial attention in the academic literature and has compelled legislators to adapt regulatory frameworks, such as the European Union AI Act (AIA), to the new paradigm. This paper examines whether the European Parliament's draft of the AIA constitutes an appropriate approach to address the risks derived from frontier models. In particular, we discuss whether the AIA reflects the policy needs diagnosed by recent literature and determine whether the requirements imposed on providers of foundation models are appropriate, sufficient, and durable. We find that the provisions are generally adequate, but insufficiently defined in some areas and lacking in others. Finally, the AIA is characterized as an evolving framework whose durability will depend on the institutions' ability to adapt to future progress.
