Abstract

In recent years, the European Union (EU) has strongly promoted a human-centric and trustworthy approach to artificial intelligence (AI). The 2021 proposal for a Regulation on AI, which the EU seeks to establish as a global standard, is the latest step in this direction. However, little attention has been paid to the EU’s use of AI for its own purposes, despite its wide deployment of digital technologies, notably in the field of border management. Yet such attention allows us to confront the highly moral discourse that characterises EU institutions’ communications and legislative acts with a concrete example of how the promoted values are realised “on the ground”. From this perspective, this paper takes the case study of the European Travel Information and Authorisation System (ETIAS), an EU information technology system (planned to become operational in May 2023) that will provide travel authorisation to visa-exempt third-country nationals using a profiling algorithm. The paper shows, on the one hand, that ETIAS constitutes another piece in the massive infrastructure of digital surveillance of third-country nationals that the EU has been building for years. On the other hand, ETIAS’s algorithmic process is shown to be an instrument of differential exclusion that could well have an adverse impact on certain groups of foreign travellers. Ultimately, this paper argues that, far from falling outside the scope of the trustworthy approach to AI championed by the EU, ETIAS – and more broadly the systematic risk evaluation predominant in the EU’s use of AI – is a constitutive part of it.
