Abstract

Artificial Intelligence (AI)-based surveillance technologies such as facial recognition, emotion recognition and other biometric technologies have been rapidly introduced by public and private entities around the world, raising major concerns about their impact on fundamental rights, the rule of law and democracy. This article questions the effectiveness of the European Commission's Proposal for a Regulation on Artificial Intelligence, known as the AI Act, in addressing the threats and risks to fundamental rights posed by AI biometric surveillance systems. It argues that, in order to meaningfully address risks to fundamental rights, the proposed classification of these systems should be reconsidered. Although the draft AI Act acknowledges that some AI practices should be prohibited, its multiple exceptions and loopholes should be closed, and new prohibitions, in particular of emotion recognition and biometric categorisation systems, should be added to counter AI surveillance practices that violate fundamental rights. The AI Act should also introduce stronger legal requirements, such as third-party conformity assessment, fundamental rights impact assessment and transparency obligations, and should strengthen existing EU data protection law and the rights and remedies available to individuals, so as not to miss the unique opportunity to adopt the first legal framework that truly promotes trustworthy AI.
