Abstract

This article looks at three main issues raised by the PNR scheme: (i) the base‐rate fallacy and its effect on false positives; (ii) built‐in biases; and (iii) the opacity and unchallengeability of the decisions generated; and it asks whether the Court has properly addressed them. It concludes that the Advocate General (AG) and the Court failed to address the evidentiary issues, including the base‐rate fallacy, and that this is a lethal defect. It also finds that neither the Member States nor the Commission has even tried to assess whether the operation of the PNR Directive has resulted in discriminatory outputs or outcomes, and that the Court should have demanded that they produce serious, verifiable data on this, including on whether the PNR system has led in practice to discrimination. The article finds, however, that the AG and the Court provided important guidance on the third issue, in that they made clear that the use of unexplainable, and hence unreviewable and unchallengeable, “black box” machine‐learning artificial intelligence (ML/AI) systems violates the very essence of the right to an effective remedy. This means that any EU Member State that still uses such opaque ML/AI systems in its PNR screening will be in violation of the law.
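To illustrate the base‐rate fallacy the article refers to, the following is a minimal sketch in Python with purely hypothetical numbers (they are not drawn from the article or from any PNR statistics): when genuine targets are extremely rare among the screened population, even a screening model with a very low false‐positive rate produces mostly false positives.

```python
# Illustrative sketch only: all figures below are hypothetical assumptions,
# not data from the article or from the PNR system.

prevalence = 50 / 500_000_000      # assumed: 50 genuine targets among 500 million passengers
sensitivity = 0.99                 # assumed: the model flags 99% of genuine targets
false_positive_rate = 0.001        # assumed: the model wrongly flags 0.1% of innocent passengers

true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * false_positive_rate

# Positive predictive value: probability that a flagged passenger is a genuine target
ppv = true_positives / (true_positives + false_positives)
print(f"Share of flagged passengers who are genuine targets: {ppv:.4%}")
# With these assumptions, only about 0.01% of "hits" are true positives,
# i.e. the overwhelming majority of flagged passengers are innocent.
```

This is the kind of evidentiary arithmetic that, according to the article, the AG and the Court failed to engage with.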
