Abstract

The article discusses the human rights implications of algorithmic decision-making in the social welfare sphere. It does so against the background of the Hague District Court's 2020 judgment in a case challenging the Dutch government's use of System Risk Indication (SyRI)—an algorithm designed to identify potential social welfare fraud. Digital welfare state initiatives are likely to fall short of meeting basic requirements of legality and protecting against arbitrariness. Moreover, the intentional opacity surrounding the implementation of algorithms in the public sector not only hampers the effective exercise of human rights but also undermines proper judicial oversight. The analysis unpacks the relevance and complementarity of three legal/regulatory frameworks governing algorithmic systems: data protection, human rights law, and algorithmic accountability. Notwithstanding these frameworks' invaluable contribution, the discussion casts doubt on whether they are well suited to address the legal challenges pertaining to the discriminatory effects of the use of algorithmic systems.
