Abstract

The goal of this article is to argue that the debate regarding algorithmic decision-making and its impact on fundamental rights is not well-addressed and should be reframed in order to allow for adequate regulatory policies regarding recent technological developments in automation. A review of the literature on algorithms and an analysis of Articles 6, IX and 20 of the Brazilian Federal Law n° 13.709/2018 (LGPD) lead to the conclusion that claims that algorithmic decisions are unlawful because of profiling or because they replace human analysis are imprecise and do not identify the real issues at hand. Profiles are nothing more than generalizations, largely accepted in legal systems, and there are many kinds of decisions based on generalizations which algorithms can adequately make with no human intervention. In this context, this article restates the debate about automated decisions and fundamental rights focusing on two main obstacles: (i) the potential for discrimination by algorithmic systems and (ii) accountability of their decision-making processes. Lastly, the arguments put forward are applied to the current case of the covid-19 pandemic to illustrate the challenges ahead.

Highlights

  • Decision-making carried out by algorithms is no longer science fiction

  • The goal of this article is to argue that the debate regarding algorithmic decision-making and its impact on fundamental rights can be better addressed in order to allow for adequate regulatory policies regarding recent technological developments in automation

  • This article aims at restating the debate about automated decisions and fundamental rights focusing on two main obstacles: (i) the potential for discrimination by algorithmic systems and (ii) accountability of their decision-making processes


Summary

Introduction

Decision-making carried out by algorithms is no longer science fiction. Beyond the automation scenario where computers execute tasks following detailed instructions given by human programmers, recent developments in artificial intelligence, machine learning, and the emergence of Big Data have made it possible for computers to learn from large databases and “program themselves”. Through advanced software and processors, machines are able to handle extensive data and draw conclusions from it without explicit instructions on what to look for and how, making inferences, spotting correlations and identifying patterns.2 This kind of algorithmic system is widely applied to problem-solving in the most relevant areas of our individual and social lives: for example, an algorithm may decide whether we get a job or whether we have access to a line of credit. To contribute to better policy designs, this article focuses on improving the diagnosis of the issue at hand. It aims to address two different but related inquiries: (i) whether the use of algorithmic systems in decision-making is harming human dignity, and if so, (ii) whether that is a result of profiling, or rather a problem related to the removal of human judgment from the equation. Finally, as a concrete case analysis, it applies these findings to decision-making scenarios related to the ongoing covid-19 pandemic.

Striving for individualization and human scrutiny
Discrimination and accountability
Findings
Conclusion: algorithms and the case of ICU beds during the covid-19 pandemic