Abstract

The increasing use of algorithmic decision-making (ADM) systems has led to new and in part urgent challenges for the law, specifically in the field of data protection. Decisions made by (classic and “intelligent”) algorithms can make people feel powerless, and the underlying opacity makes it hard to understand the reasons for a specific decision. This also increases the danger of discriminatory results, as it becomes increasingly difficult to trace whether decisions were (indirectly) based on prohibited characteristics. Especially on the private market, the consequences for individuals and for society as a whole can be problematic. Much discussion has revolved around the question of how to achieve more transparency in order to improve regulation and ensure accountability for those using ADM systems. These discussions mostly focus on the transparency-enhancing instruments that the General Data Protection Regulation (GDPR) offers. While the GDPR offers a promising array of such instruments for data subjects and public authorities, specific instruments for public transparency are missing. The chapter discusses the notion of a public database that gives graduated access to information on ADM systems used by companies, allowing the consequences of algorithms to be analyzed and enabling individuals to make more informed decisions. Granting such access would require taking the legitimate interests of the affected companies into account, but it could further overall societal trust and acceptance while increasing control. The contribution analyzes how some of the GDPR’s provisions (such as Articles 20 and 35) can help with this endeavor, draws comparisons to similar regulatory approaches in other areas (such as Environmental Law), and makes specific recommendations for action.
