Abstract

The purpose of this paper is to analyse the rules of the General Data Protection Regulation (GDPR) on automated decision-making in the age of Big Data and to explore how to ensure the transparency of such decisions, in particular those taken with the help of algorithms. The GDPR, in Article 22, prohibits automated individual decision-making, including profiling, that produces legal or similarly significant effects for the data subject. At first impression, this provision appears to protect individuals strongly and potentially even to hamper the future development of AI in decision-making. However, it can be argued that this prohibition, with its numerous limitations and exceptions, resembles Swiss cheese with giant holes in it. Moreover, in the case of automated decisions involving the personal data of the data subject, the GDPR obliges the controller to provide the data subject with 'meaningful information about the logic involved' (Articles 13 and 14). If we link this information to the rights of the data subject, we can see that the information about the logic involved must enable him or her to express his or her point of view and to contest the automated decision. While this requirement fits well within the GDPR's broader quest for a high level of transparency, it also raises several questions, particularly where the decision is taken with the help of algorithms: What exactly needs to be revealed to the data subject? How can an algorithm-based decision be explained? Apart from technical obstacles, this 'algorithmic transparency' also faces intellectual property and state secrecy obstacles.
