Abstract

The development of, and growing interest in, automated decision-making have opened a debate over the form and substance of the means by which its application should be regulated. Part of this discourse involves proposals advocating the creation of a new human right not to be subject to an automated decision. This article questions whether such a right is necessary in light of existing substantive rules under legal frameworks already applicable to automated decision-making, specifically data protection, non-discrimination and human rights law. There are also procedural challenges that must be addressed if automated decision-making is to be adequately governed through the application of the law. Exploring these challenges helps to appreciate the significance of ensuring that existing substantive law is better implemented to protect human beings in settings where automated decision-making poses risks to individuals and groups.
