Abstract
Legal rules are based on an imagined regulatory scene that contains presumptions about the reality a regulation addresses. Regarding automated decision-making (ADM), these include a belief in the 'good human decision' that is mirrored in the cautious approach of the GDPR. Yet the 'good human decision' defies psychological insight into human weaknesses in decision-making. Instead, it reflects a general unease about algorithmic decisions. Against this background, I explore how algorithms become part of human relationships and whether the use of decision systems causes a conflict with human needs, values and the prevailing socio-legal framework. Inspired by the concept of Human-Centered AI, I then discuss how the law may address the apprehension towards decision systems. I outline a human-focused approach to regulating ADM that is aimed at improving the practice of decision-making. The interaction between humans and machines is an essential part of this regulation. It must address the socio-legal changes caused by decision systems, both to integrate them into the existing value system and to adapt that system to the changes brought forth by ADM. A human-focused approach thus connects the benefits of technology with human needs and societal values.