Abstract

Legal rules are based on an imagined regulatory scene that contains presumptions about the reality a regulation addresses. Regarding automated decision-making (ADM), these include a belief in the ‘good human decision’, which is mirrored in the cautious approach of the GDPR. Yet the ‘good human decision’ defies psychological insight into human weaknesses in decision-making. Instead, it reflects a general unease about algorithmic decisions. Against this background, I explore how algorithms become part of human relationships and whether the use of decision systems conflicts with human needs, values and the prevailing socio-legal framework. Inspired by the concept of Human-Centered AI, I then discuss how the law may address the apprehension towards decision systems. I outline a human-focused approach to regulating ADM that aims at improving the practice of decision-making. The interaction between humans and machines is an essential part of this regulation. It must address the socio-legal changes caused by decision systems, both to integrate them into the existing value system and to adapt that system to the changes brought about by ADM. A human-focused approach thus connects the benefits of technology with human needs and societal values.
