Abstract

Automated decision making is becoming the norm across large parts of society, which raises difficult liability challenges as human control over technical systems becomes increasingly limited. This article defines “quasi-automation” as the inclusion of humans as a basic rubber-stamping mechanism in an otherwise completely automated decision-making system. Three cases of quasi-automation, in which human agency in decision making is currently debatable, are examined: self-driving cars, border searches based on passenger name records, and content moderation on social media. While specific regulatory mechanisms exist for purely automated decision making, they do not apply when human beings are merely rubber-stamping automated decisions. More broadly, most regulatory mechanisms follow a pattern of binary liability, attempting to regulate either human or machine agency rather than both. This results in regulatory gray areas where no regulatory mechanism applies, harming human rights by preventing meaningful liability for socio-technical decision making. The article concludes by proposing criteria to ensure meaningful agency when humans are included in automated decision-making systems, and relates these criteria to the ongoing debate on enabling human rights in Internet infrastructure.

Highlights

  • There has been increasing awareness over the last two decades that the Internet is a socio-technical system (Brey, 2005), comprising both human and technical aspects (Kitchin & Dodge, 2011).

  • For a wide variety of reasons, many organizations choose to keep a human in the loop when they operate automated technical systems. This is one of the most common responses to largely automated systems: it is intended to ensure that the output of a computer algorithm is never the sole basis for a decision, and that human judgement is involved as well (see the sketch after this list).

  • Existing redress and remedy frameworks need to go beyond blaming a specific individual and awarding damages; they must also ensure that the socio-technical procedures that led to the rights violation are systematically changed. One area where such regulation has developed considerably is aviation, where less binary approaches to liability have emerged over time. This approach is encapsulated in Regulation (EU) No 376/2014 on the reporting, analysis, and follow-up of occurrences in civil aviation, which calls for a “just culture”: “a culture in which front-line operators or other persons are not punished for actions, omissions or decisions taken by them that are commensurate with their experience and training, but in which gross negligence, willful violations, and destructive acts are not tolerated.” This approach, evident in several legal decisions in the aviation sector that focus on organizational liability (Brüggemeier, 1991) rather than individual liability, is increasingly common in court rulings (Schebesta, 2017), which aim to improve socio-technical systems at scale rather than merely identify who is at fault.
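
To make the distinction concrete, the following minimal Python sketch contrasts a rubber-stamping review with one in which the human can meaningfully overrule the system. All names, thresholds, and the escalation rule are hypothetical illustrations for this sketch; they are not drawn from the article or from any specific deployed system.

    # Hypothetical illustration of "quasi-automation": every name and
    # threshold below is an assumption made for this sketch.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        subject_id: str
        model_score: float          # output of an automated classifier
        recommended_action: str     # e.g. "flag" or "allow"

    def rubber_stamp_review(decision: Decision) -> str:
        """A human 'review' that merely confirms the machine output.

        Legally the decision now involves a human, but the human adds
        no independent judgement: this is quasi-automation.
        """
        return decision.recommended_action  # approved without scrutiny

    def meaningful_review(decision: Decision, reviewer_notes: str) -> str:
        """A review in which the human can realistically change the outcome.

        The reviewer must record their reasoning, and borderline scores
        are escalated rather than silently approved.
        """
        if not reviewer_notes:
            raise ValueError("reviewer must document their reasoning")
        if 0.4 < decision.model_score < 0.6:   # assumed uncertainty band
            return "escalate"                  # route to senior review
        return decision.recommended_action

    if __name__ == "__main__":
        d = Decision("case-42", model_score=0.55, recommended_action="flag")
        print(rubber_stamp_review(d))                       # -> flag
        print(meaningful_review(d, "score is borderline"))  # -> escalate

The design point is that meaningful agency requires more than a human signature on a machine output: the reviewer needs documented reasoning and a realistic path to a different outcome.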


Summary

Ben Wagner

Automated decision making is becoming the norm across large parts of society, which raises difficult liability challenges as human control over technical systems becomes increasingly limited. Most regulatory mechanisms follow a pattern of binary liability, attempting to regulate either human or machine agency rather than both. This results in regulatory gray areas where no regulatory mechanism applies, harming human rights by preventing meaningful liability for socio-technical decision making. The article concludes by proposing criteria to ensure meaningful agency when humans are included in automated decision-making systems, and relates this to the ongoing debate on enabling human rights in Internet infrastructure.

KEYWORDS: automation, function allocation, human rights, Internet architecture, technology policy, artificial intelligence, algorithms

Contents

Introduction
Police Searches Based on Passenger Name Records and Social Media Data
Outsourced Facebook Content Moderation
Assumption of Binary Liability as a Challenge
Conclusion and Paths Ahead
