Abstract

Users of sociotechnical systems often have no way to independently verify whether the system output they use to make decisions is correct; they are epistemically dependent on the system. We argue that this leads to problems when the system is wrong, namely to bad decisions and violations of the norm of practical reasoning. To prevent this from occurring, we suggest the implementation of defeaters: information that a system is unreliable in a specific case (undercutting defeat) or independent information that the output is wrong (rebutting defeat). Practically, we suggest designing defeaters based on the different ways in which a system might produce erroneous outputs, and we analyse this suggestion with a case study of the risk classification algorithm used by the Dutch tax agency.
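To make the proposal concrete, the sketch below illustrates one way an undercutting defeater might be surfaced for a risk classification model: flagging cases that lie far from the data the model was trained on, one of the error sources discussed later (covariate shift). This is a minimal illustration under our own assumptions, not the implementation of the system in the case study; the features, distance measure, and threshold are hypothetical.

```python
# Minimal sketch of an "undercutting defeater": warn the user when a new case
# differs markedly from the model's training data, so its risk score may be
# unreliable. Features, distance measure, and threshold are illustrative only.
import numpy as np

def fit_reference(X_train):
    """Store the mean and (regularised) inverse covariance of the training features."""
    mean = X_train.mean(axis=0)
    cov = np.cov(X_train, rowvar=False) + 1e-6 * np.eye(X_train.shape[1])
    return mean, np.linalg.inv(cov)

def undercutting_defeater(x, mean, inv_cov, threshold=3.0):
    """Return a warning string if the case's Mahalanobis distance exceeds the threshold."""
    d = float(np.sqrt((x - mean) @ inv_cov @ (x - mean)))
    if d > threshold:
        return (f"Warning: this case differs markedly from the data the model was "
                f"trained on (distance {d:.1f}); treat its risk score with caution.")
    return None

# Example usage with synthetic data standing in for historical declarations.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))   # hypothetical features, e.g. income, deductions, ...
mean, inv_cov = fit_reference(X_train)
print(undercutting_defeater(np.array([8.0, 0.1, -7.5, 6.0]), mean, inv_cov))
```

Presented this way, the defeater does not tell the user what the correct output is (that would be a rebutting defeater); it only signals that, for this case, the system's output should carry less weight in the user's practical reasoning.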

Highlights

  • We make more and more decisions in the context of sociotechnical systems, having to reason with the information we receive from the system and act based on the options it presents us with. Desiere et al. (2019) offer a range of such examples in use by public employment services, and the use of COMPAS by the US judicial system and HireVue’s AI system that automatically scores job applicants are two more examples where users end up relying on system output to make decisions.

  • We suggest that designers of sociotechnical systems look at the different reasons why system output might be wrong and consider what information might help the user avoid bad decisions.

  • Users of sociotechnical systems are often in the difficult situation of having to make a decision based on system output while, at the same time, being unable to independently verify the correctness of that output.


Introduction

We make more and more decisions in the context of sociotechnical systems, having to reason with the information we receive from the system and act based on the options it presents us with. Desiere et al. (2019) offer a range of such examples in use by public employment services, and the use of COMPAS by the US judicial system and HireVue’s AI system that automatically scores job applicants are two more examples where users end up relying on system output to make (high impact) decisions. This sociotechnical context brings with it a conceptual challenge: how can we design the overarching systems such that their use leads to optimal decisions? This is the ideal situation, contrasted by cases where a sociotechnical system, due to interactions between users and the automated parts, leads to bad decision making. One such example is Iran Air Flight 655, which was mistaken for a military plane and shot down by the USS Vincennes shortly after take-off (Rochlin, 1991).

Epistemic Dependence
Norms of Practical Reasoning
Defining Defeaters
Designing for Defeaters
Case Study
Defeaters Based on System Inaccuracy
Defeaters Based on Covariate Shift
Defeaters Based on Missing Features
Defeaters Based on System Bias
Findings
Conclusion