Abstract

The human in the loop is often advocated as a panacea for concerns about AI-powered machines, which increasingly make consequential decisions in all realms of life. However, can we rely on humans to prevent unethical decisions by machines? We run online experiments modeling both the case where the machine serves as a corrective to the human and the case where the human serves as a corrective to the machine. Our results suggest that, in the former case, humans make similar decisions whether the corrective is a machine or another human. In the latter case, humans take advantage of, rather than correct, bad decisions by machines, turning into partners in crime. These findings caution us not to count too much on the human in the loop as a moral corrective. Instead, they argue for human–machine decision-making in which the human makes the decision and the machine serves as the corrective.
