Abstract

People will not hold cars responsible for traffic accidents, yet they do when artificial intelligence (AI) is involved. AI systems are held responsible whether they act or merely advise a human agent. Does this mean that as soon as AI is involved, responsibility follows? To find out, we examined whether purely instrumental AI systems stay clear of responsibility. We compared AI-powered with non-AI-powered car warning systems and measured their responsibility ratings alongside those of their human users. Our findings show that responsibility is shared when the warning system is powered by AI, but not when it is purely mechanical, even though people consider both systems mere tools. Surprisingly, whether the warning prevents the accident introduces an outcome bias: the AI receives higher credit than blame depending on what the human manages or fails to do.
