Abstract

The evaluation of the use of Artificial Intelligence (AI) in legal decisions still raises unresolved questions. These concern the perceived seriousness of possible errors, the distribution of responsibility among the different decision-makers (human or artificial), and the evaluation of an error with respect to its benevolent or malevolent consequences for the person sanctioned. Above all, assessing the possible relationships between these variables appears relevant. To this aim, we conducted a study through an online questionnaire (N = 288) in which participants considered different scenarios where a decision-maker, human or artificial, made an error of judgement concerning offences punishable by a fine (Civil Law infringement) or by years in prison (Criminal Law infringement). We found that humans who delegate to AIs are blamed less than humans acting alone, although the effect of decision-maker was subtle. In addition, people consider an error more serious when committed by a human if a sentence for a Criminal Law offence is mitigated, and when committed by an AI if a penalty for a Civil Law infringement is aggravated. The mitigated seriousness attributed to joint AI-human judgement errors suggests the potential for strategic scapegoating of AIs.
