Abstract

Algorithms are playing an increasingly important role in many areas of public policy, from forecasting elections to predicting criminal recidivism. In most of these cases, the algorithms are viewed as additional tools for use by judges, analysts, and policy-makers – a form of hybrid decision-making. But such hybridization depends on the trust that both policy-makers and the public place in these algorithms. This paper reports the results of a series of experiments on individual trust in algorithms for forecasting political events and criminal recidivism. We find that people are quite trusting of algorithms relative to other sources of advice, even with minimal information about the algorithm or when they are explicitly told that humans are just as good at the task. Using a conjoint experiment, we evaluate the factors that influence people's preferences for these algorithms, finding that several factors of common concern to scholars are of little concern to the public.

