Abstract

Governments are increasingly using artificial intelligence to improve workflows and services. Applications range from predicting climate change, crime, and earthquakes to flu outbreaks, poor air quality, and tax fraud. Artificial agents are already having an impact on eldercare, education, and open government, enabling users to complete procedures through a conversational interface. Whether replacing humans or assisting them, they are the technological fix of our times. In two experiments and a follow-up study, we investigate factors that influence the acceptance of artificial agents in positions of power, using attachment theory and disappointment theory as explanatory models. We find that when the state of the world provokes anxiety, citizens perceive artificial agents as reliable proxies that can replace human leaders. Moreover, people accept artificial agents as decision-makers in politics and security more willingly when they deem their leaders or government to be untrustworthy, disappointing, or immoral. Finally, we discuss these results with respect to theories of technology acceptance and the delegation of duties and prerogatives.
