Abstract

This study examines the impact of ethical AI information on citizens' trust in, and policy support for, governmental AI projects. Unlike previous work on direct users of AI, this study focuses on the general public. Two online survey experiments presented participants with information on six types of ethical AI measures: legal compliance, ethics-by-design measures, data-gathering limitations, human-in-the-loop, non-discrimination, and technical robustness. Results reveal that general ethical AI information has little to no effect on citizens' trust, perceived trustworthiness, or policy support. Prior attitudes and experiences, including privacy concerns, trust in government, and trust in AI, are instead strong predictors. These findings suggest that short-term communication efforts on ethical AI practices have minimal impact, and that a longer-term, comprehensive approach addressing citizens' underlying concerns and experiences is necessary to build trust in governmental AI projects. As governments' use of AI becomes more ubiquitous, understanding citizen responses is crucial for fostering trust, perceived trustworthiness, and policy support for AI-based policies and initiatives.