Developing a chatbot to handle citizen requests in a municipal office requires multiple design choices. We use public value theory to test how value positions shape these design choices. In a conjoint experiment, we asked German citizens (n = 1690) and front desk officers in municipalities (n = 267) to evaluate hypothetical chatbot designs that differ in their fulfillment of goals derived from different value positions: (1) maintaining security, privacy, and accountability, (2) improving administrative performance, and (3) improving user-friendliness and empathy. Experimental results show that citizens prefer chatbots programmed by domestic firms, value chatbots that take routine decisions without exercising discretion, and strongly prefer human intervention when conversations fail. While altering the salience of public sector values through priming does not consistently affect citizens' design choices, we find systematic differences between citizens and front desk officers. However, these differences are qualitative rather than fundamental. We conclude that citizens and front desk officers share public values that provide a sufficient basis for chatbot designs that overcome a potential legitimacy gap of AI in citizen-state service encounters.