Abstract

The integration of generative artificial intelligence (GenAI) systems into daily life has given rise to the phenomenon of "AI hallucination," in which an AI produces convincing yet incorrect information, undermining both user experience and system credibility. This study investigates how an AI's responses to its own errors, specifically expressions of appreciation and apology, affect user perception and trust. Drawing on attribution theory, we explore whether users prefer AI systems that attribute errors internally or externally, and how these attributions affect user satisfaction. We employed a qualitative methodology, interviewing individuals aged 20 to 30 with experience using conversational AI. Respondents preferred that the AI apologize in hallucination situations and attribute responsibility for the error to external factors. The results show that transparent error communication, accompanied by detailed explanations, is essential for maintaining user trust. This research contributes to the understanding of how politeness and attribution strategies influence user engagement with AI, and it has significant implications for AI development, emphasizing the need for error communication strategies that balance transparency and user experience.
