Abstract
This study aims to uncover the underlying psychological mechanism through which individuals attribute ethical responsibility to conversational artificial intelligence (AI) and examines the implications of AI’s unethical behavior for consumer evaluation. In Study 1, participants in the high (vs. low) anthropomorphic AI condition attributed greater responsibility to the AI for its unethical behavior and less ethical responsibility to the AI developer. Moreover, the effect of anthropomorphism on ethical responsibility was mediated by perceived free will. In Study 2, a significant interaction between perceived free will and communication strategy was found: when a high degree of AI free will was perceived, an accommodative (vs. defensive) communication strategy was more effective in mitigating perceptions of the AI’s unethical behavior, whereas the defensive strategy was more effective when perceived free will was low. By revealing the psychological mechanism through which individuals attribute ethical responsibility to conversational AI, this study broadens the theoretical understanding of human–AI interaction and offers practical implications for AI communication strategies.