Abstract
This study investigates the effects of AI versus human source attribution on trust and forgiveness toward an identical AI-generated corporate apology statement for a simulated data-breach scandal. While AI-generated messages hold promise for crisis communication, their impact on public perception remains understudied. The research was inspired by incidents in which ChatGPT was used to generate official apology statements, raising questions about the authenticity of AI-generated apologies. Using a fictitious retail company's apology statement crafted with the assistance of ChatGPT, participants were randomly assigned to conditions indicating that the statement was AI-aided, human-written, or of unspecified origin (control). The results indicate that participants reported higher forgiveness intention and trust toward the statement credited to a human than toward the statement credited to AI. The human-attributed statement was also perceived as more empathetic and sincere than the AI-attributed statement. Mediation analysis revealed that perceived empathy mediated the effects on forgiveness intention and trust for the human-attributed statement, whereas perceived sincerity mediated these effects for the AI-aided statement. These findings suggest that source attribution significantly influences public perception of organizational apologies during crises. This study contributes to understanding the evolving role of AI in crisis management and underscores the importance of ethical and transparent communication practices.