Abstract

Is dishonesty more prevalent in interactions with chatbots than with humans? Amid the rise of artificial intelligence, this question holds significant economic implications. We conduct a novel experiment in which participants report the outcome of a private, payout-relevant random draw to either a chatbot or a human counterpart, with varying degrees of signaled agency. We find that signaling agency increases honesty when interacting with humans but not with chatbots. Moreover, in the presence of agency cues, participants are consistently more honest with humans. Our results suggest that social image concerns and perceived honesty norms play a more prominent role in human interactions. Surprisingly, standard online forms generate the same levels of honesty as human-to-human chat interactions. These findings offer valuable insights for designing effective communication and trust-building mechanisms in digital economies where human-chatbot interactions are increasingly prevalent.
