Abstract

Is dishonesty more prevalent in interaction with a machine than with a human? We analyze this question using an innovative experimental setup in which participants report an unobserved, payout-relevant random draw either to a chatbot or to another human in a chat interaction, while we also vary the degree of agency displayed. We find that reporting to a chatbot that is unable to demonstrate agency induces the lowest levels of honesty, whereas reporting to a human, who can demonstrate such agency, generates the highest levels of honesty. We identify a stronger role of social-image concerns and social norms when a person interacts with another human, and we show that subjects abstain from lying more when they have more time to reflect on their behavior. Our results have implications for designing efficient means of interaction between consumers and organizations in a variety of contexts in the digital economy.
