Abstract

The trust game, a simple two-player economic exchange, is extensively used as an experimental measure of the trust and trustworthiness of individuals. We construct deep neural network–based artificial intelligence (AI) agents to participate in a series of experiments based on the trust game. These artificial agents are trained by playing with one another repeatedly, without any prior knowledge, assumptions, or data regarding human behavior. We find that, under certain conditions, AI agents produce actions that are qualitatively similar to the decisions of human subjects reported in the trust game literature. We further explore the factors that influence the emergence and level of cooperation among artificial agents in the game. This study offers evidence that AI agents can develop trusting and cooperative behaviors purely through an interactive trial-and-error learning process. It constitutes a first step toward building multiagent-based decision support systems in which interacting artificial agents leverage social intelligence to achieve better collective outcomes.

This paper was accepted by Yan Chen, behavioral economics and decision analysis.

Funding: Y. (D.) Wu gratefully acknowledges financial support from the RSCA Seed [Grant 22-RSG-01-004] at San Jose State University.

Supplemental Material: Data are available at https://doi.org/10.1287/mnsc.2023.4782.
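For readers unfamiliar with the game referenced above, the canonical trust game works as follows: an investor receives an endowment, sends some portion of it to a trustee, the sent amount is multiplied before reaching the trustee (commonly tripled), and the trustee then returns any share of the multiplied amount. The sketch below illustrates only these payoff mechanics, using assumed parameters (endowment of 10, multiplier of 3); the paper's exact game parameters, repetition structure, and learning architecture are not specified in this abstract.

```python
def trust_game_payoffs(endowment, sent, returned, multiplier=3):
    """Compute (investor, trustee) payoffs for one round of the trust game.

    Assumed canonical rules: the investor sends `sent` out of `endowment`,
    the sent amount is multiplied by `multiplier` on its way to the trustee,
    and the trustee returns `returned` out of that multiplied amount.
    """
    assert 0 <= sent <= endowment
    multiplied = multiplier * sent
    assert 0 <= returned <= multiplied
    investor_payoff = endowment - sent + returned
    trustee_payoff = multiplied - returned
    return investor_payoff, trustee_payoff


# Example: full trust (send everything) with an even split of the tripled amount.
print(trust_game_payoffs(endowment=10, sent=10, returned=15))  # (15, 15)
```

In the paper, the send and return decisions are produced by deep neural network agents that learn from repeated play rather than by fixed rules; the function above only defines the resulting payoffs for whatever decisions the agents make.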
