Abstract

In this article, we investigate the relative performance of artificial neural networks and structural models of decision theory by training 69 artificial intelligence models on a dataset of 7080 human decisions in extensive-form games. The objective is to compare the predictive power of AIs that use a representation of another agent's decision-making process to improve their own performance during a strategic interaction, using human game-theory data for training and testing. Our findings hold implications for understanding how AIs can use constrained structural representations of other decision makers, a crucial aspect of our 'Theory of Mind' (ToM). We show that key psychological features, such as the Weber–Fechner law applied to economic quantities, are evident in our tests; that simple linear models are highly robust; and that being able to switch between different representations of another agent is a very effective strategy. Testing different models of AI-ToM paves the way for the development of learnable abstractions for reasoning about the mental states of 'self' and 'other', thereby providing further insights for fields such as social robotics, virtual assistants, and autonomous vehicles, and fostering more natural interactions between people and machines.
