Abstract

Theory of mind refers to the ability to reason explicitly about the unobservable mental content of others, such as their beliefs, goals, and intentions. People often use this ability to understand the behavior of others as well as to predict their future behavior. People even take this ability a step further and use higher-order theory of mind: they reason about the way others make use of theory of mind and, in turn, attribute mental states to different agents. One possible explanation for the emergence of the cognitively demanding ability of higher-order theory of mind suggests that it is needed to deal with mixed-motive situations. Such mixed-motive situations involve partially overlapping goals, so that both cooperation and competition play a role. In this paper, we consider a particular mixed-motive situation known as Colored Trails, in which computational agents negotiate through alternating offers with incomplete information about the preferences of their trading partner. In this setting, we determine to what extent higher-order theory of mind is beneficial to computational agents. Our results show limited effectiveness of first-order theory of mind, while second-order theory of mind turns out to benefit agents greatly by allowing them to reason about the way they can communicate their interests. Additionally, we let human participants negotiate with computational agents of different orders of theory of mind. These experiments show that people spontaneously make use of second-order theory of mind in negotiations when their trading partner is capable of second-order theory of mind as well.

Highlights

  • In social settings, people often predict the behavior of others by using their theory of mind [57]; they reason about unobservable mental content such as beliefs, desires, and intentions of others

  • The figure shows the increase in score as a result of negotiation in the Colored Trails game of both the participant and the computational trading partner, when playing against a ToM0 agent, a ToM1 agent, and a ToM2 agent

  • We have simulated interactions between computational agents to show how higher orders of theory of mind can help in obtaining better outcomes in negotiation


Introduction

People often predict the behavior of others by using their theory of mind [57]; they reason about unobservable mental content such as beliefs, desires, and intentions of others. Without this theory of mind, an individual is limited to reasoning only about observable behavior, such as in the sentence “Mary is looking in the drawer”. Using first-order theory of mind, people attribute mental states directly to others, as in “Mary believes that her keys are in the drawer”. Using second-order theory of mind, people understand sentences such as “Alice believes that Bob knows that Carol is throwing him a surprise party”, and reason about the way Alice is reasoning about Bob’s knowledge.
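The recursive structure of these orders of theory of mind can be illustrated with a toy sketch. The following is not the paper's actual Colored Trails model; the two-offer game, the payoff tables, and the acceptance rule are all illustrative assumptions. An order-k agent predicts its partner's response by simulating the partner as an order-(k-1) agent, while an order-0 agent has no model of the partner's mental states at all.

```python
# Toy sketch of order-k theory of mind agents in a two-offer negotiation.
# Assumption: an offer is accepted if it pays the partner at least as much
# as the counteroffer the partner itself would make.

OFFERS = ["A", "B"]

def choose_offer(k, my_payoff, partner_payoff):
    """Return the offer an order-k agent proposes, given both payoff tables."""
    if k == 0:
        # Zero-order: no mental-state model; assume every offer is equally
        # likely to be accepted.
        accept = {o: 0.5 for o in OFFERS}
    else:
        # Order k: simulate the partner as an order-(k-1) agent to predict
        # which counteroffer it would make, and assume it accepts any offer
        # at least as good for it as that counteroffer.
        counter = choose_offer(k - 1, partner_payoff, my_payoff)
        accept = {
            o: 1.0 if partner_payoff[o] >= partner_payoff[counter] else 0.0
            for o in OFFERS
        }
    # Propose the offer maximizing expected own payoff.
    return max(OFFERS, key=lambda o: my_payoff[o] * accept[o])

# Opposed preferences: I prefer offer A, my partner prefers offer B.
mine = {"A": 3, "B": 1}
theirs = {"A": 1, "B": 3}

print(choose_offer(0, mine, theirs))  # "A": blindly proposes its favorite
print(choose_offer(1, mine, theirs))  # "B": concedes to what the partner wants
print(choose_offer(2, mine, theirs))  # "A": predicts the partner will concede
```

In this toy setting the second-order agent recovers its preferred outcome precisely by reasoning about the first-order reasoning of its partner, mirroring the advantage of second-order theory of mind reported in the paper, though in a much simpler game.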

