Abstract
Theory of mind refers to the human ability to reason about the mental content of other people, such as their beliefs, desires, and goals. People use their theory of mind to understand, reason about, and explain the behaviour of others. Having a theory of mind is especially useful in collaboration, since individuals can then reason about what the other individual knows as well as what reasoning they might perform. Similarly, hybrid intelligence systems, in which AI agents collaborate with humans, require that the agents reason about the humans using a computational theory of mind. However, keeping track of all the individual mental attitudes of every other individual becomes computationally very difficult. Accordingly, this paper provides a mechanism for computational theory of mind based on abstracting single beliefs into higher-level concepts. These abstractions can be triggered by social norms and roles. Their use in decision making serves as a heuristic for choosing among interactions, thus facilitating collaboration. We provide a formalization based on epistemic logic to explain how various inferences enable such a computational theory of mind. Using examples from the medical domain, we demonstrate how such a theory of mind enables an agent to interact with humans effectively and can increase the quality of the decisions humans make.