Abstract
Decision making under uncertainty in multiagent settings is of increasing interest in decision science. The degree to which human agents depart from computationally optimal solutions in socially interactive settings is generally unknown. Such understanding provides insight into how social contexts affect human interaction and the underlying contributions of Theory of Mind. In this paper, we adapt the well-known ‘Tiger Problem’ from artificial-agent research to human participants in solo and interactive settings. Compared to computationally optimal solutions, participants gathered less information before outcome-related decisions when competing than when cooperating with others. These departures from optimality were not haphazard but showed evidence of improved performance through learning. Costly errors emerged under conditions of competition, yielding lower rates of rewarding actions and lower accuracy in predicting others. Taken together, this work provides a novel approach and insights into studying human social interaction when shared information is partial.
Highlights
Decision making under uncertainty in multiagent settings is of increasing interest in decision science
Formal computational models of agent actions can identify optimal sequences of exploration and/or option selection. These models will allow for robust artificial intelligence (AI) systems that can pair with human agents in contexts of competition and cooperation.
The Tiger Problem is an iconic challenge as AI seeks to develop sophisticated models for planning under uncertainty, and especially as it seeks to achieve adaptive interactions with human agents [22,27].
Summary
Decision making under uncertainty in multiagent settings is of increasing interest in decision science. One especially needs to know how these differences are affected by changes in competitive and cooperative environments, which in turn influence human state representation and the valuation of gains and losses for self [9,10] and others [11]. Progress in meeting these requirements will involve studying how human agents act under uncertainty in the same simulated, partially observable tasks that are used to advance robotic and other computational agents in multiagent interactions. One limitation of the original Tiger Problem is that it could not address multiagent contexts in which agents must develop models of, and be sensitive to, other agents’ representations of the environment. These contexts require some way of formally modeling the models of other agents, that is, modeling a “theory of mind” (ToM) [25], especially in situations where both the environment and other agents’ actions constitute critical uncertainties in decision making. To overcome this limitation, we introduced the interactive Tiger Problem (ITP), along with a modeling solution [26].
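To make the information-gathering dynamic concrete, the sketch below shows Bayesian belief updating in the classic single-agent Tiger Problem, on which the ITP builds. The parameterization (a listening action that reports the tiger's true location with probability 0.85) is the conventional one from the POMDP literature and is assumed here for illustration; it is not taken from this paper.

```python
def update_belief(b_left: float, heard_left: bool, p_correct: float = 0.85) -> float:
    """Bayes-update the belief that the tiger is behind the left door
    after a 'listen' action yields a noisy observation.

    b_left: prior probability that the tiger is behind the left door.
    heard_left: whether the noisy observation indicated the left door.
    p_correct: assumed probability the observation matches the true state.
    """
    # Likelihood of the observation under each hidden state
    like_left = p_correct if heard_left else 1 - p_correct
    like_right = (1 - p_correct) if heard_left else p_correct
    numerator = like_left * b_left
    return numerator / (numerator + like_right * (1 - b_left))

# Starting from a uniform belief, two consistent 'hear left' observations
# drive the belief close to certainty, which is why optimal agents often
# listen repeatedly before committing to a door.
belief = 0.5
belief = update_belief(belief, heard_left=True)   # -> 0.85
belief = update_belief(belief, heard_left=True)   # -> ~0.97
```

An optimal policy trades off the cost of each additional listen against the expected loss from opening the wrong door; the behavioral finding above is that human participants truncated this information gathering more sharply under competition.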