Abstract

Intelligent systems that can effectively collaborate with human users can potentially expand human decision-making capabilities in numerous domains. An important factor that determines the effectiveness of these intelligent systems is the trust of human users. How much a user should trust an intelligent system to maximize the benefits is an open question. In this paper, we present a quantitative analysis of the impact of trust on the collaboration between a human user and an intelligent decision support system (DSS) in binary classification problems. Using an agent-based simulation model, we represent trust as a static quantity, computed from a user's self-confidence, the user's confidence in the DSS, and the agents' expertise, and averaged over a set of Monte Carlo simulations. Our results identify the optimal levels of self-confidence and confidence in the DSS needed to maximize collaboration performance under different problem scenarios. They indicate that, at this optimal level of confidence, the collaboration can outperform either agent acting alone. Further, our results show that concentrated expertise in particular problem types is more beneficial than moderate knowledge across many problem types, provided that the expertise of the user and the DSS complement each other.
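The paper's exact model is not reproduced here, but a minimal sketch of the general approach it describes, a Monte Carlo simulation of a confidence-weighted vote between a user and a DSS on binary classification problems, might look like the following. All names and parameters (simulate_collaboration, user_acc, dss_acc, self_conf, dss_conf) are illustrative assumptions, not the paper's actual formulation.

```python
import random

def simulate_collaboration(user_acc, dss_acc, self_conf, dss_conf,
                           n_trials=100_000, seed=0):
    """Monte Carlo estimate of team accuracy on binary classification.

    user_acc, dss_acc   -- probability each agent classifies correctly
                           (stand-ins for the paper's 'expertise').
    self_conf, dss_conf -- weights the user places on their own judgement
                           vs. the DSS recommendation (stand-ins for the
                           paper's self-confidence / confidence-in-DSS).
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        truth = rng.randint(0, 1)
        # Each agent independently labels the problem with its own accuracy.
        user_pred = truth if rng.random() < user_acc else 1 - truth
        dss_pred = truth if rng.random() < dss_acc else 1 - truth
        # Confidence-weighted vote: each label contributes in proportion
        # to the trust placed in its source.
        score = self_conf * user_pred + dss_conf * dss_pred
        midpoint = (self_conf + dss_conf) / 2
        if score > midpoint:
            decision = 1
        elif score < midpoint:
            decision = 0
        else:
            decision = rng.randint(0, 1)  # break exact ties at random
        correct += decision == truth
    return correct / n_trials

# Example: a DSS stronger than the user, with trust skewed toward the DSS.
print(simulate_collaboration(user_acc=0.70, dss_acc=0.85,
                             self_conf=0.3, dss_conf=0.7))
```

Sweeping self_conf and dss_conf over a grid and repeating the simulation for different accuracy pairs is one simple way to explore, in the spirit of the paper, where the team's accuracy peaks and when it exceeds both agents' individual accuracies.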
