Abstract

The large-scale adoption of systems that automate classifications using Machine Learning (ML) algorithms raises pressing challenges, as these systems support or make decisions with profound consequences for human beings. It is important to understand how users’ trust is affected by ML models’ suggestions, even when those models are wrong. Many research efforts have focused on the user’s ability to interpret what a model has learned. In this paper, we seek to understand another aspect of ML interpretability: whether and how the presence of classification probabilities and their different distributions are related to users’ trust in model outcomes, especially for ambiguous instances. To this end, we conducted two online surveys in which we asked participants to evaluate their agreement with an ML model’s classifications of animal pictures. In the first, we analyzed their trust before and after presenting them with the model’s classification probabilities. In the second, we investigated the relationship between class probability distributions and users’ trust in the model. We found that, in some cases, the additional information is correlated with undue trust in the model’s classifications, while in others it is associated with inappropriate skepticism.

• Additional information about model decisions may be detrimental to users’ trust.
• Disclosure of additional information about model decisions may lead to overtrust.
• Distributions of class probabilities may have an impact on users’ appropriate trust.

