Abstract

There is an increasing demand for opaque intelligent systems to explain themselves to humans, in order to increase user trust and support the formation of adequate mental models. Previous research has shown effects of different types of explanations on user preferences and performance. However, this research has not addressed the differential effects of intentional and causal explanations on both users’ trust and mental models, nor has it employed multiple trust measurement scales at multiple points in time. In the current research, the effects of three types of explanations (causal, intentional, mixed) on trust development, mental models, and user satisfaction were investigated in the context of a self-driving car. Results showed that participants were least satisfied with causal explanations, that intentional explanations were most effective in establishing high levels of trust, and that mixed explanations led to the best functional understanding of the system and resulted in the fewest changes in trust over time.
