Abstract
There is an increasing demand for opaque intelligent systems to explain themselves to humans, in order to increase user trust and support the formation of adequate mental models. Previous research has shown effects of different types of explanations on user preferences and performance. However, this research has not addressed the differential effects of intentional and causal explanations on both users’ trust and mental models, nor has it employed multiple trust measurement scales at multiple points in time. In the current research, the effects of three types of explanations (causal, intentional, mixed) on trust development, mental models, and user satisfaction were investigated in the context of a self-driving car. Results showed that participants were least satisfied with causal explanations, that intentional explanations were most effective in establishing high levels of trust, and that mixed explanations led to the best functional understanding of the system and resulted in the most stable trust over time.