Abstract

In the medical domain, the uptake of an AI tool crucially depends on whether clinicians are confident that they understand the tool. Bayesian networks are popular AI models in the medical domain, yet explaining predictions from Bayesian networks to physicians and patients is non-trivial. Various explanation methods for Bayesian network inference have appeared in the literature, each focusing on different aspects of the underlying reasoning. While there has been considerable technical research, little is known about the actual user experience of such methods. In this paper, we present the results of a study in which four different explanation approaches were evaluated through a survey: a group of human participants was questioned on their perceived understanding, in order to gain insights into their user experience.
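To make the explanation problem concrete, the following minimal sketch computes the kind of prediction such methods must explain: the posterior probability of a disease given a positive test result in a two-node Bayesian network. The network structure, probabilities, and variable names are hypothetical illustrations, not taken from the study.

```python
# Minimal sketch of inference in a hypothetical two-node Bayesian
# network (Disease -> Test). All numbers here are illustrative;
# the study's networks and explanation methods are not shown.

# Prior: P(Disease)
p_disease = 0.01

# Conditional probability table: P(Test = positive | Disease)
p_pos_given_disease = 0.95      # sensitivity
p_pos_given_no_disease = 0.05   # false-positive rate

# Marginal probability of a positive test (law of total probability)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_no_disease * (1 - p_disease))

# Posterior via Bayes' rule: P(Disease | Test = positive)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(f"P(Disease | positive test) = {p_disease_given_pos:.3f}")
# Prints ~0.161: despite a 95%-sensitive test, the low prior
# dominates the posterior.
```

Even in this tiny network the result is counterintuitive (a positive result from a highly sensitive test still yields only a 16% posterior), which is precisely the kind of reasoning that an explanation method must convey to physicians and patients.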
