Abstract

Explainability is key to enhancing the trustworthiness of artificial intelligence in medicine. However, there exists a significant gap between physicians’ expectations for model explainability and the actual behavior of these models. This gap arises from the absence of a consensus on a physician-centered evaluation framework, which is needed to quantitatively assess the practical benefits that effective explainability should offer practitioners. Here, we hypothesize that superior attention maps, as a mechanism of model explanation, should align with the information that physicians focus on, potentially reducing prediction uncertainty and increasing model reliability. We employed a multimodal transformer to predict lymph node metastasis of rectal cancer using clinical data and magnetic resonance imaging. We explored how well attention maps, visualized through a state-of-the-art technique, can achieve agreement with physician understanding. Subsequently, we compared two distinct approaches for estimating uncertainty: a standalone estimation using only the variance of prediction probability, and a human-in-the-loop estimation that considers both the variance of prediction probability and the quantified agreement. Our findings revealed no significant advantage of the human-in-the-loop approach over the standalone one. In conclusion, this case study did not confirm the anticipated benefit of the explanation in enhancing model reliability. Superficial explanations could do more harm than good by misleading physicians into relying on uncertain predictions, suggesting that the current state of attention mechanisms should not be overestimated in the context of model explainability.
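
To make the comparison concrete, below is a minimal sketch (not the authors' code) of the two uncertainty-estimation schemes the abstract contrasts: a standalone estimate from the variance of the predicted probability, and a human-in-the-loop estimate that also folds in a quantified agreement between the model's attention map and the region a physician focuses on. The use of Monte Carlo sampling for the probability variance, intersection-over-union as the agreement metric, the linear combination, and all function names are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only; metric choices and combination rule are assumptions.
import numpy as np


def standalone_uncertainty(mc_probs: np.ndarray) -> float:
    """Uncertainty from the variance of the predicted probability alone.

    mc_probs: shape (T,), positive-class probabilities from T stochastic
    forward passes (e.g. Monte Carlo dropout) -- an assumed sampling scheme.
    """
    return float(np.var(mc_probs))


def attention_agreement(attention_map: np.ndarray,
                        physician_mask: np.ndarray,
                        threshold: float = 0.5) -> float:
    """Quantified agreement between the model's attention map and the
    physician's region of interest, here measured as intersection-over-union
    (an assumed choice of agreement metric)."""
    model_mask = attention_map >= threshold * attention_map.max()
    intersection = np.logical_and(model_mask, physician_mask).sum()
    union = np.logical_or(model_mask, physician_mask).sum()
    return float(intersection / union) if union > 0 else 0.0


def human_in_the_loop_uncertainty(mc_probs: np.ndarray,
                                  attention_map: np.ndarray,
                                  physician_mask: np.ndarray,
                                  weight: float = 0.5) -> float:
    """Combine probability variance with (1 - agreement): low agreement with
    the physician's focus raises the estimated uncertainty. The linear
    weighting is an illustrative choice."""
    variance = standalone_uncertainty(mc_probs)
    disagreement = 1.0 - attention_agreement(attention_map, physician_mask)
    return (1.0 - weight) * variance + weight * disagreement


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probs = rng.normal(loc=0.7, scale=0.05, size=20).clip(0, 1)  # T = 20 passes
    attn = rng.random((32, 32))                                   # toy attention map
    mask = np.zeros((32, 32), dtype=bool)
    mask[8:20, 8:20] = True                                       # toy physician ROI

    print("standalone uncertainty:", standalone_uncertainty(probs))
    print("human-in-the-loop uncertainty:",
          human_in_the_loop_uncertainty(probs, attn, mask))
```

In this framing, the study's negative result corresponds to the combined score providing no reliable advantage over the variance term alone when used to flag untrustworthy predictions.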
