Abstract

Implementation of machine learning (ML) may be limited by patients' right to "meaningful information about the logic involved" when ML influences healthcare decisions. Given the complexity of healthcare decisions, ML outputs will likely need to be understood and trusted by physicians and then explained to patients. We therefore investigated the associations among physicians' understanding of ML outputs, their ability to explain those outputs to patients, and their willingness to trust the outputs, across different ML explainability methods. We designed a physician survey presenting a diagnostic dilemma that could be resolved by an ML risk calculator. Physicians rated their understanding of, ability to explain, and trust in each of 3 ML outputs: one with no explanation of its logic (the control) and 2 that used different model-agnostic explainability methods. The relationships among understanding, explainability, and trust were assessed using Cochran-Mantel-Haenszel tests of association. The survey was sent to 1315 physicians, and 170 (13%) returned completed surveys. There were significant associations between understanding and explainability (P < .001), between understanding and trust (P < .001), and between explainability and trust (P < .001). ML outputs that used model-agnostic explainability methods were preferred by 88% of physicians over the control condition; however, no particular explainability method had a greater influence on intended physician behavior. Physician understanding, explainability, and trust in ML risk calculators are related. Physicians preferred ML outputs accompanied by model-agnostic explanations, but the choice of explainability method did not alter intended physician behavior.
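The abstract does not name the specific model-agnostic explainability methods shown to physicians. As a minimal sketch of the general class, the example below applies permutation feature importance, one widely used model-agnostic technique, to a hypothetical risk-calculator model; the feature names and data are synthetic and purely illustrative, not drawn from the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for an ML risk calculator trained on
# synthetic patient features (names are illustrative only).
feature_names = ["age", "heart_rate", "troponin", "bmi", "smoker"]
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Permutation importance treats the model as a black box: it shuffles
# one feature at a time and measures the drop in held-out accuracy,
# which is why the approach is called "model-agnostic" -- it makes no
# use of the model's internal structure.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=30, random_state=0)
for name, mean_drop in zip(feature_names, result.importances_mean):
    print(f"{name}: {mean_drop:.3f}")
```

An explanation of this style could accompany an ML output as a ranked list of the features that most influenced the risk estimate, which is the general shape of explanation the survey's non-control conditions presented.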
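The abstract names Cochran-Mantel-Haenszel (CMH) tests but not the rating scales or stratification used. The sketch below, assuming hypothetical dichotomized ratings (high/low understanding vs. trust/no trust) stratified by the 3 ML output conditions, shows how such a test can be run with statsmodels' StratifiedTable; the counts are invented for illustration.

```python
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# Hypothetical counts only: rows = understanding (high/low),
# columns = trust (yes/no), one 2x2 table per ML output condition.
tables = [
    np.array([[20, 15], [10, 40]]),  # control (no explanation)
    np.array([[45, 10], [8, 25]]),   # explainability method A
    np.array([[42, 12], [9, 22]]),   # explainability method B
]

# CMH test of association between understanding and trust,
# controlling for the explanation condition as the stratum.
result = StratifiedTable(tables).test_null_odds()
print(f"CMH statistic = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```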
