Abstract

The initial successes in recent years in harnessing machine learning (ML) technologies to improve medical practice and benefit patients have attracted attention across a wide range of healthcare fields. In particular, such benefits are expected to be achieved by providing automated decision recommendations to the treating clinician. Some of the hopes placed in such ML-based systems for healthcare, however, seem unwarranted, at least partially because of their inherent lack of transparency, even though their results appear convincing in terms of accuracy and reliability. Skepticism arises when the physician, as the agent responsible for diagnosis, therapy, and care, is unable to trace how findings and recommendations are generated. There is widespread agreement that complete traceability is generally preferable to opaque recommendations; opinions differ, however, on how to deal with ML-based systems whose functioning seems to remain opaque to some degree, even as so-called explicable or interpretable systems attract increasing interest. This essay examines the epistemic foundations of ML-generated information specifically and of medical knowledge generally in order to advocate differentiating clinical decision-making situations according to the depth of insight they require into the process of information generation. Empirically accurate or reliable outcomes are sufficient for some decision situations in healthcare, whereas other clinical decisions require extensive insight into ML-generated outcomes because of their inherently normative implications.

Highlights

  • The period in which the amount of medical knowledge doubles is becoming ever shorter: it was estimated that medical knowledge took approximately 50 years to double in 1950; that time had shortened to only 7 years by 1980, just under 4 years by 2010, and approximately 73 days by 2020

  • By examining the epistemic foundations of medical knowledge, this paper aims to explain why, even though accuracy is often prima facie sufficient in medical contexts, deeper transparency in machine learning (ML)-generated information is normatively necessary in some medical decisions, even if it may sometimes remain out of reach

  • By running numerous tests on the opaque system, the local interpretable model-agnostic explanations (LIME) method approximates how an individual outcome changes when the input fed into the black box is perturbed many times (as sketched in the code below)
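
A minimal sketch of this perturbation idea, assuming the open-source Python `lime` package; the classifier, feature names, and synthetic data are purely illustrative and not taken from the essay:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for patient data and an opaque model (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=["marker_a", "marker_b", "marker_c", "marker_d"],
    class_names=["negative", "positive"],
    mode="classification",
)

# LIME perturbs this single input many times, queries the black box on each
# perturbed variant, and fits a weighted linear surrogate around the case.
explanation = explainer.explain_instance(X[0], black_box.predict_proba, num_features=4)
print(explanation.as_list())  # local feature weights for this individual outcome
```

The surrogate's weights are local to this one case; the black box itself remains opaque, which matches the essay's point that such systems stay opaque to some degree.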

Introduction

The period in which the amount of medical knowledge doubles is becoming ever shorter (cf. Densen, 2011): it was estimated that medical knowledge took approximately 50 years to double in 1950; that time had shortened to only 7 years by 1980, just under 4 years by 2010, and approximately 73 days by 2020. Some authors have emphasized that users, in this case physicians, need to understand in a certain way why an algorithm delivers its outcome in order to maintain patients' trust in the resulting decisions. They demand that ML applications provide explicable or interpretable processes and/or outcomes (cf. Bjerring & Busch, 2021; Heinrichs & Eickhoff, 2020; Holzinger et al., 2020; Rudin & Radin, 2019; Tsamados et al., 2021). Medical knowledge and information generated by ML do not differ categorically, but the latter manages to represent only part of the former. This results in significant normative and communicative implications for the deliberative doctor-patient relationship. A short conclusion summarizes the relevant findings and provides perspectives on subsequent questions (Sect. 5).
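
To put the cited estimates in perspective, a short illustrative computation (the doubling periods are Densen's figures; the script itself is only a sketch):

```python
# Annual growth factor implied by a doubling period T (in days): 2 ** (365 / T).
doubling_period_days = {1950: 50 * 365, 1980: 7 * 365, 2010: 4 * 365, 2020: 73}

for year, t in doubling_period_days.items():
    print(f"{year}: doubling every {t} days -> ~{2 ** (365 / t):.2f}x per year")
```

On these estimates, medical knowledge grew by roughly 1.4% per year in 1950 but would multiply about 32-fold within a single year by 2020.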

Machine Learning in Healthcare Contexts: a Short Overview
Resolving the Epistemic Gap
The Urgent Call for Explicable or Interpretable Algorithms
Conclusion
Findings
