Abstract

This paper develops an account of how the implementation of ML models in healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for these decision-makers. These inferential problems lay the groundwork for many of the ethical problems that currently claim centre stage in the bioethical debate. Accordingly, this paper argues that the best way to make progress in remedying these ethical problems is to distil their epistemic core and to identify appropriate epistemic norms as guardrails. The viability of this approach is demonstrated with respect to four key issues: trust, responsibility, paternalism, and fairness. In that respect, the paper also contributes to a sharper view of the specific methodological challenges for ethical theorizing when the point of reference is a complex statistical model that makes predictions, rather than an individual agent acting for reasons.
