Abstract

It seems inevitable that diagnostic and recommender artificial intelligence models will ultimately reach a point at which they outperform human clinicians. Just as antibiotics displaced a host of medicinals for treating infections, the superior performance of such models will force their adoption. This article contemplates certain ethical and legal implications of that adoption, especially those involving a clinician's exposure to allegations of malpractice. The article discusses four relevant considerations: (1) the imperative of using explainable artificial intelligence models in clinical care, (2) specific strategies for diminishing liability when a clinician agrees or disagrees with a model's findings or recommendations but the patient nevertheless experiences a poor outcome, (3) relieving liability through legislation or regulation, and (4) conceiving of such models as "persons" and therefore as potential defendants in legal proceedings. We conclude with observations on clinician-vendor relationships and argue that, although advanced artificial intelligence models have not yet arrived, clinicians must begin considering their implications now.
