Abstract

With the increasing deployment of complex and opaque machine learning algorithms (black boxes) to make decisions in areas that profoundly affect individuals, such as underwriting, judicial sentencing, and robotic driving, there are increasing calls for explanations of how these decisions are made, to assure that they are accurate, objective, and fair. Black boxes are also increasingly being explored and deployed in real-world pharmacovigilance, so an understandable question is whether such explainability is important in this domain as well. “Explainable artificial intelligence (AI)” refers to a set of tools that aim to provide understandable approximations, hypotheses, or more precise traces of the “inner thoughts” of black boxes. We consider whether and how general arguments made for explainable AI, such as building trust and gaining scientific insights, apply to pharmacovigilance, including an explication of the limitations of these arguments and of the methods themselves. Given the multiple application domains within pharmacovigilance, the answer is situation dependent. If the field of explainable AI advances to the point of consistently providing high-quality explanations as testable hypotheses, explainable AI should be a credible addition to the pharmacovigilance toolkit for model development, signal management, and clinical pharmacovigilance. Its incremental contribution to broad trust building per se, though widely touted in general, is a dubious argument for pharmacovigilance.
