Abstract
Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future investigations of (potentially) causal relationships, and to generate possible explanations of target phenomena in cognitive science. In this way, Explainable AI contributes, over and above machine learning itself, to the efficiency and scope of data-driven scientific research.
Highlights
Models developed using machine learning (“ML models”) are increasingly prevalent in scientific research
Although it is becoming increasingly clear that XAI techniques can be used to great effect in engineering (Doran et al., 2017; Hohman et al., 2018; Ribeiro et al., 2016) and AI governance (Goodman & Flaxman, 2017; Wachter et al., 2018), it remains uncertain whether, and if so how, Explainable AI can be used in scientific research. This paper addresses this uncertainty by considering one specific way in which Explainable AI can contribute to scientific research
Recent technical and philosophical discussions recognize the problem that opacity poses to the use of such models, and some of these discussions have begun to reflect on the possibility of solving this problem through the use of Explainable AI
Summary
Models developed using machine learning (“ML models”) are increasingly prevalent in scientific research. A central aim of the Explainable AI research program is to develop and deploy post-hoc analytic techniques with which to answer questions about what opaque models are doing, why they do what they do, and how they work (Zednik, 2019). Although these techniques are becoming increasingly familiar to philosophers, the possibilities and limits of Explainable AI remain underexplored. As efforts to increase the transparency of ML models proceed in other domains, the influence of Explainable AI is likely to be increasingly felt in scientific research as well. For this reason, philosophers of science should pay attention to the various roles that XAI techniques can play in scientific research, and the present discussion is a first attempt at doing so. More than being just a solution to the problem that opacity poses, Explainable AI possesses unique epistemic qualities that are likely to make it a significant driver of scientific exploration in the future
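To make the notion of a post-hoc analytic technique concrete, the following minimal sketch (not drawn from the paper) applies permutation feature importance, one widely used post-hoc method, to an already-trained classifier. The dataset, model choice, and hyperparameters are illustrative assumptions; the point is only that the technique probes an opaque model from the outside, addressing the "what is the model doing?" question by measuring which input features its predictions actually depend on.

```python
# A minimal sketch of one post-hoc XAI technique: permutation feature
# importance. It treats the trained model as a black box and measures how
# much predictive accuracy drops when each input feature is shuffled.
# The dataset and model here are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque model (the "black box" whose behavior we want to probe).
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Post-hoc analysis: shuffle each feature and record the accuracy drop.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)

# Report the five features the model's predictions depend on most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]:<30} importance = {result.importances_mean[i]:.3f}")
```

In the exploratory spirit the paper describes, features ranked highly by such an analysis are not themselves explanations; they are candidate starting points for future investigations of (potentially) causal relationships in the target domain.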