Abstract

Implicit stochastic models, including both ‘deep neural networks’ (dNNs) and the more recent unsupervised foundation models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms that are contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems would like to understand how they operate in order to be able to use them safely and reliably, there has emerged a new field called ‘explainable AI’ (XAI). When we examine the XAI literature, however, it becomes apparent that its protagonists have redefined the term ‘explanation’ to mean something else, namely: ‘interpretation’. Interpretations are indeed sometimes possible, but we show that they give at best only a subjective understanding of how a model works. We propose an alternative to XAI, namely certified AI (CAI), and describe how an AI can be specified, realized, and tested in order to become certified. The resulting approach combines ontologies and formal logic with statistical learning to obtain reliable AI systems which can be safely used in technical applications.

Highlights

  • This paper focuses on the question of how explainable Artificial Intelligence (AI), attitude bias, and related concerns can be addressed

  • We show that implicit stochastic models do not allow explanations, but only one or another sort of interpretation in the sense of hermeneutics

  • Who among us would be satisfied with a pacemaker or insulin pump whose operations are merely interpreted? The very idea that one might be willing to deploy end-to-end stochastic AI in mission-critical technical production systems is in any case absurd

Summary

Introduction

Because the models are created implicitly, and because of their huge size, there is no way in which the processes by which they estimate a stochastic output ŷ from an input vector x could be made explicit and understandable, for example to a human being. This situation has been aggravated still further in recent times by the development of so-called foundation models [4], which, unlike supervised regression models or supervised dNNs, are unsupervised. No matter how they are obtained, they are in every case either a functional or an operator consisting, in the unfolded equational view that can always be obtained from the network view, of an equation with billions of terms and parameters, for which it is again impossible to tell how they create the output estimate ŷ from a given input x. For these reasons, certain aspects of using dNNs in production systems in any sector of the economy, including the public sector, have been identified as possible areas of concern.
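
As an illustration of this unfolded equational view, the following is a minimal sketch, assuming a small feed-forward network; the layer widths, the ReLU nonlinearity, and the names (sigma, f, widths) are illustrative choices, not taken from the paper:

# Illustrative sketch (not from the paper): a toy feed-forward network written out
# as the composition  ŷ = σ(W3·σ(W2·σ(W1·x + b1) + b2) + b3).
# Layer widths, the ReLU nonlinearity, and all names are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

widths = [784, 512, 512, 10]          # toy sizes; production dNNs reach 1e6 to 1e11 parameters
params = [(rng.standard_normal((m, n)) * 0.01, np.zeros(m))
          for n, m in zip(widths[:-1], widths[1:])]

def sigma(z):
    # elementwise nonlinearity (ReLU)
    return np.maximum(z, 0.0)

def f(x):
    # estimate ŷ from the input vector x by composing all layers
    h = x
    for W, b in params:
        h = sigma(W @ h + b)
    return h

x = rng.standard_normal(widths[0])
y_hat = f(x)

n_params = sum(W.size + b.size for W, b in params)
print(n_params)  # about 670,000 parameters even for this small toy network

Even at this toy scale, the fully multiplied-out expression for ŷ contains a combinatorially large number of interacting terms, and the parameter count grows to billions for current production models, which is why no causal, term-by-term account of how ŷ arises from x can be given.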

Implicit Stochastic Model Explanation and Interpretation
Attempts at Model Interpretation
Classic Types of Model Interpretation
Local Interpretation
Global Interpretation
Reasons for Model Interpretation Failure
Deep Reasons for Deep Model Explanation Failure
Attitude Bias in Statistical Learning
Certified AI
Priors
Other Approaches to Enhance Prior Knowledge in AI Applications
Legal and Ethical Aspects of Certified AI
Findings
Conclusions