Abstract

Explanation is an important function in symbolic artificial intelligence (AI). For instance, explanation is used in machine learning and in case-based reasoning, and, most importantly, the explanation of the results of a reasoning process to a user must be a component of any inference system. Experience with expert systems has shown that the ability to generate explanations is crucial for user acceptance of AI systems. In contrast to symbolic systems, neural networks have no explicit, declarative knowledge representation and therefore have considerable difficulty generating explanation structures: their knowledge is encoded in numeric parameters (weights) and distributed across the entire system. The intention of this paper is to discuss the ability of neural networks to generate explanations. It is shown that connectionist systems benefit from the explicit coding of relations and from the use of highly structured networks, which make explanation and explanation components (ECs) possible. Connectionist semantic networks (CSNs), i.e. connectionist systems with an explicit conceptual hierarchy, form a class of artificial neural networks that can be extended with an explanation component giving meaningful responses to a limited class of "How" questions. An explanation component of this kind is described in detail.
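
To make the central idea concrete, the following is a minimal, hypothetical sketch, not the paper's implementation: a tiny connectionist semantic network whose relations are coded explicitly as labelled, weighted links, so that an explanation component can answer a "How" question by reading the contributing links back to the user. All names here (the class CSN, the methods propagate and explain_how, the example concepts) are illustrative assumptions.

```python
# Sketch of a connectionist semantic network (CSN) with an
# explanation component (EC) for "How" questions. Illustrative only.

class CSN:
    def __init__(self):
        self.links = []          # explicit relations: (source, relation, target, weight)
        self.activation = {}     # concept -> activation level

    def add_link(self, source, relation, target, weight=1.0):
        self.links.append((source, relation, target, weight))
        self.activation.setdefault(source, 0.0)
        self.activation.setdefault(target, 0.0)

    def propagate(self, inputs, steps=2):
        """Spread activation from input concepts along the weighted links,
        recording which links contributed to each concept's activation."""
        self.activation = {c: 0.0 for c in self.activation}
        self.trace = {c: [] for c in self.activation}   # the EC's bookkeeping
        for concept, value in inputs.items():
            self.activation[concept] = value
        for _ in range(steps):
            delta = {c: 0.0 for c in self.activation}
            for src, rel, tgt, w in self.links:
                contribution = w * self.activation[src]
                if contribution > 0.0:
                    delta[tgt] += contribution
                    self.trace[tgt].append((src, rel, w, contribution))
            for c in delta:
                self.activation[c] = min(1.0, self.activation[c] + delta[c])

    def explain_how(self, concept):
        """Answer 'How was <concept> activated?' by reading the explicit
        relation labels off the contributing links -- possible only because
        the relations are coded explicitly, not buried in the weights."""
        lines = [f"How was '{concept}' activated "
                 f"(level {self.activation[concept]:.2f})?"]
        for src, rel, w, contribution in sorted(self.trace[concept],
                                                key=lambda t: -t[3]):
            lines.append(f"  because '{src}' --{rel} (w={w})--> '{concept}' "
                         f"contributed {contribution:.2f}")
        return "\n".join(lines)


net = CSN()
net.add_link("canary", "is-a", "bird", weight=0.9)   # explicit conceptual hierarchy
net.add_link("bird", "is-a", "animal", weight=0.9)
net.add_link("bird", "can", "fly", weight=0.7)
net.propagate(inputs={"canary": 1.0})
print(net.explain_how("fly"))
```

The sketch illustrates the contrast drawn in the abstract: because the relation labels are explicit rather than implicit in the weights, the EC can translate the numeric activation trace into a symbolic answer to a "How" question.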
