Abstract

Explanation is an important function in symbolic artificial intelligence (AI). For example, explanation is used in machine learning and in case-based reasoning for interpreting prediction failures. Furthermore, explaining the results of a reasoning process to a user who is not a domain expert must be a component of any inference system. Experience with expert systems has shown that the ability to generate explanations is absolutely crucial for the user acceptance of AI systems (Davis, Buchanan & Shortliffe 1977). In contrast to symbolic systems, neural networks have no explicit, declarative knowledge representation and therefore have considerable difficulty generating explanation structures. In neural networks, knowledge is encoded in numeric parameters (weights) and distributed across the entire system.
