Abstract

In this paper, we present neurule-based inference and explanation mechanisms. A neurule is a kind of integrated rule that combines a symbolic rule with neurocomputing: each neurule is considered an adaline neural unit. Thus, a neurule base consists of a number of autonomous adaline units (neurules) expressed in a symbolic-oriented syntax. There are two inference processes for neurules: the connectionism-oriented process, which gives pre-eminence to the neurocomputing approach, and the symbolism-oriented process, which gives pre-eminence to a symbolic, backward-chaining-like approach. The symbolism-oriented process proves more efficient than the connectionism-oriented one, both in the number of required computations (56.47–63.88% average reduction) and in mean runtime gain (59.95–64.89% on average), although both require almost the same average number of input values. The neurule-based explanation mechanism provides three types of explanations: ‘how’ a conclusion was derived, ‘why’ a value for a specific input variable was requested from the user, and ‘why-not’ a variable acquired a specific value. As shown by experiments, the neurule-based explanation mechanism is superior to that provided by known connectionist expert systems, another neuro-symbolic integration category: it produces fewer (64.38–69.28% average reduction) and more natural explanation rules, thereby increasing both the efficiency (mean runtime gain of 56.65–56.73% on average) and the comprehensibility of explanations.
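To make the integration concrete, the following is a minimal Python sketch of how a single neurule might be evaluated as an adaline unit. It assumes the standard adaline setup (a bias factor, one significance factor per condition, condition truth values encoded as 1/-1/0, and a threshold activation); the class names, factor values, and conditions are illustrative, not taken from the paper.

```python
# Minimal sketch of a neurule evaluated as an adaline unit.
# Assumptions (not from the abstract): truth values 1 = true,
# -1 = false, 0 = unknown; each condition carries a significance
# factor (the adaline weight); the neurule carries a bias factor;
# activation is the usual adaline threshold function.

from dataclasses import dataclass

@dataclass
class Condition:
    text: str                # e.g. "fever is high"
    significance: float      # adaline weight of this condition
    truth: float = 0.0       # 1 = true, -1 = false, 0 = unknown

@dataclass
class Neurule:
    conclusion: str
    bias: float              # bias factor (weight of the fixed input)
    conditions: list

    def weighted_sum(self) -> float:
        # Adaline combination: bias plus weighted condition truth values.
        return self.bias + sum(c.significance * c.truth for c in self.conditions)

    def fires(self) -> bool:
        # Threshold activation: the conclusion holds iff the sum is positive.
        return self.weighted_sum() > 0

# Illustrative neurule with made-up significance factors.
rule = Neurule(
    conclusion="disease is flu",
    bias=-2.2,
    conditions=[
        Condition("fever is high", 4.1, truth=1),     # known true
        Condition("cough is present", 1.5, truth=0),  # still unknown
    ],
)
print(rule.weighted_sum(), rule.fires())
# 1.9 True: the remaining unknown condition (|weight| 1.5) cannot flip the sign,
# so the conclusion can already be drawn without asking for the second input.
```

This early-firing behavior, where a conclusion is reached before all inputs are known, is part of what distinguishes the two inference processes the abstract compares in efficiency.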
