Abstract

In this paper, a new neural network capable of extracting knowledge from empirical data [1]–[6] is presented. The network utilizes the idea proposed in [2] and developed in [3], [4]. Two variants of the network are shown that differ in the activation functions of their neurons. One variant uses logarithmic and exponential activation functions, and the other is based on reciprocal activation functions. The first variant is similar to that proposed in [3]; the difference is that in our network the logarithmic activation function is applied to the hidden-layer neurons, whereas in [3] it is applied to the input signals. In the second variant, all activation functions are of the 1/x type. To the author's knowledge, such a network has not been published in the literature so far. Like that of [3], our network provides a real-valued symbolic relationship between input and output signals, derived from the numerical data describing the signals. The relationship is a continuous function created from a given set of input–output numerical data during network training. Extraction of the symbolic expression is carried out after training is finished, taking into account the network structure and the synaptic connection weights associated with the neurons. This ability to extract knowledge, also called law discovery, is a consequence of applying proper activation functions to the neurons in the hidden and output layers of the network. The neural network under consideration can also play the inverse role: instead of extracting a symbolic relation, it can serve as a neural realization of continuous functions expressed in symbolic form. The presented theory is illustrated by an example.
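To make the log/exp variant concrete, the following is a minimal sketch (not the paper's actual architecture or weights; all names and values are hypothetical) of how logarithmic hidden activations composed with an exponential output activation make the learned mapping readable as a product of powers, so the symbolic law can be read directly off the trained weights:

```python
import numpy as np

def forward(x, W, v):
    """Log/exp variant: each hidden neuron applies log to its weighted
    input sum; the output neuron applies exp to its weighted sum.
    Net effect: y = prod_j (sum_i W[j,i]*x_i) ** v[j]."""
    h = np.log(W @ x)      # hidden layer: logarithmic activation
    return np.exp(v @ h)   # output layer: exponential activation

def extract_symbolic(W, v):
    """After training, read the symbolic expression off the network
    structure and weights (the 'law discovery' step)."""
    terms = []
    for j in range(W.shape[0]):
        lin = " + ".join(f"{W[j, i]:.2f}*x{i}" for i in range(W.shape[1]))
        terms.append(f"({lin})**{v[j]:.2f}")
    return " * ".join(terms)

# illustrative (hypothetical) trained weights
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])      # hidden-layer weights
v = np.array([2.0, -1.0])      # output-layer weights

x = np.array([3.0, 4.0])
y = forward(x, W, v)           # exp(2*log(3) - log(4)) = 9/4 = 2.25
law = extract_symbolic(W, v)   # symbolic relation between inputs and output
```

With these illustrative weights the network realizes y = x0**2 / x1, showing how a continuous symbolic relationship emerges from the weights once training is finished. Note the domain restriction implied by the log activation: hidden weighted sums must be positive.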
