Abstract

This paper proposes a new learning method for interpreting the inference mechanism of neural networks by addressing one aspect of the vanishing-information problem: truncated information, in which input information cannot be transmitted because hidden layers are truncated. This phenomenon tends to occur when the number or strength of connection weights is forced to be small so that a few important weights can be selected, improving interpretation as well as generalization. A compromise must therefore be made between increased selectivity and smooth information transmission in multi-layered neural networks. This compromise is achieved through local selectivity control, that is, by restricting the selection range for connection weights and by weakening the selectivity. The new method was applied to three data sets: a symmetric data set, a bankruptcy data set, and a company-performance data set. In all three experiments, we show that the new method can reveal how neural networks produce their targets while achieving better generalization. The symmetric data set shows how the method acquires the features necessary for classification at different selectivity levels. The bankruptcy data set demonstrates that the method identifies the main important variables, based on correlations between inputs and targets. Finally, the company-performance data set clarifies some relations between the performance of companies and their top messages.
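The abstract does not specify the exact form of the selectivity mechanism. As a purely illustrative sketch (not the authors' algorithm), magnitude-based weight selection with a tunable selectivity level could look as follows, where the `selectivity` parameter and the `select_weights` helper are assumptions introduced here for illustration:

```python
import numpy as np

def select_weights(weights, selectivity):
    """Zero out all but the largest-magnitude connection weights.

    `selectivity` in [0, 1) is the fraction of weights suppressed.
    Higher selectivity keeps fewer, more important weights, but
    risks truncating information flow through the hidden layers.
    """
    # Threshold below which weights are treated as unimportant.
    threshold = np.quantile(np.abs(weights), selectivity)
    mask = np.abs(weights) >= threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))

# Weak selectivity: most weights survive, so information is
# transmitted smoothly through the layer.
w_weak = select_weights(w, 0.25)

# Strong selectivity: only the strongest weights survive, which
# improves interpretability but may truncate information.
w_strong = select_weights(w, 0.75)

print(np.count_nonzero(w_weak), np.count_nonzero(w_strong))
```

Weakening the selectivity, in this sketch, simply means lowering the suppressed fraction so that more connections continue to carry information.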
