Abstract

This paper proposes a new information-theoretic method, called self-organized mutual information maximization learning, to improve the generalization performance of multi-layered neural networks. In the method, the self-organizing map (SOM) is applied first, and its knowledge is passed to the subsequent multi-layered neural networks. In this process, the mutual information between input patterns and competitive neurons is forced to increase by changing the spread parameter. Although several methods for increasing information in multi-layered neural networks have been proposed, the present paper is the first to confirm that mutual information plays an important role in learning in multi-layered neural networks and to show how the mutual information can be computed. The method was applied to the extended Senate data. The experiments examined whether mutual information is actually increased by the present method, because mutual information can appear to increase merely through changes in the spread parameter. Experimental results show that mutual information increased even when the spread parameter responsible for changing it was fixed. This means that, with the present method, neural networks can be organized so as to store information content on input patterns. In addition, generalization performance improved considerably with this increase in mutual information.
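The abstract does not spell out how the mutual information between input patterns and competitive neurons is defined, so the following is a minimal sketch under common assumptions for information-theoretic SOM formulations: Gaussian competitive activations controlled by a spread parameter, a uniform distribution p(s) over input patterns, and I = Σ_s p(s) Σ_j p(j|s) log(p(j|s)/p(j)) with p(j) = Σ_s p(s) p(j|s). The function names (`competitive_probs`, `mutual_information`) and parameter names are hypothetical, not taken from the paper.

```python
import numpy as np

def competitive_probs(X, W, spread):
    """Firing probabilities p(j|s) of competitive neurons for each input.

    Assumes a Gaussian competitive activation: the closer an input is to a
    neuron's weight vector, the higher that neuron's firing probability.
    A smaller spread sharpens the competition among neurons.
    """
    # Squared Euclidean distances between inputs (rows of X) and weights (rows of W).
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
    a = np.exp(-d2 / (2.0 * spread ** 2))
    return a / a.sum(axis=1, keepdims=True)  # normalize over neurons

def mutual_information(X, W, spread):
    """I = sum_s p(s) sum_j p(j|s) log(p(j|s) / p(j)),
    with p(s) uniform over inputs and p(j) = sum_s p(s) p(j|s)."""
    p_js = competitive_probs(X, W, spread)   # shape (n_inputs, n_neurons)
    p_j = p_js.mean(axis=0)                  # marginal firing probability p(j)
    eps = 1e-12                              # numerical safety for log(0)
    return (p_js * np.log((p_js + eps) / (p_j + eps))).mean(axis=0).sum()

# Toy illustration with random data: mutual information rises as the
# spread parameter shrinks, because competition becomes more selective.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # 100 input patterns, 5 features
W = rng.normal(size=(10, 5))    # 10 competitive neurons
for spread in (4.0, 1.0, 0.25):
    print(f"spread={spread:>4}: I = {mutual_information(X, W, spread):.4f}")
```

This sketch illustrates why the paper's control experiment matters: since shrinking the spread parameter alone drives I upward, an apparent increase in mutual information could be an artifact of the parameter schedule, which is why the paper checks that I still increases when the spread parameter is held fixed.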
