Abstract

Backpropagation learning (BP) is known for its serious limitations in generalizing knowledge from certain types of learning material. In this paper, we describe a new learning algorithm, BP-SOM, which overcomes some of these limitations, as shown by its application to four benchmark tasks. BP-SOM is a combination of a multi-layered feedforward network (MFN) trained with BP and Kohonen's self-organizing maps (SOMs). During the learning process, hidden-unit activations of the MFN are presented as learning vectors to SOMs trained in parallel. The SOM information is used when updating the connection weights of the MFN, in addition to standard error backpropagation. The effect of the augmented error signal is that, during learning, clusters of hidden-unit activation patterns of instances associated with the same class tend to become highly similar. In a number of experiments, BP-SOM is shown (i) to improve generalization performance (i.e., to avoid overfitting); (ii) to increase the number of hidden units that can be pruned without loss of generalization performance; and (iii) to provide a means for automatic rule extraction from trained networks. The results are compared with results achieved by two other learning algorithms for MFNs: conventional BP and BP augmented with weight decay. From the experiments and the comparisons, we conclude that the hybrid BP-SOM architecture, in which supervised and unsupervised learning co-operate in finding adequate hidden-layer representations, successfully combines the advantages of supervised and unsupervised learning.
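
The abstract describes the core mechanism of BP-SOM: hidden-unit activations are fed to a SOM trained in parallel, and the SOM's class-labelled prototypes contribute an extra term to the hidden-layer error. The sketch below illustrates this idea in plain NumPy for a single hidden layer. It is not the authors' implementation; the SOM labelling scheme, the learning rates, and the parameter `som_alpha` weighting the SOM term are illustrative assumptions.

```python
# Minimal sketch of the BP-SOM idea (not the paper's exact algorithm).
# Assumptions: one hidden layer, sigmoid units, a flat SOM with one prototype
# per cell, naive per-cell class labelling, and a weighting factor `som_alpha`.
import numpy as np

rng = np.random.default_rng(0)

# --- tiny MFN: input -> hidden (sigmoid) -> output (sigmoid) ---
n_in, n_hid, n_out = 4, 6, 2
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))

# --- SOM over hidden-unit activation vectors ---
som_cells = 9
som = rng.uniform(size=(som_cells, n_hid))        # prototype vectors
som_labels = np.full(som_cells, -1)               # class label per cell (assumed scheme)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, y_onehot, label, lr=0.1, som_lr=0.05, som_alpha=0.25):
    # forward pass
    h = sigmoid(W1 @ x)
    o = sigmoid(W2 @ h)

    # standard BP error signals
    delta_o = (y_onehot - o) * o * (1 - o)
    delta_h = (W2.T @ delta_o) * h * (1 - h)

    # SOM part: find the best-matching cell for the hidden activation vector
    bmu = int(np.argmin(np.linalg.norm(som - h, axis=1)))
    som[bmu] += som_lr * (h - som[bmu])           # move prototype toward activation
    som_labels[bmu] = label                       # naive labelling: last class wins (assumption)

    # augmented error: pull the hidden activation toward the prototype
    # when the winning cell carries the same class label as the instance
    if som_labels[bmu] == label:
        delta_h += som_alpha * (som[bmu] - h) * h * (1 - h)

    # weight updates (in-place)
    W2 += lr * np.outer(delta_o, h)
    W1 += lr * np.outer(delta_h, x)

# example: one training step on a random instance of class 0
x = rng.uniform(size=n_in)
train_step(x, np.array([1.0, 0.0]), label=0)
```

The intended effect, as stated in the abstract, is that hidden-unit activation patterns of same-class instances are pulled toward shared prototypes, so clusters of such patterns become highly similar during training.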
