Abstract

This paper studies how our previously proposed initialization affects the rule extraction of neural networks trained by structural learning with forgetting. The proposed initialization consists of two steps: (1) initializing the weights of the hidden units so that their separation hyperplanes pass through the center of the input pattern set, and (2) initializing the weights of the output units to zero. Simulation results on Boolean function discovery problems with 5 and 7 inputs confirm that the proposed initialization yields a simpler network structure and higher rule extraction ability than the conventional initialization, which assigns uniform random numbers to all initial weights of the network.
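The two-step initialization described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, weight ranges, and array shapes are assumptions. Step 1 draws random hidden-unit weights but sets each bias so that the unit's hyperplane w·x + b = 0 passes through the center (mean) of the input pattern set; step 2 zeroes the output-unit weights.

```python
import numpy as np

def init_two_step(X, n_hidden, n_out, seed=None):
    """Sketch of the two-step initialization (names/shapes assumed).

    Step 1: random hidden weights, with each bias chosen so the
            separation hyperplane passes through the input-set center.
    Step 2: output-unit weights and biases set to zero.
    """
    rng = np.random.default_rng(seed)
    center = X.mean(axis=0)                        # center of the input pattern set
    W_h = rng.uniform(-1.0, 1.0, (n_hidden, X.shape[1]))
    b_h = -W_h @ center                            # forces W_h @ center + b_h = 0
    W_o = np.zeros((n_out, n_hidden))              # step 2: zero output weights
    b_o = np.zeros(n_out)
    return W_h, b_h, W_o, b_o

# Example: 5-input Boolean patterns, as in the paper's smaller benchmark
X = np.array([[int(b) for b in format(i, "05b")] for i in range(32)])
W_h, b_h, W_o, b_o = init_two_step(X, n_hidden=4, n_out=1, seed=0)
```

With this choice of bias, every hidden unit's hyperplane intersects the center of the pattern set, so no unit starts out saturated on the whole training set.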
