Abstract

The multilayer perceptron (MLP) is a widely used neural network architecture, but its knowledge representation is not readily interpretable. Hidden neurons take the role of feature detectors, but the popular learning algorithms (backpropagation of error, for example), coupled with random starting weights, mean that the function implemented by a trained MLP can be difficult to analyse. This paper proposes a method for understanding the structure of the function learned by MLPs that model functions of the class f : {-1, 1}^n → ℝ^m. The approach characterises a given MLP using Walsh functions, which make the interactions among subsets of variables explicit. Demonstrations are presented of this analysis being used to monitor complexity during learning, to understand function structure, and to measure the generalisation ability of trained networks.
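For concreteness, the Walsh decomposition referred to above writes any f : {-1, 1}^n → ℝ as f(x) = Σ_S w_S ∏_{i∈S} x_i, with one coefficient w_S per subset S of input variables, where w_S = 2^{-n} Σ_x f(x) ∏_{i∈S} x_i. The sketch below computes these coefficients by exhaustive enumeration of the 2^n inputs, which is feasible only for small n; the function names and network weights are illustrative stand-ins, not the paper's implementation.

```python
import itertools
import numpy as np

def walsh_coefficients(f, n):
    """Walsh coefficients of f : {-1, 1}^n -> R by exhaustive enumeration.

    Returns a dict mapping each subset S (as a tuple of input indices) to
    w_S = 2^-n * sum over x in {-1, 1}^n of f(x) * prod_{i in S} x_i.
    Only feasible for small n, since it visits all 2^n inputs.
    """
    inputs = list(itertools.product([-1.0, 1.0], repeat=n))
    values = [f(np.array(x)) for x in inputs]
    coeffs = {}
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            # Walsh function psi_S(x) is the product of x_i over i in S
            total = sum(v * np.prod([x[i] for i in S])
                        for x, v in zip(inputs, values))
            coeffs[S] = total / len(inputs)
    return coeffs

def tiny_mlp(x):
    """A 3-input, 2-hidden-unit, 1-output MLP with illustrative weights."""
    W1 = np.array([[0.5, -1.2, 0.3],
                   [0.8, 0.1, -0.7]])
    b1 = np.array([0.1, -0.2])
    W2 = np.array([1.0, -0.5])
    h = np.tanh(W1 @ x + b1)   # hidden layer activations
    return float(W2 @ h)       # single linear output

# Print the coefficient for every subset of the 3 inputs
for S, w in sorted(walsh_coefficients(tiny_mlp, 3).items()):
    print(S, round(w, 4))
```

Coefficients w_S on large subsets S signal high-order interactions among the inputs, so tracking their magnitudes over training epochs is one way such a characterisation can monitor the complexity of the function being learned.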
