Abstract

The ability of a neural network to represent an input-output mapping is usually measured only in terms of how well it fits the data according to some error criterion. This ‘black box’ approach provides little understanding of the network representation or of how it should be structured. This paper investigates the topological structure of multilayer feedforward neural networks (MFNNs) and explores the relationship between the numbers of neurons in the hidden layers and finite-dimensional topological spaces. It is shown that a class of three layer (two hidden layer) neural networks is equivalent to a canonical-form approximation of nonlinearity. This theoretical framework leads to insights into the architecture of multilayer feedforward neural networks, confirms the common belief that three layer (two hidden layer) feedforward networks are sufficient for general application, and yields an approach for determining the appropriate number of neurons in each hidden layer.
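The three layer (two hidden layer) architecture discussed in the abstract can be sketched as follows. This is a minimal illustrative forward pass in NumPy, not the paper's method; the layer widths `n_h1` and `n_h2` stand in for the hidden-layer neuron counts the paper's framework aims to determine, and the choice of `tanh` activations is an assumption for illustration.

```python
import numpy as np

def mfnn_forward(x, params, act=np.tanh):
    """Forward pass of a three layer (two hidden layer) MFNN:
    input -> hidden layer 1 -> hidden layer 2 -> linear output."""
    (W1, b1), (W2, b2), (W3, b3) = params
    h1 = act(W1 @ x + b1)    # first hidden layer
    h2 = act(W2 @ h1 + b2)   # second hidden layer
    return W3 @ h2 + b3      # linear output layer

def init_params(n_in, n_h1, n_h2, n_out, seed=0):
    # Small random weights; n_h1 and n_h2 are the hidden-layer widths
    # whose appropriate values the paper relates to the topology of the
    # input-output mapping.
    rng = np.random.default_rng(seed)
    return [
        (0.5 * rng.standard_normal((n_h1, n_in)), np.zeros(n_h1)),
        (0.5 * rng.standard_normal((n_h2, n_h1)), np.zeros(n_h2)),
        (0.5 * rng.standard_normal((n_out, n_h2)), np.zeros(n_out)),
    ]

# Example: a network mapping R^2 -> R with 8 and 4 hidden neurons.
params = init_params(n_in=2, n_h1=8, n_h2=4, n_out=1)
y = mfnn_forward(np.array([0.3, -0.7]), params)
```

The point of the sketch is only the layer structure: two nonlinear hidden layers followed by a linear output, which is the class of networks the paper argues suffices for general application.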
