Abstract

The hidden-layer neurons in a multi-layered feed-forward neural network play a critical role. From one perspective, the hidden-layer neurons establish (linear) decision boundaries in the feature space. These linear decision boundaries are then combined by succeeding layers, leading to convex-open and thereafter arbitrarily shaped decision boundaries. In this paper we show that the use of unidirectional Gaussian lateral connections from a hidden-layer neuron to an adjacent hidden layer leads to a much richer class of decision boundaries. In particular, the proposed class of networks retains the advantage of sigmoidal feed-forward networks (global characteristics) while adding the flexibility to represent local structure. An algorithm to train the proposed network is presented, and its training and validation performance is demonstrated on a simple classification problem.
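The abstract does not specify the exact functional form of the lateral connections, but one plausible reading is a forward pass in which the second hidden layer receives, in addition to its ordinary sigmoidal feed-forward input, a weighted Gaussian response to the first hidden layer's activations. The sketch below illustrates that reading; all layer sizes, the lateral weight matrix `L`, and the Gaussian parameters `mu` and `sigma` are hypothetical, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gaussian(z, mu, sigma):
    # Local (bell-shaped) response centred at mu -- the "local structure" part
    return np.exp(-((z - mu) ** 2) / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)

# Hypothetical sizes: 2 inputs, hidden layers of 4 and 3 neurons, 1 output
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(3, 4)), np.zeros(3)
W3, b3 = rng.normal(size=(1, 3)), np.zeros(1)
# Unidirectional lateral weights from hidden layer 1 into hidden layer 2
L, mu, sigma = rng.normal(size=(3, 4)), 0.0, 1.0

def forward(x):
    h1 = sigmoid(W1 @ x + b1)              # sigmoidal units: global boundaries
    lateral = L @ gaussian(h1, mu, sigma)  # Gaussian laterals: local structure
    h2 = sigmoid(W2 @ h1 + b2 + lateral)   # adjacent layer sums both inputs
    return sigmoid(W3 @ h2 + b3)

y = forward(np.array([0.5, -0.2]))
print(y.shape)  # (1,)
```

Under this assumed formulation the Gaussian laterals would be trained alongside the feed-forward weights by backpropagation, which is consistent with the abstract's claim that the network combines global sigmoidal behaviour with localized responses.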
