Abstract

A coding method, distributed normalisation, is presented to speed up the training of a back-propagation neural network classifier. In contrast to one-node normalisation coding, in which each feature is scaled and presented on a single input node, the value of each feature variable is distributed over a number of input nodes, increasing the representation resolution available to certain parts of the variable's range. A distinct advantage of this coding method is that it maintains the generalisation capability of one-node normalisation coding.
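
The abstract does not spell out the encoding itself, but a common way to realise this kind of distributed coding is to split a feature's range into contiguous sub-ranges, each owned by one input node, with a piecewise-linear (thermometer-style) activation. The sketch below illustrates that idea; the function names, the node count n_nodes, and the uniform sub-range widths are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def one_node_normalise(x, lo, hi):
    # Conventional one-node coding: scale the raw feature value
    # to [0, 1] and present it on a single input node.
    return np.array([(x - lo) / (hi - lo)])

def distributed_normalise(x, lo, hi, n_nodes=4):
    # Assumed distributed coding: split [lo, hi] into n_nodes equal
    # sub-ranges, one input node each.  The node whose sub-range
    # contains x ramps linearly from 0 to 1 across that sub-range;
    # nodes below it saturate at 1, nodes above it stay at 0.  Each
    # node thus devotes its full [0, 1] output range to a narrow
    # part of the feature, giving that part a finer representation.
    width = (hi - lo) / n_nodes
    starts = lo + np.arange(n_nodes) * width
    return np.clip((x - starts) / width, 0.0, 1.0)

# Example: the same feature value under both codings.
x = 7.3
print(one_node_normalise(x, 0.0, 10.0))    # [0.73]
print(distributed_normalise(x, 0.0, 10.0)) # [1.   1.   0.92 0.  ]
```

Because each node in the distributed code operates over a narrower interval, small changes in the feature produce larger changes in some node's activation than under one-node coding, which is one plausible mechanism for the reported speed-up in training.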
