Abstract

Initial weight choice has long been recognized as an important aspect of the training methodology for sigmoidal feedforward neural networks. In this paper, a new mechanism for weight initialization is proposed. The mechanism distributes the initial weights so that all weights (including thresholds) leading into a given hidden node are uniformly distributed within a region, and the centers of these regions are chosen so that the regions of no two distinct hidden nodes overlap. The proposed method is compared against random weight initialization routines on five function approximation tasks using the Resilient Backpropagation (RPROP) algorithm for training. The proposed method is shown to converge to a pre-specified training goal about twice as fast as any of the random weight initialization methods. Moreover, it is shown that, at least for these problems, the networks reach a deeper minimum of the error functional during training and generalize better than networks whose weights were initialized by the random methods.
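The scheme described above can be sketched as follows. This is only an illustrative reading of the abstract, not the paper's exact prescription: the interval width, the spacing of the region centers, and the function `region_based_init` are all assumptions introduced here for demonstration.

```python
import numpy as np

def region_based_init(n_in, n_hidden, width=0.5, rng=None):
    """Sketch of per-node, non-overlapping uniform weight initialization.

    Each hidden node h receives all of its incoming weights (including
    the bias/threshold) drawn uniformly from an interval of the given
    width centered at c_h.  The centers are spaced one full interval
    width apart, so no two nodes' sampling regions overlap.  Width and
    center placement are illustrative assumptions, not the paper's
    exact values.
    """
    rng = np.random.default_rng(rng)
    # Centers spaced exactly `width` apart, symmetric about zero
    # (assumption for illustration), guaranteeing disjoint intervals.
    centers = (np.arange(n_hidden) - (n_hidden - 1) / 2.0) * width
    low = centers - width / 2.0
    high = centers + width / 2.0
    # One row per hidden node; n_in input weights plus one bias column,
    # all sampled from that node's own interval.
    W = rng.uniform(low[:, None], high[:, None], size=(n_hidden, n_in + 1))
    return W, centers

W, centers = region_based_init(n_in=3, n_hidden=4, width=0.5, rng=0)
```

Every entry of row `h` of `W` lies inside `[centers[h] - width/2, centers[h] + width/2]`, and because consecutive centers differ by exactly `width`, the four sampling intervals are pairwise disjoint.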
