Abstract

A simple dynamic model of a neural network is presented. Using this model, we improve the performance of a three-layer multilayer perceptron (MLP). The dynamic model of the MLP is used to make fundamental changes in the network optimization strategy. These changes are: neuron activation functions are used that reduce the probability of singular Jacobians; successive regularization is used to constrain the volume of the weight space being minimized; Boltzmann pruning is used to constrain the dimension of the weight space; and prior class probabilities are used to normalize all error calculations, so that statistically significant samples of rare but important classes can be included without distorting the error surface. All four of these changes are made in the inner loop of a conjugate gradient optimization iteration and are intended to simplify the training dynamics of the optimization. On handprinted digit and fingerprint classification problems, these modifications improve error-reject performance by factors between 2 and 4 and reduce network size by 40% to 60%.
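The abstract gives no formulas, but two of the modifications lend themselves to a short illustration. Below is a minimal sketch, assuming one plausible reading of each: error terms weighted by class priors so that oversampled rare classes do not distort the error surface, and stochastic weight pruning with a Boltzmann-style acceptance probability. The function names, the `priors` mapping, and the `temperature` parameter are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def prior_weighted_mse(outputs, targets, labels, priors):
    """Mean-squared error with each sample's contribution scaled by its
    class prior. This is a hypothetical weighting scheme suggested by the
    abstract; the paper's exact normalization is not given there."""
    per_sample = np.mean((outputs - targets) ** 2, axis=1)  # error per sample
    weights = np.array([priors[c] for c in labels])         # prior of each sample's class
    return np.sum(weights * per_sample) / np.sum(weights)

def boltzmann_prune(weights, temperature, rng):
    """Stochastically zero out weights, pruning a weight w with
    probability exp(-w**2 / T) so that small-magnitude weights are
    removed more often. A hypothetical reading of 'Boltzmann pruning';
    the paper's actual criterion may differ."""
    keep_prob = 1.0 - np.exp(-(weights ** 2) / temperature)
    mask = rng.random(weights.shape) < keep_prob  # True = keep this weight
    return weights * mask

# Usage sketch: 100 samples, 10 output classes, a 5x10 weight matrix.
rng = np.random.default_rng(0)
outputs = rng.random((100, 10))
targets = rng.random((100, 10))
labels = rng.integers(0, 10, size=100)
priors = {c: 0.1 for c in range(10)}  # uniform priors for illustration
loss = prior_weighted_mse(outputs, targets, labels, priors)
pruned = boltzmann_prune(rng.normal(size=(5, 10)), temperature=0.01, rng=rng)
```

In the paper's setting, both steps would run inside the inner loop of the conjugate gradient iteration; here they are shown standalone only to make the two ideas concrete.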
