A method that modifies the objective function used for designing neural network classifiers is presented. The classical mean-square error (MSE) criterion is relaxed by introducing two types of local error bias, which are treated as free parameters. Open- and closed-form solutions are given for finding these bias parameters. The new objective function integrates seamlessly into existing training algorithms such as backpropagation (BP), output weight optimization (OWO), and hidden weight optimization (HWO). The resulting algorithms are successfully applied to training neural network classifiers with a linear final layer. Classifiers are trained and tested on several data sets from pattern recognition applications, and improvement over classical iterative regression methods is demonstrated.
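The abstract does not state the exact form of the relaxed criterion. As a minimal illustrative sketch only (an assumption, not necessarily the paper's formulation), one plausible reading of "two types of local error bias" is a sign-constrained per-output bias that absorbs errors in the direction harmless to classification: a nonnegative bias on the correct-class output and a nonpositive bias on the other outputs. Under that reading, the bias minimizing the squared residual for fixed outputs has a simple closed form, sketched below.

```python
import numpy as np

def relaxed_mse(outputs, targets, labels):
    """Hypothetical relaxed MSE criterion (illustrative assumption).

    Sign constraints on the per-output bias b:
      b >= 0 for the correct-class output  (overshoot above target is free),
      b <= 0 for incorrect-class outputs   (undershoot below target is free).
    For fixed outputs y and targets t, the bias minimizing (t + b - y)^2
    subject to these constraints is the error clipped to the allowed sign.
    """
    n_patterns, _ = outputs.shape
    correct = np.zeros_like(outputs, dtype=bool)
    correct[np.arange(n_patterns), labels] = True
    err = outputs - targets
    # Closed-form bias: absorb overshoot on the correct class,
    # undershoot on the incorrect classes; penalize everything else.
    bias = np.where(correct, np.maximum(err, 0.0), np.minimum(err, 0.0))
    residual = targets + bias - outputs
    return np.mean(np.sum(residual**2, axis=1))

# Example: 3 patterns, 2 classes, targets of 1 for the correct class, 0 otherwise.
y = np.array([[1.3, -0.2], [0.4, 0.9], [0.8, 0.1]])
t = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
print(relaxed_mse(y, t, labels=np.array([0, 0, 0])))
```

In this sketch, only errors that push an output toward the wrong side of its target contribute to the loss, which is one standard way such a relaxation can leave a linear final layer trainable by regression-style updates.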