Abstract
A method that modifies the objective function used for designing neural network classifiers is presented. The classical mean-square error criterion is relaxed by introducing two types of local error bias, which are treated as free parameters. Open- and closed-form solutions are given for finding these bias parameters. The new objective function integrates seamlessly into existing training algorithms such as backpropagation (BP), output weight optimization (OWO), and hidden weight optimization (HWO). The resulting algorithms are successfully applied to training neural network classifiers having a linear final layer. Classifiers are trained and tested on several data sets from pattern recognition applications. Improvement over classical iterative regression methods is clearly demonstrated.
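The abstract does not spell out the relaxed criterion, but the general idea it describes — absorbing part of each output error into a free bias parameter with a closed-form optimum — can be illustrated with a minimal one-sided variant of MSE. The sketch below is an assumption for illustration only, not the paper's exact formulation: here the "harmless" residual direction (correct-class output above its target, other outputs below theirs) is absorbed by an optimal bias, which turns out to be the clipped residual.

```python
import numpy as np

def relaxed_mse(y, t, correct):
    """Illustrative one-sided 'relaxed' MSE (a sketch, not the paper's
    exact objective).  Each desired output gets a free bias delta whose
    sign is constrained so that shifting the target cannot hurt
    classification; the closed-form optimal delta is the residual clipped
    to that allowed direction, so over-satisfied targets cost nothing.

    y       : (N, C) array of network outputs
    t       : (N, C) array of desired outputs
    correct : (N, C) boolean mask, True at each pattern's true class
    """
    r = y - t  # raw residuals against the unmodified targets
    # Closed-form optimal bias: absorb the residual when it lies in the
    # harmless direction (positive for the true class, negative otherwise).
    delta = np.where(correct, np.maximum(r, 0.0), np.minimum(r, 0.0))
    e = r - delta  # only the remaining, harmful error is penalized
    return np.mean(np.sum(e * e, axis=1))

# Usage: a pattern whose correct-class output overshoots its target and
# whose wrong-class output undershoots incurs zero relaxed error, while
# classical MSE would still penalize both residuals.
y = np.array([[1.2, -0.3]])
t = np.array([[1.0, 0.0]])
mask = np.array([[True, False]])
loss = relaxed_mse(y, t, mask)  # 0.0 under this relaxed criterion
```

Because the optimal bias is available in closed form, the relaxed criterion can be substituted into gradient-based training (BP) or least-squares weight updates (OWO/HWO) without changing the outer algorithm.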
International Journal on Artificial Intelligence Tools