Abstract

This paper presents the results of a comparative study on the impact of various error functions in multilayer feedforward neural networks used for classification problems. The objective is to compare the properties of these error functions, both in terms of training speed and generalization ability. To avoid complexities introduced by more advanced learning algorithms, the simple backpropagation-with-momentum algorithm has been employed. A number of classification problems were solved with neural networks trained using the usual mean square error function, the mean absolute error function, the cross-entropy or maximum likelihood function, the Kalman-Kwasny error function (1991), as well as a novel error function designed by the authors. The results indicate that, in most problems examined, an error function other than the usual mean square gives better performance, both in the number of epochs needed for training and in the generalization ability of the trained network.
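The three standard error functions named in the abstract can be sketched as follows. This is an illustrative sketch only, not the authors' implementation; the Kalman-Kwasny function and the authors' novel function are omitted because their exact forms are not given here. Here `y` denotes the network output and `t` the target.

```python
import numpy as np

def mean_square_error(y, t):
    # Mean square error: average squared difference between output and target.
    return np.mean((y - t) ** 2)

def mean_absolute_error(y, t):
    # Mean absolute error: average magnitude of the output-target difference.
    return np.mean(np.abs(y - t))

def cross_entropy_error(y, t, eps=1e-12):
    # Cross-entropy (maximum likelihood) error for 0/1 classification targets.
    # eps guards against log(0) at saturated outputs.
    y = np.clip(y, eps, 1 - eps)
    return -np.mean(t * np.log(y) + (1 - t) * np.log(1 - y))
```

For saturated outputs near 0 or 1, the cross-entropy error penalizes confident misclassifications much more strongly than the mean square error, which is one reason it is often favored for classification training.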

Full Text

Paper version not known
