Abstract

As a pilot study toward estimating the generalization error of multilayer perceptron neural networks (MLPNNs), we examine several error functions as criteria for MLPNN architecture selection. We also give a brief survey of current MLPNN architecture selection methods. The average weighted F-score is widely used in the information extraction field, and we adopt it as the criterion for evaluating MLPNN performance. We first use an exhaustive MLPNN architecture selection method to investigate different selection criteria, with performance evaluated by the average weighted F-score on test data. We then adopt a genetic algorithm (GA) based architecture selection method to reduce the running time of the search. Experimental results show that the MLPNN architectures selected by the GA with the "Train MSE" criterion yield the best test average weighted F-score, with an acceptable number of hidden neurons and acceptable running time.
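The evaluation criterion named above, the average weighted F-score, is conventionally computed as the per-class F1 score weighted by each class's support (its share of the true labels). A minimal sketch of that computation is shown below; the function name `weighted_f_score` is ours, not the paper's, and the paper's exact tie-breaking conventions for degenerate classes are assumed here to default to zero.

```python
from collections import Counter

def weighted_f_score(y_true, y_pred):
    """Average weighted F-score: per-class F1, weighted by class support.

    Classes with no predicted or no true positives contribute an F1 of 0,
    a common (assumed) convention for degenerate cases.
    """
    support = Counter(y_true)          # how many true samples per class
    total = len(y_true)
    score = 0.0
    for c in support:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        score += (support[c] / total) * f1   # weight F1 by class support
    return score
```

For example, with true labels `[0, 0, 1, 1]` and predictions `[0, 1, 1, 1]`, class 0 has F1 = 2/3 and class 1 has F1 = 0.8, so the support-weighted average is about 0.733.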
