Abstract
The number of hidden neurons has a great influence on the generalization capability of a Multilayer Perceptron Neural Network (MLPNN). The ultimate goal of building an MLPNN is to recognize (i.e., generalize to) future unseen samples correctly based on the training samples. Therefore, the Localized Generalization Error Model (L-GEM) is adopted in this work to select the architecture of an MLPNN. The L-GEM has been successfully applied to Radial Basis Function Neural Network (RBFNN) architecture selection, feature selection, and other applications. In this work, we propose a new L-GEM for MLPNNs and demonstrate its application in MLPNN architecture selection. Experimental results show that the L-GEM based MLPNN architecture selection method outperforms several off-the-shelf methods.
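The selection procedure the abstract describes — evaluate candidate hidden-layer sizes and keep the one whose estimated generalization error is smallest — can be sketched as below. The L-GEM bound itself is not reproduced here; `error_estimate` and the toy numbers are hypothetical stand-ins, not values from the paper.

```python
def select_architecture(candidate_sizes, error_estimate):
    """Return the hidden-layer size with the smallest estimated
    generalization error (e.g. an L-GEM-style bound)."""
    # min() over the candidates, keyed by the error estimate,
    # implements the selection step described in the abstract.
    return min(candidate_sizes, key=error_estimate)

# Toy illustration with made-up error estimates (NOT from the paper):
toy_estimates = {5: 0.31, 10: 0.22, 20: 0.25, 40: 0.34}
best = select_architecture(list(toy_estimates), toy_estimates.get)
```

In practice `error_estimate` would train an MLPNN with the given number of hidden neurons and evaluate the L-GEM bound on it, so the loop trades off training error against the model's sensitivity to perturbed inputs.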