Abstract
Generalization ability is crucial for pattern recognition and classification. However, the generalization error cannot be computed directly, because the true input distribution and the class labels of unseen samples are unknown. The Localized Generalization Error Model (L-GEM) was proposed to provide an upper bound on the generalization error for unseen samples that are similar to the training samples. The L-GEM upper bound (R*_SM) is computed for each output neuron of a Radial Basis Function Neural Network (RBFNN). A multi-class classification problem requires more than one output neuron: for a K-class problem there are K L-GEM values, one per output neuron. How to use these K values to select the architecture of an RBFNN remains an open problem. One could take the average, maximum, or minimum of the K L-GEM values to estimate the overall performance of the RBFNN under investigation. All three are reasonable and each provides some information about the generalization capability of the RBFNN. In this work, we empirically examine these three fusion methods for L-GEM-based RBFNN architecture selection on four UCI datasets. Experimental results show that the maximum and average fusion methods perform well.
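As a minimal illustration of the fusion step, the Python sketch below shows how the K per-neuron L-GEM bounds might be combined by average, maximum, or minimum to rank candidate architectures. The function name, method labels, and example numbers are our own assumptions; the paper does not provide code, and computing the R*_SM bounds themselves is outside this sketch.

```python
import numpy as np

def fuse_lgem(lgem_values, method="max"):
    """Fuse the K per-output-neuron L-GEM upper bounds into one score.

    lgem_values: sequence of length K, one R*_SM bound per output neuron.
    method: "avg", "max", or "min" -- the three fusion rules compared
            in the paper (the labels here are hypothetical).
    """
    lgem_values = np.asarray(lgem_values, dtype=float)
    if method == "avg":
        return lgem_values.mean()
    if method == "max":
        return lgem_values.max()  # worst per-class bound
    if method == "min":
        return lgem_values.min()  # best per-class bound
    raise ValueError(f"unknown fusion method: {method}")

# Hypothetical usage: choose the candidate architecture (e.g., number of
# hidden neurons) whose fused L-GEM bound is smallest. The bound values
# below are made up for illustration only.
candidates = {10: [0.21, 0.35, 0.18], 20: [0.15, 0.22, 0.19], 40: [0.17, 0.30, 0.12]}
best = min(candidates, key=lambda m: fuse_lgem(candidates[m], method="max"))
print(f"selected architecture: {best} hidden neurons")
```

Under the maximum rule the selected architecture is the one whose worst-performing output neuron still has the tightest bound, which is one intuition for why this rule performs well empirically.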