Abstract
The original localized generalization error model (LGEM) derives an upper bound on the error between a target function and a radial basis function neural network (RBFNN) within a neighborhood of the training samples. In brief, LGEM bounds the generalization error by the sum of three terms: the training error, a stochastic sensitivity measure (SSM), and a constant. This paper extends the original LGEM to single-hidden-layer feed-forward neural networks (SLFNs) trained with the extreme learning machine (ELM), a non-iterative training algorithm. The extended LGEM provides useful guidelines for improving the generalization ability of SLFNs trained with ELM, and an algorithm for selecting the architecture of such SLFNs is proposed on its basis. Experimental results on a number of benchmark data sets show that an approximately optimal architecture, in terms of the number of hidden neurons, can be found with our method. Furthermore, experiments on eleven UCI data sets show that the proposed method is both effective and efficient.
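To make the bound structure described above concrete, the following is a schematic rendering in LaTeX; the symbols used here are illustrative and are not necessarily the paper's exact notation.

```latex
% Schematic form of the LGEM bound: localized generalization error over the
% Q-neighborhood of the training samples is bounded by training error,
% stochastic sensitivity measure, and a constant. Symbol names are assumed.
\[
  R_{SM}^{*}(Q) \;\le\; R_{\mathrm{emp}} \;+\; E_{\mathrm{SSM}} \;+\; A ,
\]
% where $R_{SM}^{*}(Q)$ denotes the localized generalization error,
% $R_{\mathrm{emp}}$ the training error, $E_{\mathrm{SSM}}$ the stochastic
% sensitivity measure, and $A$ a constant.
```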
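As a rough illustration of the pipeline the abstract describes, here is a minimal Python sketch of ELM training (random hidden weights, output weights solved in closed form via the Moore-Penrose pseudoinverse) together with an LGEM-style architecture-selection loop. The function names, the perturbation width `q`, and the Monte Carlo estimate of the stochastic sensitivity are all assumptions for illustration; the paper derives the SSM analytically rather than by sampling.

```python
import numpy as np

def elm_train(X, y, n_hidden, rng):
    """ELM: random hidden weights and biases (never updated -- no iterations);
    output weights solved by least squares."""
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ W + b)        # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y  # Moore-Penrose pseudoinverse solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def lgem_score(X, y, W, b, beta, q=0.1, n_mc=50, rng=None):
    """Illustrative LGEM-style score: training MSE plus a Monte Carlo estimate
    of the stochastic sensitivity (mean squared output change under input
    perturbations bounded by q). The constant term of the bound is omitted
    since it does not affect the ranking of architectures."""
    rng = rng or np.random.default_rng(0)
    pred = elm_predict(X, W, b, beta)
    r_emp = np.mean((pred - y) ** 2)  # training error
    diffs = []
    for _ in range(n_mc):
        dx = rng.uniform(-q, q, size=X.shape)  # perturbation in the Q-neighborhood
        diffs.append(np.mean((elm_predict(X + dx, W, b, beta) - pred) ** 2))
    return r_emp + np.mean(diffs)  # training error + SSM estimate

# Pick the hidden-layer size with the lowest LGEM-style score (toy data).
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
best = min(range(5, 105, 5),
           key=lambda m: lgem_score(X, y, *elm_train(X, y, m, rng)))
print("selected number of hidden neurons:", best)
```

The selection loop mirrors the idea of the proposed method at a high level: among candidate hidden-layer sizes, prefer the one that minimizes the bound estimate rather than the training error alone.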