Abstract
Extreme learning machine (ELM) can be regarded as a single-hidden-layer feedforward neural network (FNN)-type learning system whose input weights and hidden-layer biases are randomly assigned, while the output weights need tuning. In the regression framework, a fundamental problem of ELM learning is whether the ELM estimator is universally consistent, that is, whether it can approximate an arbitrary regression function to any accuracy, provided the number of training samples is sufficiently large. The aim of this paper is two-fold: one is to verify the strong universal consistency of the ELM estimator, and the other is to present a sufficient and necessary condition on the activation function under which the corresponding ELM estimator is strongly universally consistent. The obtained results underpin the feasibility of ELM and provide theoretical guidance for the selection of activation functions in ELM learning.
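To make the ELM setup described above concrete, the following is a minimal sketch of ELM regression under the abstract's description: the input weights and hidden-layer biases are drawn at random, and only the output weights are fit (here by least squares). The function names, the `tanh` activation, and the hidden-layer width are illustrative assumptions, not specifics from the paper.

```python
import numpy as np

def elm_fit(X, y, n_hidden=100, activation=np.tanh, seed=None):
    """Fit an ELM regressor; only the output weights beta are tuned."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (fixed, not trained)
    b = rng.standard_normal(n_hidden)                # random hidden-layer biases (fixed)
    H = activation(X @ W + b)                        # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # tune output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta, activation=np.tanh):
    return activation(X @ W + b) @ beta

# Usage: estimate a regression function from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
W, b, beta = elm_fit(X, y, n_hidden=100, seed=0)
y_hat = elm_predict(X, W, b, beta)
```

The consistency question the paper studies concerns this estimator as the sample size grows: whether, for a suitable activation function, its predictions converge to the true regression function.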