Abstract

Ensemble learning aims to improve the generalization power and the reliability of learner models through sampling and optimization techniques. It has been shown that an ensemble constructed from a selective subset of base learners can outperform the full pool. However, efficiently constructing such an ensemble from a given learner pool remains an open problem. This paper presents an evolutionary approach for constructing extreme learning machine (ELM) ensembles. The proposed algorithm employs model diversity as the fitness function to direct the selection of base learners, and produces an optimal solution with ensemble size control. A comprehensive comparison is carried out, in which the basic ELM is used to generate a pool of neural networks and 12 benchmark regression datasets are employed in simulations. The reported results demonstrate that the proposed method outperforms other ensembling techniques, including simple averaging, bagging, and AdaBoost, in terms of both effectiveness and efficiency.
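To illustrate the overall idea, the following is a minimal sketch, not the authors' method: it trains a pool of basic ELMs (random hidden weights, least-squares output weights) on toy regression data, then uses a simple (1+1) evolutionary bit-flip search to select a subset of base learners. For simplicity the sketch uses ensemble mean-squared error as the fitness; the paper instead uses a model-diversity measure, and all function names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, y, n_hidden=20):
    """Basic ELM: random hidden-layer weights, analytic output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares solution
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy regression problem (stand-in for the benchmark datasets).
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)

# Pool of base ELMs and their cached predictions.
pool_size = 30
pool = [train_elm(X, y) for _ in range(pool_size)]
preds = np.array([elm_predict(m, X) for m in pool])  # shape (30, 200)

def fitness(mask):
    """Ensemble MSE of the selected subset (lower is better).
    The paper's fitness is a diversity measure; MSE is a stand-in here."""
    if mask.sum() == 0:
        return np.inf
    ensemble = preds[mask].mean(axis=0)
    return float(np.mean((ensemble - y) ** 2))

# Simple (1+1) evolutionary selection over inclusion masks.
mask = rng.random(pool_size) < 0.5
best = fitness(mask)
for _ in range(500):
    child = mask.copy()
    flip = rng.integers(pool_size)
    child[flip] = ~child[flip]    # bit-flip mutation
    f = fitness(child)
    if f <= best:                 # accept equal-or-better offspring
        mask, best = child, f

full_mse = float(np.mean((preds.mean(axis=0) - y) ** 2))
print(f"full-pool MSE: {full_mse:.4f}, selected-subset MSE: {best:.4f}")
```

The selected mask both shrinks the ensemble and typically lowers its error relative to averaging the whole pool, which is the effect the abstract describes; a real implementation would evaluate fitness on held-out data to avoid overfitting the selection.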
