Abstract

Extreme learning machines (ELMs) are commonly adopted as base learners in ensemble methods because of their fast learning speed and the instability of their individual results. However, most existing ensemble structures require explicit aggregation of the base learners' outputs before reaching a final decision. This study proposes a novel Biological Ensemble of ELMs, called BE-ELM, designed from a biological perspective for the first time. BE-ELM exploits the advantage of assembling multiple ELM base learners in parallel without explicit aggregation, thereby simplifying the learning procedure. Its structure is inspired by recent neuroscience findings from MIT on brain learning mechanisms. The analytical solution of BE-ELM takes a form similar to that of the basic ELM, allowing the ensemble to inherit the basic ELM's fast learning speed. Moreover, BE-ELM offers the added advantage of superior performance. Our theoretical analysis shows that a BE-ELM composed of multiple ELM base learners, each built on a subset of the original features, is equivalent to a more complex ELM on the full feature space. We prove that BE-ELM, even without any explicit aggregation, guarantees enhanced generalization capability. To confirm these findings, we conducted extensive experiments on various datasets. The results demonstrate that BE-ELM outperforms traditional ELMs and other state-of-the-art ensemble ELMs in generalization performance on most datasets. These findings suggest that BE-ELM has the potential to improve prediction outcomes in practical applications.
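
The sketch below illustrates the equivalence claim stated in the abstract: the concatenated hidden layers of several ELM base learners, each trained on a subset of the features, coincide with the hidden layer of a single ELM on the full feature space whose input weights are block-sparse. This is a minimal NumPy illustration under our own assumptions; the sigmoid activation, the two-way feature split, and all variable names are chosen for exposition and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_hidden(X, W, b):
    """Hidden-layer output of a basic ELM with a sigmoid activation (assumed)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def elm_fit(H, T):
    """Analytical ELM output weights: beta = pinv(H) @ T (least squares)."""
    return np.linalg.pinv(H) @ T

# Toy regression data: n samples, d features, L hidden nodes per base learner.
n, d, L = 200, 6, 20
X = rng.normal(size=(n, d))
T = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(n, 1))

# Ensemble of ELM base learners on feature subsets (illustrating the BE-ELM idea).
subsets = [np.array([0, 1, 2]), np.array([3, 4, 5])]   # hypothetical split
Ws = [rng.normal(size=(len(s), L)) for s in subsets]
bs = [rng.normal(size=(1, L)) for _ in subsets]
H_parts = [elm_hidden(X[:, s], W, b) for s, W, b in zip(subsets, Ws, bs)]
H_ens = np.hstack(H_parts)          # parallel learners, no explicit aggregation
beta_ens = elm_fit(H_ens, T)        # single analytical solve, as in a basic ELM

# Equivalent single ELM on the full feature space:
# embed each subset's weights into block-sparse rows over all d features.
W_full = np.zeros((d, L * len(subsets)))
for k, (s, W) in enumerate(zip(subsets, Ws)):
    W_full[s, k * L:(k + 1) * L] = W
b_full = np.hstack(bs)
H_full = elm_hidden(X, W_full, b_full)

print(np.allclose(H_ens, H_full))   # True: the two hidden layers coincide
```

Because the hidden layers agree, fitting the output weights once over the stacked hidden layer yields the same solution as fitting the equivalent single ELM, which is consistent with the abstract's claim that no explicit aggregation step is required.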
