Abstract

Extreme learning machines (ELMs) are an attractive alternative to multilayer perceptrons because, in practice, ELMs require optimizing only the number of hidden neurons, typically by grid search with cross-validation. Nevertheless, the large number of hidden neurons remaining after training is a significant drawback for the effective use of such classifiers. To overcome this drawback, we propose a new approach that prunes hidden-layer neurons and achieves a fixed-size hidden layer without harming the classifier's accuracy, and may even improve it. The main idea is to discard neurons that are irrelevant, or so similar to others that they are unnecessary, yielding a simplified yet accurate model. Unlike previous works based on genetic algorithms and simulated annealing, our proposal keeps only a subset of the randomly generated neurons in the hidden layer, with no weight adjustment and no growth of the hidden layer. In this work, we compare our proposal, called simulated annealing for pruned ELM (SAP-ELM), with the pruning methods known as optimally pruned ELM, genetic algorithms for pruning ELM, and sparse Bayesian ELM. On the basis of our experiments, we can state that SAP-ELM is a promising alternative for classification tasks. We highlight that, because our proposal yields fixed-size models, it can help solve problems where memory consumption is critical, such as in embedded systems, and it helps to control the maximum size that models may reach after training.
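The abstract does not spell out the algorithmic details of SAP-ELM, but the idea it describes, generating a pool of random hidden neurons and using simulated annealing to select a fixed-size subset (with output weights solved by least squares, as in a standard ELM), can be sketched as follows. All hyperparameter names, the sigmoid activation, the swap neighbourhood, and the accuracy-based acceptance criterion are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(H, y):
    # Standard ELM output step: least-squares weights via the pseudo-inverse.
    return np.linalg.pinv(H) @ y

def elm_accuracy(H, beta, y):
    # Binary classification accuracy with a 0.5 decision threshold (assumed).
    pred = (H @ beta > 0.5).astype(float)
    return (pred == y).mean()

def sap_elm(X, y, n_candidates=100, n_keep=20, n_iters=500, t0=1.0, cooling=0.99):
    """Simulated-annealing selection of a fixed-size neuron subset (sketch)."""
    d = X.shape[1]
    W = rng.standard_normal((d, n_candidates))   # random input weights, never adjusted
    b = rng.standard_normal(n_candidates)        # random biases, never adjusted
    H_full = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid hidden activations (assumed)

    def score(idx):
        H = H_full[:, idx]
        return elm_accuracy(H, elm_fit(H, y), y)

    # Start from a random fixed-size subset of the candidate pool.
    current = rng.choice(n_candidates, n_keep, replace=False)
    best = current.copy()
    cur_s = best_s = score(current)
    t = t0
    for _ in range(n_iters):
        # Neighbour move: swap one kept neuron for one currently left out.
        cand = current.copy()
        pool = np.setdiff1d(np.arange(n_candidates), current)
        cand[rng.integers(n_keep)] = rng.choice(pool)
        s = score(cand)
        # Metropolis acceptance: always take improvements, sometimes take worse ones.
        if s > cur_s or rng.random() < np.exp((s - cur_s) / t):
            current, cur_s = cand, s
            if s > best_s:
                best, best_s = cand.copy(), s
        t *= cooling  # geometric cooling schedule
    return W[:, best], b[best], best_s
```

Because the pool is never enlarged and the input weights are never retrained, the returned model always has exactly `n_keep` hidden neurons, which is what makes the final model size (and hence memory footprint) fixed in advance.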
