Abstract

The Extreme Learning Machine (ELM) is a recent algorithm for training single-hidden-layer feedforward neural networks (SLFN) which has shown promising results compared with other standard methods. ELM randomly chooses the weights and biases of the hidden nodes and analytically determines the output weights and biases. The result is a very fast algorithm with good generalization performance in most cases. Since the original ELM was presented, several variants based on similar ideas have been published, such as EI-ELM, OP-ELM, OS-ELM, and EM-ELM. In this paper, we present a bi-objective micro genetic ELM (μG-ELM). Instead of assigning the hidden weights and biases at random, this algorithm generates them by means of a micro genetic algorithm guided by two objectives: the number of hidden nodes and the mean square error (MSE). Furthermore, as a novelty, μG-ELM incorporates a regression model to decide whether the number of hidden nodes should be increased or decreased. The proposed algorithm reaches similar errors but, in general, with a smaller number of hidden nodes, while maintaining a competitive execution time.
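To make the baseline concrete, the sketch below illustrates the standard ELM training step the abstract refers to: hidden weights and biases are drawn at random and the output weights are obtained analytically via the Moore-Penrose pseudoinverse. This is a minimal illustration of plain ELM, not the proposed μG-ELM (which would replace the random draw with a micro genetic search over the hidden parameters); the function names and settings are illustrative, not taken from the paper.

```python
import numpy as np

def elm_train(X, y, n_hidden, seed=None):
    """Train a basic single-hidden-layer ELM (illustrative sketch).

    Hidden-layer weights and biases are sampled at random; the output
    weights are then computed analytically with the pseudoinverse of
    the hidden-layer output matrix H.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))  # random input-to-hidden weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                                    # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                              # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass of the trained ELM."""
    return np.tanh(X @ W + b) @ beta
```

In a genetic variant such as the one proposed, the candidate (W, b) pairs would be encoded as individuals and evolved against the two objectives (MSE and number of hidden nodes), while the analytic solve for beta remains unchanged.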
