Abstract

An effective training algorithm called the extreme learning machine (ELM) has recently been proposed for single-hidden-layer feedforward neural networks (SLFNs). It randomly chooses the input weights and hidden-layer biases, and analytically determines the output weights by a simple matrix-inversion operation. This algorithm achieves good performance at extremely high learning speed. However, it may require a large number of hidden units because the input weights and hidden-layer biases are not optimized. In this paper, we propose a new approach, the evolutionary least-squares extreme learning machine (ELS-ELM), which determines the input weights and biases of the hidden units using the differential evolution algorithm, in which the initial generation is produced not by random selection but by a least-squares scheme. Experimental results on function approximation show that this approach obtains good generalization performance with compact networks.
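For illustration, the following is a minimal sketch of the basic ELM training procedure the abstract describes, assuming a sigmoid hidden-layer activation; the names (elm_train, elm_predict, n_hidden) are illustrative, not from the paper. Input weights and biases are drawn at random, and the output weights are obtained analytically via the Moore-Penrose pseudo-inverse, i.e., a least-squares solution.

import numpy as np

def elm_train(X, T, n_hidden, seed=None):
    # Train a basic ELM: random input weights and hidden-layer biases,
    # output weights determined analytically by least squares.
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                   # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                             # output weights: pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Example: function approximation on a 1-D target
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_train(X, T, n_hidden=30, seed=0)
print(np.mean((elm_predict(X, W, b, beta) - T) ** 2))  # training MSE

In the proposed ELS-ELM, the random choice of W and b above would instead be refined by a differential evolution search whose initial population is generated by a least-squares scheme, per the abstract; the full evolutionary loop is not shown here.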
