Abstract

Extreme learning machine (ELM) is a learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) that randomly chooses the hidden nodes and analytically determines the output weights. However, when dealing with large datasets, more hidden nodes are needed to improve training and testing accuracy; in this case the algorithm can no longer achieve high speed, and sometimes training cannot be executed at all because the bias matrix exceeds available memory. We address this issue by using the Rank Reduced Matrix (MMR) method to calculate the hidden layer output matrix. The results show that this method not only runs much faster but also improves generalization performance, regardless of whether the number of hidden nodes is large.
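For context, the baseline ELM training procedure the abstract refers to can be sketched as follows. This is a minimal illustration of standard ELM (random hidden parameters, output weights solved by the Moore-Penrose pseudoinverse), not the paper's MMR-based variant; the function names and the choice of tanh activation are illustrative assumptions.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=None):
    """Standard ELM training sketch (not the paper's MMR variant).

    Hidden-layer weights and biases are chosen at random; the output
    weights are then determined analytically via the pseudoinverse.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden layer output matrix
    beta = np.linalg.pinv(H) @ T                     # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass of the trained SLFN."""
    return np.tanh(X @ W + b) @ beta
```

The cost of forming and pseudo-inverting H grows with the number of hidden nodes, which is exactly the bottleneck (speed and memory) that the proposed rank-reduction approach targets.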
